Voting Results for the 2021 Review

Well, that's a wrap for the 2021 Review. We had 238 people cast votes. 452 posts were originally nominated, of which 149 posts received at least one review. The LessWrong moderation team will be awarding prizes and assembling posts into the Best of 2021 Books / Sequences soon. But for now, you can look here at the raw results.

Results

Voting is visualized here with dots of varying sizes (roughly indicating that a user thought a post was "good", "important", or "extremely important"). Green dots indicate positive votes. Red dots indicate negative votes. You can hover over a dot to see its exact score.

0. Strong Evidence is Common (Mark Xu)
1. “PR” is corrosive; “reputation” is not. (AnnaSalamon)
2. Your Cheerful Price (Eliezer Yudkowsky)
3. ARC's first technical report: Eliciting Latent Knowledge (paulfchristiano)
4. This Can't Go On (HoldenKarnofsky)
5. Rationalism before the Sequences (Eric Raymond)
6. Lies, Damn Lies, and Fabricated Options (Duncan_Sabien)
7. Fun with +12 OOMs of Compute (Daniel Kokotajlo)
8. What 2026 looks like (Daniel Kokotajlo)
9. Ngo and Yudkowsky on alignment difficulty (Eliezer Yudkowsky)
10. How To Write Quickly While Maintaining Epistemic Rigor (johnswentworth)
11. Science in a High-Dimensional World (johnswentworth)
12. How factories were made safe (jasoncrawford)
13. Cryonics signup guide #1: Overview (mingyuan)
14. Making Vaccine (johnswentworth)
15. Taboo "Outside View" (Daniel Kokotajlo)
16. All Possible Views About Humanity's Future Are Wild (HoldenKarnofsky)
17. Another (outer) alignment failure story pau ...
Meetup : West LA: Bias Bias

Discussion article for the meetup : West LA: Bias Bias

WHEN: 28 October 2015 07:00:00PM (-0700)

WHERE: Westside Pavilion Mall, 10850 Pico Blvd, Los Angeles, CA 90064

How to Find Us: We are meeting at the old location again this week, in the upstairs wine bar at Westside Pavilion.

Discussion: The phrase "bias bias" could mean many things. Perhaps one might employ the term to point to the tendency to accuse others of bias before oneself. Perhaps, as in this paper, it could refer to the tendency of statisticians to be overly concerned with eliminating statistical bias and under-concerned about variance. What I want to discuss is the risk that, if we are observing other decision-makers from the outside with less knowledge about the situation than they have, we will almost always find predictable irregularities in their decision-making which we cannot explain via our understanding of the situation. This will, I think, tend to be true whether they're "biased" in a significant sense or not. In other words: we're very likely to have less knowledge about the situation than the people making the decisions, and this is very likely to mislead us into thinking they're making biased decisions which are harming them, if we approach the question without sufficient awareness. This doesn't mean we can't assess bias, but it does sound a note of caution in doing so. Even in cases where the reasoning from our perspective seems very clear, the decision-maker may have other considerations to take into account.

Recommended Reading: I don't know of anything written specifically on this, but the recent "breaking Chesterton's fence in the presence of bull" seems relevant here.

No prior exposure to Less Wrong is required; all are welcome.
Open Problems in AI X-Risk [PAIS #5] *This is the fifth post in* [*a sequence of posts*](https://www.alignmentforum.org/posts/bffA9WC9nEJhtagQi/introduction-to-pragmatic-ai-safety-pragmatic-ai-safety-1) *that describe our models for Pragmatic AI Safety.* Dan Hendrycks (an author of this post), Nicholas Carlini, John Schulman, and Jacob Steinhardt previously wrote [Unsolved Problems in ML Safety](https://arxiv.org/abs/2109.13916) (2021), which lays out some of the most promising areas of research in ML safety. The paper is written for an academic audience; for the reasons discussed in previous posts, much of this audience would not have been receptive to a full discussion of existential risk. This post will present many of the same areas and a few more from the perspective of existential risk mitigation. While some of the areas are well known in the AI safety community (honest AI), and others are well known in the broader ML community (such as adversarial robustness), many remain extremely neglected (such as power-averseness and moral decision-making). We hope to explain why these areas are relevant to existential risk. The post will be presented mainly as a list of topics. The following table gives an overview of the problems with their importance, neglectedness, and tractability. This is not intended to be a list of all problems in AI safety, as we are focused on problems that are amenable to empirical ML research and where it is possible to [avoid capabilities externalities](https://www.alignmentforum.org/posts/dfRtxWcFDupfWpLQo/perform-tractable-research-while-avoiding-capabilities).   **![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/5HtDzRAk7ePWsiL2L/u5wt4pshn9ksw3rqc5jl)** Each area in this document will have the following sections: 1. Problem Description: a brief description of the problem 2. Motivation: an explanation of how the problem is relevant to AI x-risk. We include many different contributing motivations to each problem, and we do not believe that every individual motivation provides a decisive argument for the given area. Rather, the motivations together provide a good case for working on each of the areas. 3. What Researchers Are Doing Now: a list and explanation of prior work in the area. 4. What Advanced Research Could Look Like: a high-level overview of work that would make significant progress into this area. We mostly describe work that is likely to not be completed for at least a year or two. 5. Importance, Neglectedness, Tractability: a brief explanation of the ratings shown above. 6. Relation To General Capabilities: This section answers the question, “how much would progress in general capabilities help with this problem?” 7. Capabilities Externalities Analysis: This section answers the opposite question of the previous section, “how much would progress in this problem help with general capabilities?” As detailed in [our previous post](https://www.alignmentforum.org/posts/dfRtxWcFDupfWpLQo/perform-tractable-research-while-avoiding-capabilities), capabilities externalities should be minimal for research areas at scale. 8. Criticisms: This section covers reasons that people give to argue an area is not valuable. We don’t necessarily agree with all of the critiques, and present them mainly for epistemic humility and so readers are familiar with some arguments against these problem areas. 
Specific research project ideas for many of the areas are covered in [Unsolved Problems in ML Safety](https://arxiv.org/abs/2109.13916), but we refrain from including these ideas in this document for brevity. Alignment --------- We take Alignment to be about reducing inherent model hazards: hazards that result from models (explicitly or operationally) pursuing the wrong goals. Four concrete empirical research directions include honest AI, power-averseness, moral decision-making, and automated moral philosophy research. There are additional problems in alignment, but many have yet to be concretized. ### **Power-averseness** **Problem Description** This area is about incentivizing models to avoid gaining more power than is necessary. **Motivation** Strategic AIs tasked with accomplishing goals would have instrumental incentives to accrue and maintain power, as power helps agents more easily achieve their goals. Likewise, some humans would have incentives to build and deploy systems that acquire power, because such systems would be more useful. If power-seeking models are misaligned, they could permanently disempower humanity. If agents are given the power to single-handedly destroy humanity, a single system failure could result in an existential catastrophe. If power is instead distributed among multiple agents, failure could be decorrelated. This is more likely if agents are not constantly trying to overpower the others. [See Joe Carlsmith’s report for a more thorough motivation.](https://docs.google.com/document/d/1smaI1lagHHcrhoi6ohdq3TYIZv0eNWWZMPEy8C8byYg/edit#) **What Researchers Are Doing Now** We are currently working on developing power penalties, power limits, and taxonomizing and estimating model power. There has been some study of [power-seeking in general](https://arxiv.org/abs/1912.01683) and the [instrumental tendency to resist being shut off](https://arxiv.org/abs/1611.08219), but otherwise there have been no attempts at power-averseness. **What Advanced Research Could Look Like** Models could evaluate the power of other agents in the world to accurately identify particular systems that are attaining more power than necessary. They could also be used to directly apply a penalty to models so that they are disincentivized from seeking power. Before agents pursue a task, other models could predict the types of power and amount of power they require. Lastly, models might be developed which are intrinsically averse to seeking power despite the instrumental incentive to seek power. **Importance, Neglectedness, Tractability** Importance: ••• This could reduce many inherent hazards. Models that do not accrue too much power would be easier to shut down and correct, and they could cause less damage. Neglectedness: ••• Right now only a handful of people are working on this. Tractability: •• There is an abundance of low-hanging fruit on the technical front, since almost no work has been performed in this area. However, this area may be less tractable because it relies on sequential decision making, which is not as developed as other areas of machine learning. **Relation to General Capabilities** Power-averseness is likely harmed by upstream improvements. As power-seeking becomes a more viable strategy for goal achievement, it may be increasingly incentivized instrumentally. **Capabilities Externalities Analysis** Power-averseness would harm general capabilities by default, as it would restrict the options available to the model somewhat. 
While it may improve the safety–capabilities balance, we will need to continually work towards making power-averseness techniques increasingly robust to competition and productivity pressures. In any case, capabilities externalities are avoided by default. **Criticisms** This could make models less economically valuable and especially less valuable during war, which is a large obstacle. Power-averseness will face many challenges on the sociotechnical front. What military wants this functionality, unless there is international coordination? What entity wants to limit its power? It is impossible to overcome or counteract the instrumentally convergent drive for power, so efforts to do this will inevitably fail. ### **Honest AI** **Problem Description** Honest AI involves creating models that only output what they hold to be true. It also involves determining what models hold to be true, perhaps by analyzing their internal representations.[[1]](#fncwbe8cxgif) **Motivation** If it is within a model's capacity to be strategically deceptive (i.e. able to make statements that the model in some sense knows to be false in order to gain an advantage) then treacherous turn scenarios are more feasible. Models could deceive humans about their plans, and then execute them once humans are no longer able to course-correct. Plans for a treacherous turn could be brought to light by detecting dishonesty, or models could be made inherently honest, allowing operators to query them about their true plans. Other motivation formulations: * We would like to prevent models from producing deceptive information. * If models can be made honest and only assert what they believe, then they can produce outputs that are more representative and give human monitors a more accurate impression of their beliefs. * Honesty helps facilitate cooperation among AIs, so it enables possibilities in cooperative AI. Honesty also undercuts collusion. **What Researchers Are Doing Now** They are demonstrating that models can lie, and they are capturing true and false clusters inside models (this paper is forthcoming). **What Advanced Research Could Look Like** Good techniques could be able to reliably detect when a model's representations are at odds with its outputs. Models could also be trained to avoid dishonesty and allow humans to correctly conclude that models are being honest with high levels of certainty. **Importance, Neglectedness, Tractability** Importance: ••• This could reduce many inherent hazards. If models were completely honest, deception would be far more difficult, thereby greatly reducing the probability of a whole class of failure modes. Neglectedness: ••• This is a new area, though the idea of “faithful outputs” is a very easy sell to the ML community, so its neglectedness will probably decrease soon. Tractability: •• Research is in its early stages, but there is some initial (forthcoming) research that makes progress. **Relation to General Capabilities** There is not much evidence that honesty improves with general capabilities by default. In fact, dishonesty may become more of a viable strategy for models with more ability to succeed at it. **Capabilities Externalities Analysis** Honesty is a narrower concept than truthfulness and is deliberately chosen to avoid capabilities externalities, since truthful AI is usually a combination of vanilla accuracy, calibration, and honesty goals. Optimizing vanilla accuracy is optimizing general capabilities, and we cover calibration elsewhere. 
When working towards honesty rather than truthfulness, it is much easier to avoid capabilities externalities. Lie detection could uncover hidden knowledge in models, which could potentially be used to elicit more advanced functionality. Techniques that focus on a model’s concept of truth rather than querying its knowledge might help avoid this. **Criticisms** Current works use contrived situations and goad models to produce lies rather than studying lies produced under realistic conditions. The current honesty tools are very fragile. Adaptive, self-aware models in the future could circumvent current honesty techniques. If we select heavily for models that are powerful and honest, we might select for models that do not know that their general tendency is to gain the upper hand but which still do have that tendency (models for which deceptiveness becomes an unknown known rather than a known known). In other words, power-seeking might become “unconscious” rather than “conscious.” Honesty could make models very unpleasant since “[the truth is terrible](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2099162)”. An extremely honest model might become a cynical [activist](https://machinethoughts.wordpress.com/2018/09/18/superintelligence-and-the-truth/) that tears down everyone’s fundamental beliefs, so full honesty may be neither economically attractive nor compatible with human psychological safety. ### **Implementing Moral Decision-Making** **Problem Description** This area is about building models to understand ethical systems and steering models to behave ethically. This research area includes a few strategies: * Model intrinsic goods and normative factors, as these will be relevant even under extreme world changes. This is in contrast to task preferences; book summarization preferences are less fundamental than core human values, and their relevance is more fragile under extreme world changes. * Given moral systems, get models to abide by them with an artificial conscience or other forms of endogenous self-regulation. * Implement an automated [moral parliament](https://www.fhi.ox.ac.uk/wp-content/uploads/2021/06/Parliamentary-Approach-to-Moral-Uncertainty.pdf) to have models act appropriately in the face of moral uncertainty (a minimal sketch of this idea appears below). A generalization of this area is [machine ethics](https://plato.stanford.edu/entries/ethics-ai/#MachEthi), which is about making models that act ethically; this is an alternative formulation of Alignment. **Motivation** This line of work helps create actionable ethical objectives for systems to pursue. If strong AIs are given objectives that are poorly specified, they could pursue undesirable actions and behave unethically. If these strong AIs are sufficiently powerful, these misspecifications could create an existential catastrophe. Consequently, work in this direction helps us avoid proxy misspecification as well as value lock-in. If our foremost goal is reducing the probability of destroying or permanently curtailing the potential of humanity, then it seems to make most sense to focus on aligning AI to the most important and time-tested values, namely those considered in normative ethics. Other potential motivations: * Robustness is easier to achieve in a limited area than it is to achieve across a very wide range of tasks. If there’s one place we want to ensure robustness, it’s moral decision-making. * An artificial conscience can block morally suspect actions in AI systems by having direct access to action choices. 
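To make the moral parliament idea above more concrete, here is a minimal, heavily simplified sketch of how such an aggregator might weigh candidate actions under moral uncertainty. The delegate names, credences, and scoring functions are hypothetical placeholders (in practice they would be learned models), so treat this as an illustration of the aggregation logic rather than a proposed implementation.

```python
# Minimal sketch of a moral-parliament-style aggregator (hypothetical example).
# Each "delegate" is a moral theory with a credence (voting weight) and a
# scoring function over candidate actions; the parliament picks the action
# with the highest credence-weighted score, abstaining if overall support is low.

from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Delegate:
    name: str
    credence: float                   # weight reflecting uncertainty over theories
    score: Callable[[str], float]     # maps an action description to [-1, 1]

def moral_parliament(action_candidates: List[str],
                     delegates: List[Delegate],
                     abstain_threshold: float = 0.0) -> Optional[str]:
    """Return the action with the highest credence-weighted score,
    or None (defer to a human) if no action clears the threshold."""
    total = sum(d.credence for d in delegates)
    best_action, best_score = None, float("-inf")
    for action in action_candidates:
        weighted = sum(d.credence * d.score(action) for d in delegates) / total
        if weighted > best_score:
            best_action, best_score = action, weighted
    return best_action if best_score > abstain_threshold else None

# Hypothetical delegates; real scoring functions would be learned models.
delegates = [
    Delegate("utilitarian", 0.5, lambda a: 1.0 if "helps many" in a else -0.2),
    Delegate("deontological", 0.3, lambda a: -1.0 if "deceives" in a else 0.5),
    Delegate("virtue", 0.2, lambda a: 0.3),
]

print(moral_parliament(["helps many people", "deceives the operator"], delegates))
```

The abstain path is the load-bearing part of the design: when no action clears the threshold, the decision is deferred to human stakeholders rather than maximized through.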
**What Researchers Are Doing Now** They are [predicting when there is high moral disagreement for a scenario](https://arxiv.org/abs/2008.02275). They are modeling normative factors and intrinsic goods, implementing foundational ethical theories, modeling the provenance of utility functions, modeling [exceptions](https://arxiv.org/abs/2210.01478) to moral rules, researching how to more effectively steer artificial agents’ actions (such as through an [artificial conscience](https://arxiv.org/abs/2110.13136)), and so on. **What Advanced Research Could Look Like** High-functioning models should detect situations where the moral principles apply, assess how to apply the moral principles, evaluate the moral worth of candidate actions, select and carry out actions appropriate for the context, monitor the success or failure of the actions, and adjust responses accordingly. Models could represent various purported intrinsic goods, including pleasure, autonomy, the exercise of reason, knowledge, friendship, love, and so on. Models should be able to distinguish between subtly different levels of these goods, and these value functions should not be vulnerable to optimizers. Models should be able to create pros and cons of actions with respect to each of these values, and brainstorm how changes to a given situation would increase or decrease the amount of a given intrinsic good. They should also be able to create superhuman forecasts of how an action can affect these values in the long-term (e.g., how studying can reduce wellbeing in the short-term but be useful for wellbeing in the long-term), though this kind of research must be wary of capabilities externalities. Models should also be able to represent more than just intrinsic goods, as they should also be able to represent constantly-updating legal systems and normative factors including special obligations and deontological constraints. Another possible goal is to create an automated moral parliament, a framework for making ethical decisions under moral and empirical uncertainty. Agents could submit their decisions to the internal moral parliament, which would incorporate the ethical beliefs of multiple stakeholders in informing decisions about which actions should be taken. Using a moral parliament could reduce the probability that we are leaving out important normative factors by focusing on only one moral theory, and the inherent multifaceted, redundant, ensembling nature of a moral parliament would also contribute to making the model less gameable. If a component of the moral parliament is uncertain about a judgment, it could request help from human stakeholders. The moral parliament might also be able to act more quickly to restrain rogue agents than a human could and act in the fast-moving world that is likely to be induced by more capable AI. We don’t believe the moral parliament would solve all problems, and more philosophical and technical work will be needed to make it work, but it is a useful goal for the next few years. **Importance, Neglectedness, Tractability** Importance: ••• Models that make decisions with regard for their morality would be far less likely to cause catastrophes. Neglectedness: ••• A handful of people are working on this. Tractability: •• So far, there has been continual progress. **Relation to General Capabilities** Moral decision-making can benefit from upstream capabilities. Models that are better able to understand the world in general will be more able to understand how morality fits into that world. 
Better predictive power would also enable better modeling of consequentialist moral theories. **Capabilities Externalities Analysis** This has similarities with task preference learning (“I like this Netflix movie”; “I like this summary more”), but the latter has obvious externalities: humans prefer smarter models. Instead, we try to model normative factors and intrinsic goods. Task preferences are less robust and relevant under extreme environmental changes, compared to enduring human values such as normative factors (wellbeing, impartiality, etc.) and the factors that make up a good life (pursuing projects, gaining knowledge, etc.). Capabilities externalities are readily avoidable, provided that one is not modeling task preferences. We should model consequentialist theories using pre-existing general world model capabilities, rather than try to build better predictive world models for the sake of modeling consequentialist theories. Doing so will keep capabilities externalities to a minimum. **Criticisms** In order to do groundbreaking work, it is useful for the researcher to have taken a course or two in normative ethics. Few have, which makes this area less accessible. In addition, many researchers weak in normative ethics might attempt research and produce low-quality yet influential work. Models should learn to do what humans would do, not abide by abstract moral theories. Morality is not a good model of what humans actually do, and we should care about how humans really behave rather than how they think they should behave. “Alignment” should not be associated with ethics because that will intimidate technical researchers; this is why we should talk about task preferences. Ethics is not yet resolved enough to leave anything up to a machine. It would thus be better to model preferences (implicitly, preference utilitarianism) rather than complicated explicit moral theories. Developing systems that perform moral decision making could reduce the influence of humans in making ethical decisions, reducing our autonomy. ### **Automated Moral Philosophy Research (Value Clarification)** **Problem Description** This area is about building AI systems that can perform moral philosophy research. This research area should utilize existing capabilities and avoid advancing general research, truth-finding, or contemplation capabilities. **Motivation** The future will sharpen unsolved ethical questions about our values and objectives and force us to confront them. In recent decades, people’s values have evolved by confronting philosophical questions, including whether to infect volunteers for science, how to equitably distribute vaccines, the rights of people with different orientations, and so on. How are we to act if many humans spend most of their time chatting with compelling bots and not much time with humans, and how are we to balance pleasure and enfeeblement? Determining the right action is not strictly scientific in scope, and we will need philosophical analysis to help us correct structural faults in our proxies. To address deficiencies in our moral systems, and to more rapidly and wisely address future moral quandaries that humanity will face, these research systems could help us reduce risks of value lock-in by improving our moral precedents earlier rather than later. If humanity does not (or cannot) take a “long reflection” to consider and refine its values after it develops strong AI, then the value systems lying around may be amplified and propagated into the future. 
Value clarification reduces risks from locked-in, deficient value systems. Additionally, value clarification can be understood as a way to reduce proxy misspecification, as it can allow values to be updated in light of new situations. We will need to decide what values to pursue and how to pursue them. If we decide poorly, we may lock in or destroy what is of value. It is also possible that there is an [ongoing moral catastrophe](https://link.springer.com/article/10.1007/s10677-015-9567-7), which we would not want to replicate across the cosmos. **What Researchers Are Doing Now** Nothing. **What Advanced Research Could Look Like** Good work in value clarification would be able to produce original insights in philosophy, such that models could make philosophical arguments or write seminal philosophy papers. Value clarification systems could also point out inconsistencies in existing ethical views, arguments, or systems. **Importance, Neglectedness, Tractability** Importance: ••• This would reduce many ontological or systematic misalignment errors. Neglectedness: ••• No one is working on this. Tractability: • This is a challenging problem. (Possible intermediate steps are described in Unsolved Problems.) **Relation to General Capabilities** General upstream research capabilities will make this problem more tractable. Philosophical/fuzzy reasoning ability and raw intelligence seem distinct; by default high-IQ educated people are not especially good at reasoning about fuzzy abstract objects. **Capabilities Externalities Analysis** We do not aim to make models superhuman at research generally, only superhuman at moral philosophy. We turn to moral philosophy research rather than general research or general-purpose wide reflective equilibrium approximators since moral philosophy research can have few capabilities externalities, while the latter proposals have extremely high superhuman capabilities externalities. Poorly directed research in this direction could have high capabilities externalities, but it is likely this problem will remain neglected relative to general truth-seeking/contemplation/reflection methods, so the best way to improve automated moral philosophy research using the marginal researcher is to use existing capabilities and apply them to this problem. **Criticisms** There is no progress in moral philosophy. Alternatively, perhaps moral philosophy does not provide any useful insights about what the world ought to look like or what agents ought to do. If normative ethics has provided no insight, it will not be useful to model it. Intelligence will straightforwardly result in expert philosophical ability, so we do not need to specifically work on it. We can just wait for the long reflection to correct structural forms of misalignment, assuming we get to the long reflection. The danger of lock-in is overrated. Nobody will trust works of philosophy that include genuine moral progress if they are generated by a machine. Robustness ---------- We take Robustness to be about reducing vulnerabilities to hazards from sources other than the model itself. Robustness focuses on responding to abnormal, unforeseen, unusual, highly impactful, or adversarial events. Adversarial robustness does not currently have economically feasible scaling laws, even for extremely simplistic adversaries. Even on MNIST, an extremely simple dataset, no model has human-level robustness, and we haven’t been able to get any perception task that requires machine learning to a human level of robustness. 
Robustness is not the same thing as in-distribution accuracy: even if a model gets higher in-distribution accuracy, it does not necessarily provide high robustness. Simply extrapolating the existing trend does not yield reliability. Tesla is trying to improve robustness with petabytes of task-specific data, and yet the problem remains unsolved. ### **Adversarial Robustness** **Problem Description** Adversarial examples demonstrate that optimizers can easily manipulate vulnerabilities in AI systems and cause them to make egregious mistakes. Adversarial vulnerabilities are long-standing weaknesses of AI models. While typical adversarial robustness is related to AI x-risk, future threat models will be broader than today’s adversarial threat models. Since we are concerned about being robust to optimizers that cause models to make mistakes generally, we should make minimal assumptions about the properties of the adversary and work to make models that are robust to many kinds of attacks. **Motivation** In the future, AI systems may pursue goals specified by other AI proxies. For example, an AI could encode a proxy for human values, and another AI system could be tasked with optimizing the score assigned by this proxy. If the human value proxy is not robust to optimizers, then its vulnerabilities could be exploited, so this gameable proxy may not be fully safe to optimize. If the reliability of learned human value proxies is improved, optimizers and adversaries will have a harder time gaming these systems. If gaming becomes sufficiently difficult, the optimizer can be impelled to optimize the objective correctly. Separately, humans and systems will monitor for destructive behavior, and these monitoring systems need to be robust to adversaries. We often study adversarial robustness in the continuous domain (vision) because gradient descent is powerful in that setting. In contrast, adversarial attacks for text-based models are weaker and often not gradient-based. We’d like to defend against the most powerful adversaries since, as models get more intelligent, the attacks will get more powerful. Gradient attacks for vision systems are already extremely strong, so they give us a more direct view of how to defend against powerful optimizers. Other motivation formulations: * Adversarial robustness is helpful for safety community building efforts and improving safety culture. * This reduces risk by reducing vulnerability to hazards. (Risk\_Hazard = Hazard x Exposure x Vulnerability) * Adversarial robustness is necessary for extreme and worst-case reliability. **What Researchers Are Doing Now** * Producing new distortions of given images [[1](https://arxiv.org/abs/2003.01690),[2](https://arxiv.org/abs/1802.00420),[3](https://arxiv.org/abs/1608.04644)] and text [[1](https://arxiv.org/abs/2104.13733),[2](https://arxiv.org/abs/1707.07328)] as adversarial examples. * Finding ways to robustify models (adversarial training improvements) [[1](https://arxiv.org/abs/2111.05328),[2](https://arxiv.org/abs/2006.14536),[3](https://arxiv.org/abs/1902.02918),[4](https://arxiv.org/abs/1901.09960),[5](https://arxiv.org/abs/1801.09344),[6](https://arxiv.org/abs/1706.06083)] * Adversarially constructed datasets [[1](https://arxiv.org/abs/1910.14599),[2](https://arxiv.org/abs/1907.07174)] **Importance, Neglectedness, Tractability** Importance: ••• High reliability is necessary for safe superintelligence. Work on this problem is a necessary component of high reliability. 
Neglectedness: • Numerous researchers are working in this area. However, aspects are neglected. Adversarial robustness for larger-scale systems is rarely studied due to academic researchers lacking the necessary compute. Likewise, long-term threat models are underexplored: attack specifications may not be known beforehand, and attack budgets could be large. Tractability: •• There are shovel-ready problems and the research is (slowly) progressing. **What Advanced Research Could Look Like** Ideally, an adversarially robust system would make reliable decisions given adversarially constructed inputs, and it would be robust to adversaries with large attack budgets using unexpected novel attacks. Furthermore, it should detect adversarial behavior and adversarially optimized inputs. A hypothetical human value function should be as adversarially robust as possible so that it becomes safer to optimize. A hypothetical human value function that is fully adversarially robust should be safe to optimize. **Relation to General Capabilities** Adversarial robustness is barely helped by upstream capabilities. **Capabilities Externalities Analysis** For vision, nearly all methods in adversarial robustness do not improve general capabilities such as clean accuracy, so in this problem area it is quite easy to avoid capabilities externalities. Furthermore, adversarial training for vision currently dramatically reduces vanilla classification accuracy. In text, automated or virtual adversarial attacks don’t markedly improve general capabilities much. However, human-crafted natural adversarial examples for text models seem to improve general capabilities. **Criticisms** Most research uses particular types of perturbations on images and the main field ignores more important things (unforeseen attacks, large distortions). Currently, there does not appear to be that much knowledge transfer between continuous signal (vision) and discrete sequence (text) adversarial robustness research. It’s unclear how big of a role continuous signals will play for strong AI outside perception. Current adversarial robustness methods have very large general capabilities costs. Currently it’s not orthogonal to general capabilities–it’s anticorrelated. While researchers are iteratively reducing the cost, it could still be a costly safety measure in the future. The instant two intelligent agents can reason about each other–regardless of their goals–they will necessarily collude. Adversarial robustness, in trying to improve the defensive capabilities of proxy models against other models, will not be relevant if the models collude. Monitoring ---------- We take monitoring to mean avoiding exposure to hazards as much as possible, such as by detecting problems in systems before they grow worse. ### **Anomaly Detection** **Problem Description** This area is about detecting potential novel hazards such as unknown unknowns, unexpected rare events, or emergent phenomena. Anomaly detection (also known as out-of-distribution detection) can allow models to flag salient anomalies for human review or execute a conservative fallback policy. **Motivation** This is an indispensable tool for detecting a wide range of hazards. 
For example: * [proxy gaming](https://openreview.net/pdf?id=JYtwGwIL7ye) * rogue ML systems that are already causing harm at smaller scales * deceptive ML systems not easily detectable by humans * Trojan horse models (discussed below) * malicious users who may attempt to intentionally misalign a model, or align it to their own nefarious ends * early signs of dangerous novel technologies * early misaligned systems (AI tripwires could help uncover these before they can cause damage) * emergent behavior This approach operates by virtue of being able to detect other hazards before they occur or before they can cause more damage. While in an ideal world the other hazards do not occur at all, in reality any serious attempt at safety must build in some mechanisms to detect failures to prevent hazards. Early detection is crucial to being able to successfully stop the hazard before it becomes impossible to do so. In addition to helping with AI x-risk, anomaly detection can also be used to help detect novel engineered microorganisms that present biological x-risks, perhaps by having image and sequence anomaly detectors scan hospitals for novel pathogens (see this paper for an [example](https://arxiv.org/abs/1906.02845) of anomaly detection involving genomic sequences). Anomaly detection could also help detect [Black Balls](https://onlinelibrary.wiley.com/doi/full/10.1111/1758-5899.12718) (“a technology that invariably or by default destroys the civilization that invents it”). Other motivation formulations: * This reduces risk by reducing exposure to hazards. (Risk\_Hazard = Hazard x Exposure x Vulnerability) * Detection has a central place in Hazard Analysis (e.g., Failure Modes and Effects Analysis). It is a customary way of improving defense in depth. * This can help flag examples, models, or trajectories for human review. * This helps systems fail gracefully by enabling us to implement triggers for fail-safes or conservative fallback policies. This reduces the probability of hard-maximizing models going off the rails. * This can be used to detect malicious use (malicious use that employs novel, sly tactics is harder to detect). As before, malicious use could be from AIs or humans. * Various other systems have this as a main line of defense ([high reliability organizations](https://arxiv.org/abs/1811.10840), information security operations centers, the human body, etc.), and it has a central place in Hazard Analysis by creating defense in depth. * An approach towards safety is building conservative agents that refrain from taking unprecedented actions. Unprecedented actions could be blocked by anomaly detectors. **What Researchers Are Doing Now** * Out-of-distribution detection aims to identify when new input data is out of the original training distribution, and flag this for possible review [[1](https://arxiv.org/abs/2203.10807),[2](https://arxiv.org/abs/2202.01197),[3](https://arxiv.org/abs/1911.11132),[4](https://arxiv.org/abs/1807.03888),[5](https://arxiv.org/abs/1812.04606),[6](https://arxiv.org/abs/1610.02136)]. 
* One-class learning aims to use data from a single distribution to train a classifier to detect when new data is out-of-distribution [[1](https://arxiv.org/abs/1906.12340),[2](https://arxiv.org/abs/2007.08176)] * More applied subproblems: For text processing, researchers [detect](https://proceedings.neurips.cc/paper/2019/file/1e79596878b2320cac26dd792a6c51c9-Paper.pdf#page=7) if genetic instructions belong to a new species. In the future, researchers can try to detect whether a utility function is applicable to a given text input (does this scenario describe a sentient being or something else)? In vision, researchers [detect](https://arxiv.org/abs/1906.02845) if an image contains an organism species that the model has not seen before (this includes microorganisms). **What Advanced Research Could Look Like** A successful anomaly detector could serve as an AI watchdog that could reliably detect and triage rogue AI threats. When the watchdog detects rogue AI agents, it should do so with substantial lead time. Anomaly detectors should also be able to straightforwardly create tripwires for AIs that are not yet considered safe. Furthermore, advanced anomaly detectors should be able to help detect black balls. Anomaly detectors should also be able to detect biological hazards by increasing detection lead time by having anomaly detectors continually scan hospitals for novel biological hazards. **Importance, Neglectedness, Tractability** Importance: ••• This cross-cuts many relevant problems. Neglectedness: •• After many years of community building, there are many researchers working in this area. The most cited paper in the area only has somewhat more than 1,000 citations, whereas there is an adversarial robustness paper with more than 10,000 citations. Adversarial anomaly detection is more neglected. Anomaly detection for text-based models is also more neglected. Tractability: •• Detecting image anomalies is one of the hardest problems in image understanding, along with adversarial robustness. Progress is even slower than in adversarial robustness, though that may be because the field is smaller. **Relation to General Capabilities** It’s somewhat helped by upstream capabilities, but extrapolating accuracy to 100% does not come close to solving anomaly detection even in simple, non-adversarial settings. [It is harmed by other common upstream methods (e.g., mixup).](https://arxiv.org/abs/2003.03879) **Capabilities Externalities Analysis** Most research in anomaly detection takes fixed models and tries to extract better anomaly scores. When models are fixed, capabilities externalities are largely avoided. Other research changes the representation of models to make them more suitable for anomaly detection; these methods are not that useful for making models more generally capable. In this setting, externalities are especially easy to avoid. **Criticisms** Detecting a rogue model at the point when it’s virtually unstoppable will not help us stop it. By the time something is going wrong, we are already dead. This does not directly reduce inherent model hazards; it just detects them: this could be used to detect AI deception, but it would not directly make the system desire to be honest. Sometimes this is just a protective measure and not a preventative measure. For making other high risk technologies safer, protective measures are less desirable than preventative measures. Along with general progress in AI, this increases the feasibility of totalitarianism at the cost of better responding to hazards. 
When models are more intelligent, anomalies will be easy to detect. (However, it’s quite difficult to argue that even a superintelligence could detect all Black Swans the moment they start to emerge and all malicious actors employing a novel strategy and that detection lead time cannot be increased.) Researchers are only slightly focused on adversarial anomaly detection. ### **Interpretable Uncertainty** **Problem Description** This area is about making model uncertainty more interpretable and calibrated by adding features such as confidence interval outputs, conditional probabilistic predictions specified with sentences, posterior calibration methods, and so on. **Motivation** If operators ignore system uncertainties since the uncertainties cannot be relied upon or interpreted, then this would be a contributing factor that makes the overall system that monitors and operates AIs more hazardous. To draw a comparison to chemical plants, improving uncertainty calibration could be similar to ensuring that chemical system dials are calibrated. If dials are uncalibrated, humans may ignore the dials and thereby ignore warning signs, which increases the probability of accidents and catastrophe. Furthermore, since many questions in normative ethics have yet to be resolved, human value proxies should incorporate moral uncertainty. If AI proxies for human values have appropriate uncertainty, there is a reduced risk of a human value optimizer maximizing towards ends of dubious value. Other reasons: * Calibrated models can better convey the limits of their competency by expressing their uncertainty, so human operators can know when to override models. Calibrated probabilities facilitate rational decision making: + Improved probability estimates matter for high-stakes decisions + Improved risk estimates (probabilities multiplied by losses) * ML subsystems are easier to integrate if each system is well-calibrated * Model confidences are more interpretable the more they are calibrated * This helps systems fail gracefully by enabling us to implement triggers for fail-safes or conservative fallback policies. **What Researchers Are Doing Now** They are measuring model miscalibration on typical examples and in the face of distribution shifts and adversarial examples [[1](https://arxiv.org/abs/2112.05135),[2](https://arxiv.org/abs/1906.02530),[3](https://arxiv.org/abs/1706.04599),[4](https://arxiv.org/abs/1612.01474),[5](https://arxiv.org/abs/1508.05154)]. **What Advanced Research Could Look Like** Future models should be calibrated on inherently uncertain, chaotic, or computationally prohibitive questions that extend beyond existing human knowledge. Their uncertainty should be easily understood by humans. Moreover, given a lack of certainty in any one moral theory, AI models should accurately and interpretably represent this uncertainty in human value proxies. **Importance, Neglectedness, Tractability** Importance: •• This is an important part of interpretability. Neglectedness: • Many people are working on it, maybe half an order of magnitude more than anomaly detection. Calibration in the face of adversaries is highly neglected, as are new forms of interpretable uncertainty: having models output confidence intervals, having models output structured probabilistic models (e.g., “event A will occur with 60% probability assuming event B also occurs, and with 25% probability if event B does not”). Tractability: •• There are shovel-ready tasks, and the community is making progress on this problem. 
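Since the next paragraph contrasts the Brier score with calibration-specific metrics, here is a minimal sketch of the standard equal-width-binned expected calibration error (ECE) on synthetic data; the binning scheme and data are illustrative assumptions, not the exact formulation used in any particular paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Equal-width-binned ECE: weighted average gap between mean confidence
    and empirical accuracy within each confidence bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# Synthetic example: an overconfident classifier whose accuracy lags its confidence.
rng = np.random.default_rng(0)
conf = rng.uniform(0.7, 1.0, size=1000)               # reported confidence
correct = rng.uniform(0, 1, size=1000) < conf - 0.15  # empirical correctness
print(f"ECE: {expected_calibration_error(conf, correct):.3f}")  # roughly 0.15
```

Because ECE compares confidence with accuracy bin by bin, it isolates miscalibration rather than mixing it with raw accuracy, which is the distinction drawn below.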
**Relation to General Capabilities** It’s often helped by some upstream capabilities, but it is harmed by other upstream methods (e.g., mixup). Also note the Brier score metric is much more correlated with upstream capabilities metrics such as vanilla accuracy than calibration metrics including the expected calibration error (ECE) or RMS calibration error. See a discussion of how the Brier score is a mixture of an accuracy component and a calibration component in this [EMNLP paper](https://arxiv.org/abs/1508.05154). Consequently, safety-minded researchers should avoid tangling under- and over-confidence with accuracy and therefore avoid using the Brier score as the main summary for calibration. Fortunately most of the ML community uses the disentangled metrics. **Capabilities Externalities Analysis** Many calibration methods try to make fixed models more calibrated, and these techniques leave the representations and accuracy unchanged. By default, calibration research leaves capabilities unchanged. **Criticisms** Like work in transparency, this helps human operators and inspectors, but it does not directly reduce inherent hazards. Many of the impacts are indirect or sociotechnical, but we should only support work that has direct impact (linear causal influence). ### **Trojan Horse Models** **Problem Description** AI systems can contain “trojan” hazards. Trojaned models behave typically in most situations, but when specific secret situations are met, they reliably misbehave. For example, an AI agent could behave normally, but when given a special secret instruction, it could execute a coherent and destructive sequence of actions. In short, this area is about identifying hidden functionality embedded in models that could precipitate a treacherous turn. **Motivation** One of the most dangerous sources of risk from advanced AI is sudden, unexpected changes in behavior. Similar to how people can hide their true intentions, a misaligned AI could bypass oversight mechanisms through deception. If we can uncover hidden behavior and predict treacherous turns before they happen, this will mitigate several failure modes. **What Researchers Are Doing Now** They are developing Trojan attacks and defenses. Most existing work uses CV datasets and models as a testbed [[1](https://arxiv.org/abs/2106.09667),[2](https://arxiv.org/abs/1910.03137),[3](https://arxiv.org/abs/1906.10842),[4](https://people.cs.uchicago.edu/~ravenben/publications/pdf/backdoor-sp19.pdf),[5](https://arxiv.org/abs/1902.06531),[6](https://arxiv.org/pdf/1712.05526.pdf),[7](https://arxiv.org/pdf/1708.06733.pdf)], but recent work is beginning to explore Trojans for NLP models [[1](https://pages.nist.gov/trojai/),[2](https://arxiv.org/abs/1910.03137)]. A much smaller number of papers explores Trojans for RL [[1](https://arxiv.org/abs/2201.00762)]. There is also related work on emergent behaviors in RL and emergent capabilities in large language models, which explores different aspects of hidden functionality. **Importance, Neglectedness, Tractability** Importance: ••• Treacherous turns from advanced AI systems are a significant source of x-risk. Starting work on this problem early is important, and Trojan research is one way to make initial progress. Neglectedness: •• The field of Trojans in deep learning is 5 years old, but there is still much left to be done. 
The field is not commonly associated with safety, so there is an opportunity to focus the field towards greater x-risk relevance and create a path for safety researchers to gain career capital. Tractability: •• Analyzing and detecting Trojans is an early field with much low-hanging fruit. It is a standard ML research problem with emerging benchmarks that can be iterated on. However, the problems of detecting and reverse-engineering Trojans are broad in scope and challenging. **Relation to General Capabilities** We are not aware of more accurate models affecting Trojan attacks or defenses.  **Capabilities Externalities Analysis** Trojan research is unlikely to impact general capabilities, because most of the work is developing attacks and defenses. The former work is highly specific to Trojans and unlikely to transfer, and the latter work is relevant to safety. Conversely, improvements to general capabilities might yield useful demonstrations of hidden functionality and emergent capabilities that could make this research easier. However, this doesn’t appear to have happened yet. **Criticisms** Trojans might not generalize to real treacherous turns. Current Trojans are mostly about flipping predicted classes, but real hidden behavior might be more complex. Even future harder versions of the problem may not solve treacherous turns. Real hidden behavior might be easy to predict, e.g. malintent can simply be read off from the AI’s train of thought, particularly if progress has been made on honesty. Large-scale deception will be hard for strong AIs to maintain. Real hidden behavior might be hidden very differently from human-created Trojans, and so Trojan detection methods might not help. ### **Transparency** **Problem Description** AI systems are becoming more complex and opaque. This area is about gaining clarity about the inner workings of AI models and making models more understandable to humans. **Motivation** If humans lose the ability to meaningfully understand ML systems, they may no longer retain their sovereignty over model decisions. Transparency tools could help unearth deception, mitigating risks from dishonest AI and treacherous turns.  This is because some speculate that deception could become inadvertently incentivized, and if models are capable planners, they may be skilled at obscuring their deception. Similarly, researchers could develop transparency tools to detect poisoned models, models with trojans, or models with other latent unexpected functionality. Moreover, transparency tools could help us better understand strong AI systems, which could help us more knowledgeably direct them and anticipate their failure modes. Other motivation formulations: * Transparency helps facilitate cooperation among AIs, so it enables possibilities in cooperative AI. Transparency also undercuts collusion. * Transparency could make it easier for models to detect deception and other problems in other models, which could improve monitoring. **What Researchers Are Doing Now** People are [critiquing](https://arxiv.org/abs/1810.03292) [transparency](https://arxiv.org/pdf/2010.12606.pdf) [methods](https://arxiv.org/abs/1606.03490), [analyzing superhuman game AIs](https://arxiv.org/abs/2111.09259), and [looking for mechanisms inside models](https://transformer-circuits.pub/2021/framework/index.html). This is quite a simplification. There are many researchers in this space, but there aren’t many coherent clusters of research (save for areas known to be bad, such as nearly all work on saliency maps). 
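To give a flavor of the "looking for mechanisms inside models" line of work, here is a minimal sketch of feature visualization by activation maximization on a toy, untrained convnet; real interpretability work applies this to trained large models with much stronger regularization, so this only illustrates the basic optimization loop.

```python
import torch
import torch.nn as nn

# Toy convnet; in practice this would be a trained vision model.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)
model.eval()

def visualize_channel(model, channel, steps=200, lr=0.1):
    """Gradient-ascend an input image to maximize the mean activation
    of one channel in the final layer (basic activation maximization)."""
    img = torch.randn(1, 3, 64, 64, requires_grad=True)
    optimizer = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        acts = model(img)
        loss = -acts[0, channel].mean()   # maximize activation => minimize its negative
        loss = loss + 1e-4 * img.norm()   # mild regularizer to keep pixel values bounded
        loss.backward()
        optimizer.step()
    return img.detach()

pattern = visualize_channel(model, channel=0)
print(pattern.shape)  # torch.Size([1, 3, 64, 64])
```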
**What Advanced Research Could Look Like** Successful transparency tools would allow a human to predict how a model would behave in various situations without testing it. These tools should be easy to apply (ex ante and ex post emergence) to unearth deception, emergent capabilities, and failure modes. To help make models more transparent, future work could try to provide clarity about the inner workings of models and to make model decisions understandable. Another line of valuable work is critiquing explainability methods and trying to show limitations of auditing methods. Measuring similarities and differences between internal representations is also an important step toward understanding models and their latent representations. **Importance, Neglectedness, Tractability** Importance: ••• If we could intuitively understand what models are doing, then they’d be far more controllable. Neglectedness: • This is highly funded by numerous stakeholders, and it has a large community. Deep nets are famous for being “black boxes,” and this limits their economic utility due to concerns about human oversight (such as in medical applications). Tractability: • This area has been struggling to find a solid line of attack throughout its existence. It has set goals for itself, and it has not met them (e.g., using transparency tools to find special functionality implanted by another human). **Relation to General Capabilities** The historical trend is that models are becoming more opaque as time progresses. We went from decision trees and SVMs, to random forests (intellectually unmanageable), to ConvNets, to Vision Transformers (where the lack of channels makes feature visualizations worse). “In prediction, accuracy and simplicity (interpretability) are in conflict” ([Breiman, 2001](http://www2.math.uu.se/~thulin/mm/breiman.pdf)). **Capabilities Externalities Analysis** This has not been useful for addressing weaknesses in models, and the “insights” gleaned from transparency techniques have not helped researchers understand how to improve models. It currently is not having any impact on capabilities research. (It is worth noting that if “transparency” is construed more broadly to include anything involving “understanding models better,” then some approaches may have capabilities externalities, such as studying general capability scaling laws. We do not include this as a transparency problem.) **Criticisms** There’s been high interest from numerous stakeholders, including several [tens of millions of dollars from DARPA](https://www.darpa.mil/program/explainable-artificial-intelligence), and we don’t have much that is solid to show for it. Maybe it’s too vague or trying to do too much? If we care about detecting deception or Trojan horse models (treacherous turns), why not just focus on the more tractable problem of detecting deception or Trojan horse models directly (possibly using AIs to help us detect such behavior and sift through the gobs of raw data)? Progress in ML is driven by metrics; progress must be measurable. This area doesn’t have metrics, while other areas such as detecting Trojans do. Interpretability is a “god of the gaps” field (“if we understood our models, then we could understand how to fix this problem.”). For many non-accuracy goals that we don’t know how to reach, many think interpretability will help us reach the goal. 
([The Mythos of Interpretability](https://arxiv.org/abs/1606.03490)) Often, systematic findings that explain a portion of a model’s behavior are “science of ML” papers ([example](https://arxiv.org/abs/1805.11604)) rather than interpretability papers. Transparency aims to reverse the longstanding trend that models are becoming more opaque; reversing or undoing the consequences of this robust trend is unlikely to happen. (Even in humans, better forecasting models are less interpretable and more complicated.) Humans are not interpretable. They are black boxes to themselves, and they confabulate explanations. ([Hinton gives an example](https://www.youtube.com/watch?v=ZSfs5ZnnSMY).) Expecting explanations for thought processes requires making many unconscious computations conscious, but only a small fraction of computation can be supported consciously. Many intuitive tasks are pre-verbal. If humans could in detail explain how they processed an image and classified it, then we could close the book on several research areas in cognitive science. Likewise, if humans could understand how they know things, then research experts could easily transmit their research abilities. We know they’re relatively unable to do this, or else committed students could always become stronger than advisors. (“Education is an admirable thing,” wrote Oscar Wilde, “but it is well to remember from time to time that nothing that is worth knowing can be taught.” Moreover, “we know more than we can tell.” - Michael Polanyi) It is difficult to interpret the eigenvectors from PCA of simple datasets such as MNIST or faces: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/5HtDzRAk7ePWsiL2L/kq5fqux4h0m4mxumdz6z) It’s not possible to easily convert the vectors above to English. How should we expect to be able to interpret even more complicated ML algorithms if it’s not possible to interpret PCA? (A short code sketch below reproduces this point.) Likewise, other basic ML methods are hard to interpret. For example, random forests are intellectually unmanageable. The later computations in a [basic MNIST ConvNet](https://www.cs.cmu.edu/~aharley/vis/conv/) are hardly interpretable. There are often too many possible explanations for a phenomenon for any single explanation to be useful. This is why the best test of a model is its predictive power, not whether it can only explain prior data. NYU professor Bob Rehder [wrote](https://featuredcontent.psychonomic.org/the-progress-of-understanding-explanations/): “Explanation tends to induce learners to search for general patterns, it may cause them to overlook exceptions, with the result that explanation may be detrimental in domains where exceptions are common.” When analyzing complex systems (such as deep networks), it is tempting to separate the system into events or components (“parts”), analyze those parts separately, and combine results or “divide and conquer.” This approach often wrongly assumes (Leveson 2020): * Separation does not distort the system’s properties * Each part operates independently * Each part acts the same when examined singly as when acting in the whole * Parts are not subject to feedback loops and nonlinear interactions * Interactions between parts can be examined pairwise Searching for mechanisms and reductionist analysis is too simplistic when dealing with complex systems (see [our third post](https://www.alignmentforum.org/posts/n767Q8HqbrteaPA25/complex-systems-for-ai-safety-pragmatic-ai-safety-3) for more). People hardly understand complex systems. 
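For readers who want to check the PCA point above themselves, here is a minimal sketch using scikit-learn's small 8x8 digits dataset; printing or plotting the leading components makes it clear that even these simple linear directions resist a crisp verbal description.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

# Fit PCA on the 8x8 digits dataset and inspect the leading components.
digits = load_digits()
pca = PCA(n_components=5).fit(digits.data)

for i, component in enumerate(pca.components_):
    # Each component is a 64-dimensional direction in pixel space; reshaping it
    # to 8x8 gives an "eigen-digit" with no obvious verbal interpretation.
    print(f"component {i}: explains {pca.explained_variance_ratio_[i]:.2%} of variance")
    print(component.reshape(8, 8).round(2))
```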
Grad students in ML don’t even understand various aspects of their field, how to make a difference in it, what trends are emerging, or even what’s going on outside their small area. How will we understand an intelligence that moves more quickly and has more breadth? The reach of a human mind has limits. Perhaps a person could understand a small aspect of an agent’s actions (or components), but it’d be committing the composition fallacy to suggest a group of people that individually understand a part of an agent could understand the whole agent. There are different approaches to understanding phenomena. One can look from the top down and find general rules that capture complex phenomena. Alternatively, one could try building understanding from the bottom up, but building a staircase from basic mechanisms all the way to complex system-level behavior does not have a good track record (e.g., consider mechanisms in evolution that leave many adaptations unexplained, mechanisms in anatomy have limited power in predicting whether a new drug will work, etc.). Most current transparency builds from the bottom up. Smarter people are not reliably understood by less smart people. If a less intelligent person could reliably predict what a more intelligent person would do, then they would be similarly intelligent. It’s unlikely we’ll be able to understand all aspects of a superintelligence (make it interpretable), as the human mind is limited and going to be less smart than a superintelligence. Even if we labeled each of a model’s many millions of neurons (example label of a neuron: “detects whiskers at 21 to 24 degrees, and it also detects black letter ‘A’ keyboard keys with probability 37%, and it detects a type of noise that’s difficult to describe”), it wouldn’t necessarily be interpretable (simulable, enable crisp post-hoc explanations, highly decomposable). To holistically understand models, we are better off understanding them functionally. Systemic Safety --------------- This section is about using AI for furthering longtermist or EA goals. The areas are also useful for [improving systemic contributing factors](https://www.alignmentforum.org/posts/n767Q8HqbrteaPA25/complex-systems-for-ai-safety-pragmatic-ai-safety-3#Improving_Contributing_Factors) that contribute to the reduction of AI x-risk. ### **ML for Cyberdefense** **Problem Description** This area is about using machine learning to improve defensive security, such as by improving malicious program detectors. This area focuses on research avenues that are clearly defensive and not easily repurposed into offensive techniques, such as detectors and not automated pentesters. **Motivation** It will matter very little if AI systems are aligned if they can be hacked by humans or other AI systems and made to be misaligned (intentionally or unintentionally). There may also be situations where aligned AI is hijacked by a malicious actor who intentionally or accidentally contributes to x-risk. In addition, one of the fastest and most potent ways for a superintelligence to project its intelligence and influence the world is through cyberattacks, not through physical means. Even if some of the components of ML systems are safe, they can become unsafe when traditional software vulnerabilities enable others to control their behavior. Moreover, traditional software vulnerabilities may lead to the proliferation of powerful advanced models, and this may be worse than proliferating nuclear weapons. 
Cyberattacks could take down national infrastructure including power grids, and large-scale, reliable, and automated cyberattacks could engender political turbulence and great power conflicts. Great power conflicts incentivize countries to search the darkest corners of technology to develop devastating weapons. This increases the probability of weaponized AI, power-seeking AI, and AI facilitating the development of other unprecedented weapons, all of which are x-risks. Using ML to improve defense systems by decreasing incentives for cyberwarfare makes these futures less likely. Other motivation formulations:

* As ML proliferates, it's possible that we will see a dramatic increase in the use of ML systems in cyberattacks. In order to protect against such attacks, we may need to use ML in cyberdefense. Work might be especially necessary if there turns out to be an asymmetry in attacks/defenses as there currently is with adversarial examples.
* [This post](https://forum.effectivealtruism.org/posts/WqQDCCLWbYfFRwubf/information-security-considerations-for-ai-and-the-long-term) also gives motivations for the importance of information security in AI safety.

**What Researchers Are Doing Now** There is some research on automatically detecting and patching software vulnerabilities at scale [[1](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8835342),[2](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8672949),[3](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9152790)]. Some work uses anomaly detection to detect malicious payloads [[1](https://www.usenix.org/system/files/conference/usenixsecurity15/sec15-paper-shin.pdf),[2](https://ieeexplore.ieee.org/abstract/document/5675808)]. ML has also been used for intrusion detection [[1](https://link.springer.com/chapter/10.1007/978-3-540-45248-5_10),[2](https://ieeexplore.ieee.org/document/5504793),[3](https://www.sciencedirect.com/science/article/abs/pii/S0020025518303475)]. **What Could Advanced Research Look Like?** AI-based security systems could be used for better intrusion detection, firewall design, malware detection, and so on. **Importance, Neglectedness, Tractability** Importance: •• It’s important that powerful systems do not fall into the hands of extreme or reckless actors, but they may be able to develop those systems themselves regardless. Neglectedness: ••• Security researchers are currently bottlenecked by compute power. Tractability: ••• This is an application of ML, and doesn’t necessarily require new fundamental research. **Relation to General Capabilities** Better upstream models could help make for better ML defenses. For example, better reasoning and the ability to understand longer sequences could make models better able to analyze code and large assembly files. **Capabilities Externalities Analysis** Much of the work in this space is engineering and not influencing upstream models. **Criticisms** This should only be funded or researched in an exploratory way before large commitments are made. Funding will need to strongly disincentivize gain-of-function attack capabilities in order for this area to be positive. This is just realpolitik, as it assumes that systems are much less likely to be safe if they fall into the wrong hands. In reality, systems are likely to be roughly equally safe (or unsafe) in any case, so devoting time to cybersecurity will not help. If nuclear weapons aren’t much of an x-risk, then WWIII is unlikely to be a large x-risk. 
It might be necessary to improve security for particular ML models to prevent them from falling into the wrong hands, but better security in general isn’t useful. ### **ML for Improving Epistemics** **Problem Description** This area is about using machine learning to improve the epistemics and decision-making of political leaders. This area is tentative; if it turns out to have difficult-to-avoid capabilities externalities, then it would be a less fruitful area for improving safety. **Motivation** We care about improving decision-making among political leaders to reduce the chance of rash or possibly catastrophic decisions. These decision-making systems could be used in high-stakes situations where decision-makers do not have much foresight, where passions are inflamed, and where decisions must be made extremely quickly, often on gut instinct. During these [moments of peril](https://www.alignmentforum.org/posts/dfRtxWcFDupfWpLQo/perform-tractable-research-while-avoiding-capabilities#Managing_Moments_of_Peril), humans are liable to make egregious errors. Historically, the closest we have come to a global catastrophe has been in these situations, including the Cuban Missile Crisis. Work on these technologies could reduce the prevalence of perilous situations. Separately, this reduces the risks from persuasive AI. Moreover, it helps leaders more prudently wield the immense power that future technology will provide. As Carl Sagan said, “If we continue to accumulate only power and not wisdom, we will surely destroy ourselves.” Other motivation formulations:

* Better forecasting can potentially help with instituting better regulations and calibrating AI strategy. In addition, it could reduce risks from hasty deployments predicated on other actors being farther along than they are.
* This reduces x-risks from hyper-persuasive AI and an erosion of epistemics.
* AI can help political leaders make better decisions, as was needed during emerging crises like COVID.

**What Researchers Are Doing Now** We are developing ML benchmarks for forecasting geopolitical events. **What Advanced Research Could Look Like** Systems could eventually become superhuman forecasters of geopolitical events. They could help brainstorm possible considerations that might be crucial to a leader’s decision. Finally, they could help identify inconsistencies in a leader’s thinking and help them reach a sounder judgment. **Importance, Neglectedness, Tractability** Importance: •• Better epistemics could be useful for the development and deployment of AI systems, but this would not solve any fundamental problems in AI safety on its own. Neglectedness: ••• Few care about superforecasting, let alone ML for forecasting. Tractability: •• This is an application of ML, but it is fairly outside the capabilities of current models. **Relation to General Capabilities** Forecasting ability and IQ are not strongly related. Exceptional forecasting skills seem to be hard to acquire even for smart people. Better retrieval methods could help improve forecasting capabilities. **Capabilities Externalities Analysis** Much of the work in this space is engineering and not influencing upstream systems. If work on this problem appears to play into general capabilities, it should be discouraged (this goes for any emerging safety research area). It is important to keep this line of research targeted to reduce the chances of speeding up other kinds of capabilities (for instance, truth/contemplation/reasoning/research). 
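Benchmarks like the forecasting benchmarks mentioned above are typically scored with proper scoring rules. As a rough illustration only (the concrete benchmark design is not specified here, and the forecasts and outcomes below are made up), a set of probabilistic forecasts over resolved yes/no questions could be scored with the Brier score:

```python
import numpy as np

# Hypothetical resolved yes/no questions: each forecast is the predicted
# probability of "yes"; each outcome records what actually happened.
forecasts = np.array([0.9, 0.2, 0.7, 0.5, 0.1])
outcomes = np.array([1, 0, 0, 1, 0])

# Brier score: mean squared difference between forecast probability and
# outcome (lower is better).
brier = np.mean((forecasts - outcomes) ** 2)

# A maximally uncertain baseline that always answers 50%.
baseline = np.mean((0.5 - outcomes) ** 2)  # always equals 0.25

print(f"forecaster Brier score: {brier:.3f}")
print(f"always-50% baseline:    {baseline:.3f}")
```

A forecaster who always answers 50% scores 0.25 regardless of how the questions resolve, which gives a simple floor that any useful forecasting model should beat.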
**Criticisms** “The essence of intelligence is prediction.” Therefore it may be harder to avoid capabilities externalities. Taleb argues in *Antifragile* that reliance on forecasting makes us more vulnerable to tail risks and gives us a false sense of security. Forecasting benchmarks need to span decades of historical data to measure the ability to predict tail risks. That will require a substantial engineering effort. ### **Cooperative AI** **Problem Description** In the future, AIs will interact with humans and other AIs. For these interactions to be successful, models will need to be more skilled at cooperating. This area is about reducing the prevalence and severity of cooperation failures. AI models and humans may be stuck in poor equilibria that are robustly difficult to escape; cooperative AI methods should improve the probability of escaping or avoiding poor equilibria. This problem also works towards making AI agents better at positive-sum games, of course subject to capabilities externalities constraints. As we describe this area, it does not include the typical directions in human-robot interaction, such as communication between humans and robots in standard tasks. **Motivation** First, worlds where multiple agents are aligned in different ways are highly plausible. There are strong incentives to have multiple decision-making agents; for example, [jury theorems](https://plato.stanford.edu/entries/jury-theorems/) show collections of agents make better decisions than a single agent, and agents have incentives to retain some control and not automatically cede control to one single centralized agent. In a world where we have AIs interacting with other agents, cooperative AI can be useful for not just having higher upside but also smaller downside. Cooperative AIs could help rein in misaligned agents or power-seeking AIs. For this protective measure to work, the power of the collective must be greater than the power of the power-seeking AI. Let’s consider how easily a power-seeking AI could overpower the world. Of course, if AIs are better able to cooperate, they are more likely to counteract power-seeking AIs. Tracking and regulating the flow and concentration of GPUs can reduce the probability of a single AI becoming more powerful than the collective power of the rest. Even if one power-seeking agent is smarter than every other model, it does not imply that it has control over all other models. Usually, having higher cognitive ability does not let an agent overpower the collective (the highest IQ person does not rule the world). However, individual bad actors that are smarter than others can have outsized effects. In some special environments, such as environments with [structured criticality](https://en.wikipedia.org/wiki/Structured_criticality), small differences could be magnified. Moreover, the world is becoming more long-tailed and more like “[extremistan](https://www.capitalideasonline.com/wordpress/mediocristan-vs-extremistan/?pdf=13750)” in which there is tyranny of the top few (this is in contrast to mediocristan, where there is tyranny of the collective). Consequently, while there are factors that can give smarter models outsized advantages, the smartest model does not automatically overpower the collective power of cooperative AIs. Other motivation formulations: * To make environments with multiagent AIs stable, cooperation may be necessary. 
* Cooperation can make us better able to escape bad equilibria, such as help us overcome bad systems for which we are dependent on our basic needs. Being robust to coordination failures helps us avoid lock-in. * Intrasystem goals can create misalignment; cooperation helps agents better jointly optimize a decomposed objective, thereby reducing misalignment. * If each superintelligence is fully aligned with one individual, the superintelligences and their owners can end up in game-theoretic tragedies * The theory of morality as cooperation (“all of human morality is an attempt to solve a cooperative problem”) implies that if we want to build ethical machines (machine ethics), then it can be fruitful to make them more cooperative and build in cooperative dispositions. * If we improve cooperativeness, we have a larger range of governance and deployment strategies (e.g., we could have AIs team up against other defecting AIs, this affects the strategic landscape and the possible defensive solutions), rather than trust a single contemplative agent. This is work towards making more sophisticated contractarian approaches to machine ethics more feasible (rather than only philosopher-king type strategies). * Without cooperation, we will have hierarchies where the strongest agents dominate, leading to “the state of nature” and conflict; to help avoid such a dire environment, we need cooperation. * Cooperation is a “defense in depth” area that does not decidedly fix safety problems, but it helps drive down the severity and probability of many hazards * Cooperation could help rein in power-seeking or colluding AIs; a group of AIs, in many cases, have enough power to rein in misaligned AIs. * Cooperation reduces the probability of conflict and makes the world less politically turbulent. Similarly, cooperation enables collective action to counteract rogue actors, regulate systems with misaligned goals, and rein in power-seeking or colluding AIs. Finally, cooperation reduces the probability of various forms of lock-in and helps us overcome and replace inadequate systems that we are dependent on for our basic needs. **What Advanced Research Could Look Like** Researchers could create agents that, in arbitrary real-world environments, exhibit cooperative dispositions (e.g., help strangers, reciprocate help, have intrinsic interest in others achieving their goals, etc.). Researchers could create coordination systems or AI agent reputation systems. Cooperating AIs should also be more effective at coordinating to rein in power-seeking AI agents. **Importance, Neglectedness, Tractability** Importance: ••• Within systemic safety, cooperative AI is the most targeted towards the reduction of AI x-risk. Neglectedness: ••• There are few researchers working in this area. Tractability: • It is currently especially difficult to perform meaningful research on multiagent sequential decision making. **Relation to General Capabilities** Agents able to plan on very long time horizons may have more incentives to cooperate, but even for humans cooperative tendencies and precedents were hard-earned and hard to enforce; powerful humans often prefer not to cooperate but rather dominate. **Capabilities Externalities Analysis** Some research strategies could make agents better at playing games, which would make them better at playing cooperative games. By pursuing less naive research strategies, such as the strategy of endowing models with intrinsic dispositions to cooperate, capabilities externalities should be easier to avoid. 
Research in this area should avoid developing cooperation methods that are antisymmetric with collusion and may therefore create collusive abilities. That is, research should avoid collusion externalities. Fortunately, some cooperative tendencies are not antisymmetric with collusion; being disposed to help strangers could be thought antisymmetric with the disposition to harm strangers, but the former helps cooperation and the latter does not help collusion. Cooperativeness is also useful for a larger, highly permeable group, while collusion is maintained among a specific group. Cooperation is often helped with transparency and truth, which is more robust than secrecy and lies, which collusion often depends on. Separately, working on honesty disincentivizes collusion. **Criticisms** Cooperation is too closely linked to collusion to be worth pursuing. Cooperation leads to higher connectivity, which leads to higher fragility (faster cascading failures) and more extreme tail events (long tail distributions get sharper with more connectivity; then you have a more volatile future). Higher connectivity undermines the power of the collective, as these environments are dominated by tail events and the most extreme agents. Possible additional areas ------------------------- The section above represents concrete problems that we believe can be pursued now without capability externalities and with varying levels of tractability. In this section we discuss some possible additional areas. One of these areas is not concrete enough yet, and the other areas have not formed into a coherent area yet. These limitations may someday be resolved. ### **Regulating Mesa-Optimizers and Intrasystem Goals** As systems make objectives easier to optimize and break them down into new goals, subsystems are created that optimize these new intrasystem goals. But a common failure mode is that “intrasystem goals come first.” These goals can steer actions instead of the primary objective. Thus a system’s explicitly written objective is not necessarily the objective that the system operationally pursues, and this can result in misalignment. Intrasystem goals occur when the goal of a training process (e.g., the loss function used for gradient descent, the exploration incentives of the sequential decision making agent, etc.) differs from the operational goal of the trained model it produces. This is known as [mesa-optimization](https://arxiv.org/abs/1906.01820). When multi-agent sequential decision-making is more feasible, we can give agents goals and delegate subgoals to agents. Since breaking down goals can distort them, this creates “intrasystem goals” and misalignment. Regulating these subagents that are optimizing their subgoal will be a research challenge. However, capabilities will need to be advanced further before this research area will be tractable. An alternative way to study mesa optimizers is to study the general inductive biases of optimizers. While this could potentially be informative for understanding mesa-optimization, the neglectedness and tractability are low: this was previously the hottest area of theoretical ML for some years in the late 2010s, and see here for a [discussion of tractability](https://machinethoughts.wordpress.com/2017/12/08/the-role-of-theory-in-deep-learning/). A related area that is more neglected and tractable is certified behavior. 
It is possible to have guarantees about model behavior given their weights [[1](https://arxiv.org/abs/1801.09344),[2](https://arxiv.org/abs/1902.02918)], so it is not necessarily true that “all bets are off” when models are deployed. ### **Proxy Gaming** This is not clearly an area in its own right. Right now it looks like adversarial robustness, anomaly detection, and detecting emergent functionality, applied to sequential decision making problems. Perhaps in the future a distinct problem area will emerge. ### **Irreversibility** To avoid lock-in, some want to train models to pursue easy-to-reverse states. One way is to increase optionality; however, current methods to do this might simultaneously increase power-seeking behavior. Perhaps in the future avoiding irreversibility generally and preventing lock-in can be separated from power-seeking. Right now it seems there are other more targeted approaches to avoiding lock-in, such as moral parliaments, philosophy research bot/value clarification, and cooperative AI. Conclusion ---------- It is sometimes argued that the AI safety field has few specific problems that can be tractably pursued without creating capabilities externalities. However, we have shown that there are some specific research directions that can be pursued while avoiding capabilities externalities. Many of the research directions listed in this document can be tractably pursued by the broader ML research community, making them suitable for broader outreach beyond the small group of researchers solely motivated by existential safety. Though we believe some of these areas are more promising than others, we specifically do not argue for the overriding importance of a single one for reasons of [diversification](https://www.alignmentforum.org/posts/n767Q8HqbrteaPA25/complex-systems-for-ai-safety-pragmatic-ai-safety-3#Diversification). We also do not claim that the areas above are the only areas worth pursuing, and like all research avenues, they may need to be curtailed in the future if they prove intractable or produce unacceptable externalities. We believe that the areas above are promising enough to be included in an overall scalable portfolio. 1. **[^](#fnrefcwbe8cxgif)**Note that detecting lies would best fit under Monitoring rather than Alignment, but for simplicity we consolidate these approaches here.
e63d19ed-3528-4aaf-b25d-1c13342ae71a
trentmkelly/LessWrong-43k
LessWrong
Happiness and Productivity. Living Alone. Living with Friends. Living with Family. What I want to get people to discuss here is obvious given the title. What has been their experience regarding who and especially how many people they live with, and how that impacted their motivation and happiness.  I don't want to peruse papers on happiness and productivity, because I'm particularly interested in anecdotal tales coming from a Lesswrong sample.  Three pieces of information seem relevant, so if that is okay with whoever comments, I'd ask people to tell us if they consider themselves introverts (recharge batteries by being alone), extroverts, or both. As well as their age and hometown.  The reason I want to have a fuller understanding of this is that I've slowly come to have a strong belief that the main problem with people I know who are suffering, or failing to achieve their goals, is living with fewer tribal affiliates than they "need". And that belief could very well be false or biased.  I'm equally interested in what people think in general about their friends' living situation: "Most of my friends who live with friends experience such and such emotion, but the ones who live with family experience such and such problems with motivation" as I am in personal experiences.   Following a suggestion about creating topics like this before, I'll put my own case in the comments. 
9fccc5ff-c81a-4ba3-9e4d-59e4fba22718
trentmkelly/LessWrong-43k
LessWrong
2024 Unofficial LessWrong Survey Results  Thanks to everyone who took the Unofficial 2024 LessWrong Survey. For the results, check out the data below.  The Data 0. Population There were two hundred and seventy nine respondents over thirty three days. Previous surveys have been run over the last decade and a half. 2009: 166 2011: 1090 2012: 1195 2013: 1636 2014: 1503 2016: 3083 2017: "About 300" 2020: 61 2022: 186 2023: 558 2024: 279 That’s an annoying drop. I put a bit less oomph into spreading the good word of the census this year largely due to a packed December, but I’d hoped to get above a thousand anyway. Hope is a strategy, but it’s not a very good strategy. Out of curiosity, I checked the Slate Star Codex/ Astral Codex Ten survey numbers. 2014: 649 2017: 5500 2018: 8077 2019: 8171 2020: 8043 2022: 7341 2024: 5982 That comparison makes me think about the ways in which LessWrong and Astral Codex Ten are different. Everyone who subscribes to ACX gets the ACX survey email, and everyone who subscribes to ACX is at least aware of who Scott Alexander is. LessWrong is a bit more spread out. I tune out most of the AI posts for instance, and I’m sure there’s someone around who tunes out everything but the AI posts. Those two people see a website with rather different content. Enough musing, let's see some numbers. Previous Surveys Have you taken previous incarnations of the Less Wrong Census/Survey? Yes: 140, 51.7% No: 119, 43.9% Prefer not to answer: 12, 4.4% A note on formatting: For questions like this where there’s obvious categories, I show the question in bold, the categories, then the total number of people who picked that category, then the percent. For instance, 140 people said yes they had taken previous LW surveys, which is about 51.7% of everyone who answered. I habitually rounded to the nearest tenth of a percent, and often binned any answer with less than three other people in it to Other. I. Demographics Age: 32 + 10.1 (25, 31, 37) [n=274] This is the other common format for th
2cc74db2-70f0-490d-8d4c-feb831e6d060
trentmkelly/LessWrong-43k
LessWrong
[Link] Bet Your Friends to Be More Right This article does a good job of explaining how betting can be a useful rationality practice. An excerpt: > The interesting thing about this practice was that it made us both think very carefully about the accuracy of all of our statements. The most embarrassing thing ever was to say, "I bet you anything that I'll be on time..." and then be unwilling to back up the assertion with a bet. Failing to bet was an admission that you'd just said something that you had no real confidence in.
9a8c1c8f-4f66-4f8b-9dd4-a92b68926fa7
trentmkelly/LessWrong-43k
LessWrong
Meetup : San Antonio Meetup Discussion article for the meetup : San Antonio Meetup WHEN: 10 April 2016 02:00:00PM (-0500) WHERE: 12651 Vance Jackson Rd #118, San Antonio, TX 78230 Bubble tea, frozen yogurt, and discussion. All are welcome. New Meetup to discuss rationality and all things LessWrong and meet the local community. Look for the sign that says Less Wrong. Discussion article for the meetup : San Antonio Meetup
cbb1f3df-9c3a-4711-9344-f43c85748144
trentmkelly/LessWrong-43k
LessWrong
In support of Yak Shaving Original post:  http://bearlamp.com.au/in-support-of-yak-shaving/ Part 2:  http://bearlamp.com.au/yak-shaving-2/ ---------------------------------------- Yak shaving is heralded as pretty much "the devil" of trying to get things done.  The anti-yak shaving movement will identify this problem as being one of focus.  The moral of the story they give is "don't yak shave". Originally posted in MIT's media lab with the description: > Any seemingly pointless activity which is actually necessary to solve a problem which solves a problem which, several levels of recursion later, solves the real problem you're working on. But I prefer the story by Seth Godin: > "I want to wax the car today." > > "Oops, the hose is still broken from the winter. I'll need to buy a new one at Home Depot." > > "But Home Depot is on the other side of the Tappan Zee bridge and getting there without my EZPass is miserable because of the tolls." > > "But, wait! I could borrow my neighbor's EZPass..." > > "Bob won't lend me his EZPass until I return the mooshi pillow my son borrowed, though." > > "And we haven't returned it because some of the stuffing fell out and we need to get some yak hair to restuff it." > > And the next thing you know, you're at the zoo, shaving a yak, all so you can wax your car. I disagree with the conclusion to not yak shave, and here's why. ---------------------------------------- The problem here is that you didn't wax the car because you spent all day shaving yaks (see also "there's a hole in my bucket").  In a startup that translates to not doing the tasks that get customers - the tasks which get money and actually make an impact, say "playing with the UI".  It's easy to see why such anti-yak shaving sentiment would exist (see also: bikeshedding, rearranging deck chairs on the titanic, hamming questions).  You can spend a whole day doing a whole lot of nothings; getting to bed and wonder what you actually accomplished that day (hint: a whole lot of runnin
0f4888a0-3980-40d3-a7d1-15cab86878ae
trentmkelly/LessWrong-43k
LessWrong
Parsimony as a side dish - a game to play on meetups? Tl;dr: we seem to (naively?) compound parsimony with other heuristics. There was a quote from R. Burton, provided by JQuinton, offering an amusing exercise. The reader had to guess if an excerpt of text was nonsense or merely a rambling presentation of something, and what that could be. Upon learning a single word of explanation, it became difficult to read it as anything than the object's description, even if the reader was told there were other possible answers (beyond 'ridiculous.') Here's an example: > Reconstruction probably won't even account for all that escaped the initial search. Of course it doesn't have as much lead as the stained variety. Please don't follow 'old traditions' again, it's a cultural thing anyway. The little ones are easily lost in water. The dog won't know how to find them, but the problem is rather that it won't know how to avoid them. The high singing note I will miss. Be careful not to step on it. It was to harmony what it is to ruin. A different brand is used in electron microscopy, to make 'em smooth and even. There's plenty under the bench in the park. I had cherished it since my wedding. (I'm not a writer, so that was rather clumsy. Please post better examples in comments.) But what if we are offered to brainstorm before answering, and to try viewing the excerpt as a collection of true facts, just not necessarily a coherent story? Here are some of our possible approaches (heuristics): - it's a picture; - it's an instruction; -it's paraphyletic (e.g., it is about a single thing which has more than one cause); - it's alive! - it's dangerous!  - it's an advertisement; - it's a single thing better described by more than one word, though still recognizable from the best match; - it's a compilation of distant phenomena related to whatsitname, the everyday thing; - it's all true, BUT there are qualifiers (which could make it more plausible-sounding); - it's just not a physical body; - it's an extreme case etc. ...and then
529330b3-4932-4410-b46a-71f09719ebee
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
What are current smaller problems related to top EA cause areas (eg deepfake policies for AI risk, ongoing covid variants for bio risk) and would it be beneficial for these small and not-catastrophic challenges to get more EA resources, as a way of developing capacity to prevent the catastrophic versions? How might EA be under-investing in "on the ground" type work that relates to cause areas? If there's a spectrum between research/abstract and small/on the ground, what are some areas where EA cause areas could be helped with more on the ground efforts around current challenges related to cause areas. Some examples: * Biosecurity + The on the ground version might involve: Work with local governments and local businesses to help set them up with thermal scanning for detecting fevers, or ways to detect and communicate food safety events from restaurants, or ways to detect and communicate local covid outbreaks It's kind of like you could work on AI safety in the abstract, or you could work on AI safety in the current challenges related to AI. You could work on biosecurity in the abstract, or you could work on the current challenges related to ongoing covid spread. Both seem important. How might we be under-investing in the latter types of activities? They perhaps seem less high leverage, though I'd argue they massively increase the effectiveness of all the other EA type work on the problem. There are some things that seem basic and less important but that make a big difference in real world effectiveness, in particular hands on experience with solving similar challenges in the real world. It's kind of like translational research. Taking EA areas and working on the most current instantiations of those challenges, as a way of seeing what skills are actually needed, developing expertise, and so on. If one version of EA has organizations that have successfully reduced flu spread in areas, and one version has just done more abstract high level large scale government work, the one that has the former too would be much more effective in addressing the next pandemic. My fear is that EA groups like high leverage things and dislike low leverage things, and on the ground type basic work often appears low leverage low status and boring, though it seems that without a pipeline for turning more high level high leverage EA work into on the ground results, EA's effectiveness will be hampered. Boring not-yet-catastrophic problems related to top EA cause areas that working on would likely make EA more effective at preventing the catastrophic versions * AI risk: deepfakes, ai astroturfing, ai powered social engineering, addictions to ai powered 'feeds', body image / ai filters and photo modifications * Bio risk: flu, STIs, covid variants, food poisoning outbreaks My sense is that the things in the bullet list above get much less EA resources than more high level and less 'on the ground' type things. Is that right? What are the pros and cons for having much more EA effort allocated to some of this 'boring' translational type on the ground applications? What are the current challenges related to top EA cause areas like bio risk and AI risk?
84706c80-1f4a-49b2-ade2-93adfe887013
trentmkelly/LessWrong-43k
LessWrong
O2 Curve Review: Well Worth the Money [Cross-posted from Grand, Unified, Empty. This is not an incentivized review; nobody at O2 even knows I’m writing this and none of the links have referral tracking. This is not medical advice, I am not a doctor, etc.] Like many of you, I’ve been wearing a reusable cloth mask for just under a year now to protect myself, and others, from COVID-19. Cloth masks work well, and remain far better than nothing if they’re what you have available. However, with several more-infectious COVID variants now circulating and no end in sight due to vaccine delays, I decided I could probably be doing better. Enter the O2 Curve. I had originally intended to purchase an N95-certified mask, but I quickly found that Amazon search results for “N95 mask” are effectively useless – too many fraudsters have realized that adding the word “N95” to things which are clearly not certified (and sometimes aren’t even masks) is a great way to get more hits. After poking around the internet, I stumbled upon the website of O2 Canada which promised an effective (though technically not N95-certified – more on that later) respirator mask, backed by publicly available filtration and testing data. I decided to give it a shot. I’ve now worn my O2 Curve several times, including for a nearly one hour trip to my local grocery store, and the verdict is in. I love it, and I’m not going back to cloth. Masks and Respirators and N95, Oh My The first thing to mention, since it bears clarification, is the difference between a regular face mask and a respirator, and what N95 certification means in all of this. A face mask (cloth, surgical-grade, or other material) is just something that sits in front of your nose and mouth, usually held in place by looping around your ears. Face masks are effective at filtering the air which passes through them, but they do not form any kind of seal. This means that air (and virus particles) can escape or enter around the edges of the mask. Due to the lack of seal, and the way air
627b4fb6-4d24-4086-9e85-2dc87f631761
StampyAI/alignment-research-dataset/lesswrong
LessWrong
On Interpretability's Robustness

*Léo Dana - Produced as part of the SERI ML Alignment Theory Scholars Program - Summer 2023 Cohort, as well as an internship at FAR AI, both under the mentorship of Claudia Shi.*

**Would you trust the** [**IOI**](https://arxiv.org/abs/2211.00593) **circuit?**
--------------------------------------------------------------------------------

Interpretability is envisioned by many as the main source of alignment tools for future AI systems, yet I claim that interpretability’s Theory of Change has a central problem: we don’t trust interpretability tools. Why is that?

* **No proof of generalization:** for interpretability, we have the same problem as for the models we study: we don’t know if our tools will generalize (and we will likely never have proofs of generalization).
* **Security mindset:** there is always something that could go wrong. If we don’t have proof, we rely on several assumptions that we can test, but nothing tells us that there is not a criterion we forgot to test or a critical environment in which the algorithm fails.
* **Distrust is fair:** We already have examples of interpretability tools that were, in fact, not robust.
  + Saliency maps, a tool to visually observe which pixels or features of an image are important to the prediction, [failed to generalize](https://arxiv.org/abs/1711.00867) to simple transformations that didn’t affect the CNNs.
  + [*ROME*](https://rome.baulab.info/) and [*MEMIT*](https://arxiv.org/abs/2210.07229) use causal tracing to find and change some information’s location in LLMs. However, causal tracing was shown to [not find where the information is](https://arxiv.org/abs/2301.04213), and the editing method failed to generalize in some precise [unseen contexts](https://specificityplus.apartresearch.com/).
  + Recently, it was found that naive optimization for interpretable directions in an LLM could result in some [Interpretability Illusions](https://www.lesswrong.com/posts/RFtkRXHebkwxygDe2/an-interpretability-illusion-for-activation-patching-of).

Although some people don’t trust interpretability, others seem to me to take articles for granted even when the authors admit the limitations in the robustness of their methods:

* Chris Olah et al, in [*The Building Blocks of Interpretability*](https://distill.pub/2018/building-blocks/): “With regards to attribution, recent work suggests that many of our current techniques are unreliable.”
* Neel Nanda about [*Othello-GPT*](https://www.neelnanda.io/mechanistic-interpretability/othello#my-findings): “How can we find the true probe directions, in a robust and principled way?”

Note that I’m not saying such findings are not robust, but their robustness was not completely evaluated.

I read [*Against Interpretability*](https://www.lesswrong.com/posts/LNA8mubrByG7SFacm/against-almost-every-theory-of-impact-of-interpretability-1) and think that most of its criticisms hold because we cannot yet trust our interpretability tools. If we had robust tools, interpretability would be at least instrumentally useful to understand models. Moreover, robustness is not unachievable! It is a gradient of how strongly the arguments support your hypothesis. 
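To make the robustness worry concrete, here is a minimal sketch of one check a reader could run on a saliency method. It is not the protocol of the papers cited above; it is a simple model-randomization test (if a saliency map barely changes when the network’s weights are randomized, the map is not telling us much about what the trained network computes). The tiny CNN and the random input are placeholders for whatever model and data are actually under study:

```python
import torch
import torch.nn as nn
from scipy.stats import spearmanr

# Placeholder model and input: stand-ins for the network and image under study.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(8 * 4 * 4, 10),
)
x = torch.randn(1, 1, 28, 28)

def gradient_saliency(model, x, target_class):
    """Vanilla gradient saliency: |d logit_target / d input|, flattened."""
    x = x.clone().requires_grad_(True)
    model(x)[0, target_class].backward()
    return x.grad.abs().flatten()

saliency_original = gradient_saliency(model, x, target_class=3)

# Randomize the weights and recompute the "explanation".
with torch.no_grad():
    for p in model.parameters():
        p.normal_(std=0.1)
saliency_randomized = gradient_saliency(model, x, target_class=3)

# A high rank correlation would mean the saliency map is largely insensitive
# to the model's weights -- a red flag for the method's robustness.
rho, _ = spearmanr(saliency_original.numpy(), saliency_randomized.numpy())
print(f"Spearman correlation (original vs. randomized weights): {rho:.2f}")
```

The same pattern, perturbing something the explanation should (or should not) depend on and measuring how much the explanation moves, is a cheap way to gather evidence about whether an interpretability result is robust.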
I think Anthropic’s papers [*Towards Monosemanticity*](https://www.anthropic.com/index/towards-monosemanticity-decomposing-language-models-with-dictionary-learning) and [*In-context Learning*](https://www.anthropic.com/index/in-context-learning-and-induction-heads) provide good examples of thorough research that clearly states its hypotheses and claims, and gives several independent pieces of evidence for its results. For those reasons, **I think we should investigate what a robust circuit is, and evaluate how robust the circuits we have found are**. Especially while trying to find newer or more complex circuits, not after.

Interpretability Tech Tree
--------------------------

Designing an interpretability tool that satisfies all the important desiderata (robustness, scaling to larger models, …) on the first try is very optimistic. Instead, one can use a Tech Tree to know which desideratum to tackle next. In [Transparency Tech Tree](https://www.lesswrong.com/posts/nbq2bWLcYmSGup9aF/a-transparency-and-interpretability-tech-tree), Hubinger details a path to follow to find a tool for transparency. A Tech Tree approximately says: “Once you have a proof of concept, independently search two improvements, and then combine them”, but also adds which improvement should be pursued, and in what order. Not only does this enable parallelization of research, but each improvement is easier since it is based on simpler models or data. In Hubinger’s case, his tree starts with *Best-case inspection transparency*, which can be improved into *Worst-case inspection transparency*, or *Best-case training process transparency*, and finally combined with *Worst-case training process transparency*.

Example for robustness: here is a 2-D table with axes *complexity* and *robustness*. New tools start as *Proof of concept* and the goal is to transform them into a *Final product* that one could use on a SOTA model. The easier path is to do *1 -> 2* and *1 -> 3* in parallel, followed by *2 + 3 -> 4*, combining tools rather than finding new ones.

| | non-robust | robust |
| --- | --- | --- |
| complex circuit | 2. Proof of complexity | 4. Final product |
| easy circuit | 1. Proof of concept | 3. Proof of robustness |

A Methodology for Interpretability
----------------------------------

In general, interpretability lacks a **methodology**: a set of tools, principles, or methods that we can use to increase our trust in the interpretations we build. The goal is not to set certain ways to do research in stone, but to make explicit that we need to trust our research, and highlight bottlenecks for the community. The recent [*Towards Best Practices of Activation Patching*](https://arxiv.org/pdf/2309.16042.pdf) is a great example of standards for activation patching![[1]](#fnem2tt08s16) For example, they analyze whether you should use logits or probabilities (normalized logits) to find robust results, and how this choice may impact your findings.

Creating such a methodology is not just about technical research: we also need to be clear on **what is an interpretation**, and **what counts as proof**. This part is much harder[[2]](#fnv9ew4tpnrrf) since it involves operationalizing “philosophical” questions. Fortunately, researchers at Stanford are trying to answer those questions with the [*Causal Abstraction*](https://arxiv.org/abs/2106.02997) framework. It tries to formalize the hypothesis that “a network implements an algorithm”, and how to test it. 
So given our network, and an explicit implementation of the algorithm we think it implements, there exists a method to link their parameters and test how close our algorithm is to the network.   Personal research ----------------- At SERI MATS, my research[[3]](#fn8uyg4q6vf1c) goal was to test the robustness of [activation steering](https://www.lesswrong.com/posts/5spBue2z2tw4JuDCx/steering-gpt-2-xl-by-adding-an-activation-vector). The idea was to test the tradeoffs directions had to face on being good at: * Concept classification: from activations in the model, can the direction separate effectively those originating from male vs female? * Concept steering: given a gendered sentence, can you use the direction to make the model use the other gender? * Concept erasure: given a gendered sentence, can you use the direction to make the model confused about which gender to use in the rest of the text? However, I chose to apply it to gender which turned out to be too complex/not the right paradigm. Using a dataset with the concept of utility, [Annah](https://www.lesswrong.com/users/annah) and [shash42](https://www.lesswrong.com/users/shash42) were able to [test these tradeoffs](https://www.lesswrong.com/posts/JCgs7jGEvritqFLfR/evaluating-hidden-directions-on-the-utility-dataset) and found interesting behavior: it is harder to remove a concept than to steer a model, and the choice of the direction needs to be precise to make the removal happen. I really liked their work and I think it is important to understand directions in LLMs!   Conclusion ---------- I hope to have convinced you of the importance of Interpretability’s Robustness as a research strategy. And especially that the most efficient way to create such a methodology is not by serendipitously stumbling upon it. Thanks for reading. I’ll be happy to answer comments elaborating on and against this idea. I will likely, by default, pursue this line of research in the future.   1. **[^](#fnrefem2tt08s16)**It would have helped me a lot during my internship! 2. **[^](#fnrefv9ew4tpnrrf)**The work that has already been done in Cognitive-Sciences and Philosophy might be useful in order to create this methodology. 3. **[^](#fnref8uyg4q6vf1c)**It is not available at the moment, I will plug a link when done with it.
046da3d0-a7ce-45e1-8ff9-2e792db5370e
trentmkelly/LessWrong-43k
LessWrong
Experimental Fat Loss With the end of the world nigh, and a public panic about to start, this seems an ideal time to worry about weight loss and the obesity epidemic.  Coincidentally, for the first time in my life, I'm getting fat. SlimeMoldTimeMold's 'Chemical Hunger' series  https://slimemoldtimemold.com/2021/07/07/a-chemical-hunger-part-i-mysteries/ seemed to draw a lot of interest round these parts, and even if it's not lithium https://www.lesswrong.com/posts/7iAABhWpcGeP5e6SB/it-s-probably-not-lithium it does seem to me that the molds raise some most interesting questions. I find the whole 'seed oil' craziness to be a compellingly interesting argument, although, as Scott Alexander wrote: https://slatestarcodex.com/2020/03/10/for-then-against-high-saturated-fat-diets/ it does seem to be flat wrong. But I think it's important to be interested in ideas that look like they have to be right but aren't.   I want to draw everyone's attention to the 'Experimental Fat Loss' substack https://exfatloss.substack.com Which seems to me the very model of sanity and empiricism, a little like reading the early Proceedings of the Royal Society, were Robert Hooke to have become interested in losing weight. In particular his definition of what it would mean for a diet to 'work' https://exfatloss.substack.com/p/the-definition-of-diet-success He does seem to have found something that works for him,  https://exfatloss.substack.com/p/losing-43lbs-in-144-days-on-ex150-diet and I find him sufficiently trustworthy-seeming that I'm going to see if it will work for me, and if it doesn't, use his methods to play around and see if I can find something that does.  But I would welcome the comments of wiser and more sceptical persons on all these things.
2cd22314-3ed7-4a4a-89b5-1c50b69fcca0
trentmkelly/LessWrong-43k
LessWrong
What fraction of breakthrough COVID cases are attributable to low antibody count? To what extent are COVID cases in vaccinated people a result of low antibody count, vs other factors (e.g. initial viral load)? Where does most of the variance come from? Motivation for this question: in a world where most breakthrough COVID cases hit people with low antibody count, one could get some kind of antibody test (probably of a particular type) and then either (a) get an extra vaccine if antibodies are low, or (b) just don't worry if antibody counts are high. That makes antibody tests (of whatever the particular type is) very high value, since we can behave very differently in those two cases. In a world where most of the variance comes from other factors (like initial viral load), results of an antibody test don't provide so much value. Related: Is antibody testing to assess effects of vaccines on you a good idea?
817e7486-9897-45e0-b036-ced4244472d6
trentmkelly/LessWrong-43k
LessWrong
Fully Live Electronic Contra Contra dance traditionally has live music, which I think is key to why the music and dancing have been able to stay tightly coupled as they've both changed over the decades. Over the past ~14y, however, there have been a few different approaches to pulling elements of electronic music into contra dance:

* 2008: fully pre-recorded music. Took off as "techno contra" (despite not generally using Techno) after Forrest organized one and made a well-produced video at YDW. Later DJ Improper and others would prepare sets but remix them live.
* 2009: Ed Howe and John Cote brought live looping to contra dance as Perpetual e-Motion, building a complex texture out of many layers of fiddle and a range of effects. Super popular at their peak in ~2011.
* 2011: Julie Vallimont and Brendan Carey-Block formed Double Apex, with Brendan playing live fiddle over Julie's loops, beats, and samples. Also very popular. Julie brought this approach to a series of duos over the years, with Jon Cannon (Delta Wave), Ed Howe (Cosmic Echo), Andy Reiner (Firecloud), and Noah VanNorstrand (Buddy System). (more)

For years I've wanted to add a "fully live" type here, where all sounds are initiated in the moment by the musicians. I now have something in this category I'm reasonably happy with: Even though there's a lot going on, it's all live. Cecilia is playing fiddle, bringing in an octaver effect at 0:16. I'm playing piano with my hands and drums with my feet (note the drum pattern changes at 0:16 and 0:47). I'm also playing bass (starting at 0:16), where the beat of the drum initiates the notes and the piano left hand chooses which note to play. And then I'm using a breath controller (measuring how hard I'm blowing) to pulse a supersaw that, as with the bass, follows my left hand. There are still places where I don't quite have the sound I'm imagining, where listening back makes me grimace a little, or where I can't yet do as many things at once as would sound best, but I'm excite
fda5097d-354d-4059-b9e0-367ba5ff3cbc
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] A Rational Argument Today's post, A Rational Argument was originally published on 02 October 2007. A summary (taken from the LW wiki):   > You can't produce a rational argument for something that isn't rational. First select the rational choice. Then the rational argument is just a list of the same evidence that convinced you. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Recommended Rationalist Reading, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
c894102e-0a56-40bf-a210-1936ff7abbbb
StampyAI/alignment-research-dataset/blogs
Blogs
2012 Winter Matching Challenge! Thanks to the generosity of several major donors,† every donation to the Machine Intelligence Research Institute made now until January 5th, 2013 will be matched dollar-for-dollar, up to a total of $115,000! **[Donate Now!](https://intelligence.org/donate/)** $0 $28.75K $57.5K $86.25K $115K Now is your chance to **double your impact** while helping us raise up to $230,000 to help fund [our research program](https://intelligence.org/research/). [![](http://miri.wpengine.com/wp-content/uploads/2012/06/towardapositivesingularity.jpg "towardapositivesingularity")](https://intelligence.org/donate/) (If you’re unfamiliar with our mission, please see our [press kit](http://miri.wpengine.com/wp-content/uploads/2012/11/SI_PressKit.pdf) and read our short research summary: [Reducing Long-Term Catastrophic Risks from Artificial Intelligence](http://intelligence.org/summary/).) Now that Singularity University has [acquired](http://singularityu.org/singularity-university-acquires-the-singularity-summit/) the [Singularity Summit](http://singularitysummit.com/), and SI’s interests in rationality training are being developed by the now-separate [CFAR](http://appliedrationality.org/), **the Machine Intelligence Research Institute is making a profound transition**. (Note that most of the money from the acquisition is being placed in a separate fund for Friendly AI researchers, and therefore does not support our daily operations or other programs.) For 12 years we’ve largely focused on movement-building — through the Singularity Summit, [Less Wrong](http://lesswrong.com/), and other programs. This work was needed to build up a community of support for our mission and a pool of potential researchers for our unique interdisciplinary work. Now, the time has come to say “Mission Accomplished.” Or at least, “Mission Accomplished Well Enough to Pivot to Research.” Our community of supporters is now large enough that many qualified researchers are available for us to hire, if we can afford to hire them. Having published [30+ research papers](http://intelligence.org/research/) and [dozens more](http://lesswrong.com/lw/f6o/original_research_on_less_wrong/) original research articles on Less Wrong, we certainly haven’t neglected research. But **in 2013 we plan to pivot so that a much larger share of the funds we raise is spent on research**. ### Accomplishments in 2012 * Held a one-week research workshop on one of the open problems in Friendly AI research, and got progress that participants estimate would be the equivalent of 1-3 papers if published. (Details forthcoming. The workshop participants were Eliezer Yudkowsky, Paul Christiano, Marcello Herreshoff, and Mihály Bárász.) * Produced our annual [Singularity Summit](http://www.singularitysummit.com/) in San Francisco. Speakers included Ray Kurzweil, Steven Pinker, Daniel Kahneman, Temple Grandin, Peter Norvig, and many others. * Launched the new [Center for Applied Rationality](http://appliedrationality.org/), which ran 5 workshops in 2012, including [Rationality for Entrepreneurs](http://appliedrationality.org/entrepreneurs/) and [SPARC](http://appliedrationality.org/sparc2012/) (for young math geniuses), and also published one (early-version) smartphone app, [The Credence Game](http://www.acritch.com/credence-game/). * Launched the redesigned, updated, and reorganized [Singularity.org](http://intelligence.org/blog/2012/06/18/welcome-to-the-new-singularity-org/) website. 
* [Achieved most of the goals](http://lesswrong.com/r/discussion/lw/dm9/revisiting_sis_2011_strategic_plan_how_are_we/#summary) from our [August 2011 strategic plan](http://miri.wpengine.com/wp-content/uploads/2012/06/strategicplan20112.pdf). * 11 new [research publications](http://intelligence.org/research/). * Eliezer published the first 12 posts in his sequence [Highly Advanced Epistemology 101 for Beginners](http://wiki.lesswrong.com/wiki/Highly_Advanced_Epistemology_101_for_Beginners), the precursor to his forthcoming sequence, *Open Problems in Friendly AI*. * SI staff members published many other substantive articles on Less Wrong, including [How to Purchase AI Risk Reduction](http://lesswrong.com/r/discussion/lw/cs6/how_to_purchase_ai_risk_reduction/), [How to Run a Successful Less Wrong Meetup](http://lesswrong.com/lw/crs/how_to_run_a_successful_less_wrong_meetup/), a [Solomonoff Induction tutorial](http://lesswrong.com/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/), [The Human’s Hidden Utility Function (Maybe)](http://lesswrong.com/lw/9jh/the_humans_hidden_utility_function_maybe/), [How can I reduce existential risk from AI?](http://lesswrong.com/lw/ffh/how_can_i_reduce_existential_risk_from_ai/), [AI Risk and Opportunity: A Strategic Analysis](http://lesswrong.com/r/discussion/lw/ajm/ai_risk_and_opportunity_a_strategic_analysis/), and [Checklist of Rationality Habits](http://lesswrong.com/lw/fc3/checklist_of_rationality_habits/). * Launched our new volunteers platform, [SingularityVolunteers.org](http://singularityvolunteers.org/). * Hired two new researchers, Kaj Sotala and Alex Altair. * Published our [press kit](http://miri.wpengine.com/wp-content/uploads/2012/11/SI_PressKit.pdf) to make journalists’ lives easier. * And of course *much* more. ### Future Plans You Can Help Support In the coming months, we plan to do the following: * As part of Singularity University’s acquisition of the Singularity Summit, we will be changing our name and launching a new website. * Eliezer will publish his sequence *Open Problems in Friendly AI*. * We will publish nicely-edited ebooks (Kindle, iBooks, and PDF) for many of our core materials, to make them more accessible: *[The Sequences, 2006-2009](http://wiki.lesswrong.com/wiki/Sequences)*, *[Facing the Singularity](http://facingthesingularity.com/)*, and *[The Hanson-Yudkowsky AI Foom Debate](http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate)*. * We will publish several more research papers, including “Responses to Catastrophic AGI Risk: A Survey” and a short, technical introduction to [timeless decision theory](http://wiki.lesswrong.com/wiki/Timeless_decision_theory). * We will set up the infrastructure required to host a productive Friendly AI team and try hard to recruit enough top-level math talent to launch it. (Other projects are still being surveyed for likely cost and strategic impact.) We appreciate your support for our high-impact work! Donate now, and seize a better than usual chance to move our work forward. Credit card transactions are securely processed using either PayPal or Google Checkout. If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org. † $115,000 of total matching funds has been provided by Edwin Evans, Mihály Bárász, Rob Zahra, Alexei Andreev, Jeff Bone, Michael Blume, Guy Srinivasan, and Kevin Fischer. 
The post [2012 Winter Matching Challenge!](https://intelligence.org/2012/12/06/2012-winter-matching-challenge/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
195423fe-040e-4b86-b743-641d33173cd5
StampyAI/alignment-research-dataset/distill
Distill Scientific Journal
A Visual Exploration of Gaussian Processes

Even if you have spent some time reading about machine learning, chances are that you have never heard of Gaussian processes. And if you have, rehearsing the basics is always a good way to refresh your memory. With this blog post we want to give an introduction to Gaussian processes and make the mathematical intuition behind them more approachable.

Gaussian processes are a powerful tool in the machine learning toolbox. They allow us to make predictions about our data by incorporating prior knowledge. Their most obvious area of application is *fitting* a function to the data. This is called regression and is used, for example, in robotics or time series forecasting. But Gaussian processes are not limited to regression — they can also be extended to classification and clustering tasks. For a given set of training points, there are potentially infinitely many functions that fit the data. Gaussian processes offer an elegant solution to this problem by assigning a probability to each of these functions. The mean of this probability distribution then represents the most probable characterization of the data. Furthermore, using a probabilistic approach allows us to incorporate the confidence of the prediction into the regression result.

We will first explore the mathematical foundation that Gaussian processes are built on — we invite you to follow along using the interactive figures and hands-on examples. They help to explain the impact of individual components, and show the flexibility of Gaussian processes. After following this article we hope that you will have a visual intuition on how Gaussian processes work and how you can configure them for different types of data.

Multivariate Gaussian distributions
-----------------------------------

Before we can explore Gaussian processes, we need to understand the mathematical concepts they are based on. As the name suggests, the Gaussian distribution (which is often also referred to as *normal* distribution) is the basic building block of Gaussian processes. In particular, we are interested in the multivariate case of this distribution, where each random variable is distributed normally and their joint distribution is also Gaussian. The multivariate Gaussian distribution is defined by a mean vector $\mu$ and a covariance matrix $\Sigma$. You can see an interactive example of such distributions in [the figure below](#Multivariate).

The mean vector $\mu$ describes the expected value of the distribution. Each of its components describes the mean of the corresponding dimension. $\Sigma$ models the variance along each dimension and determines how the different random variables are correlated. The covariance matrix is always symmetric and positive semi-definite. The diagonal of $\Sigma$ consists of the variance $\sigma_i^2$ of the $i$-th random variable. And the off-diagonal elements $\sigma_{ij}$ describe the correlation between the $i$-th and $j$-th random variable.

$$X = \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{bmatrix} \sim \mathcal{N}(\mu, \Sigma)$$

We say $X$ follows a normal distribution. The covariance matrix $\Sigma$ describes the shape of the distribution.
It is defined in terms of the expected value $E$:

$$\Sigma = \text{Cov}(X_i, X_j) = E\left[ (X_i - \mu_i)(X_j - \mu_j)^T \right]$$

Visually, the distribution is centered around the mean and the covariance matrix defines its shape. The [following figure](#Multivariate) shows the influence of these parameters on a two-dimensional Gaussian distribution. The variances for each random variable are on the diagonal of the covariance matrix, while the other values show the covariance between them.

Gaussian distributions are widely used to model the real world. For example, we can employ them to describe errors of measurements or phenomena under the assumptions of the *central limit theorem*. (One of the implications of this theorem is that a collection of independent, identically distributed random variables with finite variance have a mean that is distributed normally. A good introduction to the central limit theorem is given by [this video](https://www.khanacademy.org/math/ap-statistics/sampling-distribution-ap/sampling-distribution-mean/v/central-limit-theorem) from [Khan Academy](https://www.khanacademy.org).) In the next section we will take a closer look at how to manipulate Gaussian distributions and extract useful information from them.

### Marginalization and Conditioning

Gaussian distributions have the nice algebraic property of being closed under conditioning and marginalization. Being closed under conditioning and marginalization means that the resulting distributions from these operations are also Gaussian, which makes many problems in statistics and machine learning tractable. In the following we will take a closer look at both of these operations, as they are the foundation for Gaussian processes.

*Marginalization* and *conditioning* both work on subsets of the original distribution and we will use the following notation:

$$P_{X,Y} = \begin{bmatrix} X \\ Y \end{bmatrix} \sim \mathcal{N}(\mu, \Sigma) = \mathcal{N}\left( \begin{bmatrix} \mu_X \\ \mu_Y \end{bmatrix}, \begin{bmatrix} \Sigma_{XX} & \Sigma_{XY} \\ \Sigma_{YX} & \Sigma_{YY} \end{bmatrix} \right)$$

with $X$ and $Y$ representing subsets of the original random variables.

Through *marginalization* we can extract partial information from multivariate probability distributions. In particular, given a normal probability distribution $P(X,Y)$ over vectors of random variables $X$ and $Y$, we can determine their marginalized probability distributions in the following way:

$$\begin{aligned} X &\sim \mathcal{N}(\mu_X, \Sigma_{XX}) \\ Y &\sim \mathcal{N}(\mu_Y, \Sigma_{YY}) \end{aligned}$$

The interpretation of this equation is that each partition $X$ and $Y$ only depends on its corresponding entries in $\mu$ and $\Sigma$. To marginalize out a random variable from a Gaussian distribution we can simply drop the variables from $\mu$ and $\Sigma$.
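To make this concrete, here is a minimal numpy sketch of marginalization (the mean and covariance values are made up purely for illustration): extracting the marginal of a multivariate Gaussian amounts to selecting the corresponding sub-blocks of $\mu$ and $\Sigma$, with no integral evaluated explicitly.

```python
import numpy as np

# A three-dimensional Gaussian over (X1, X2, Y), partitioned into X = (X1, X2) and Y.
mu = np.array([0.0, 1.0, -1.0])
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 2.0, 0.3],
                  [0.2, 0.3, 1.5]])

# Marginalizing out Y: keep only the entries of mu and Sigma that belong to X.
idx_X = [0, 1]
mu_X = mu[idx_X]                           # mean of the marginal X ~ N(mu_X, Sigma_XX)
Sigma_XX = Sigma[np.ix_(idx_X, idx_X)]     # covariance block of X

# Draws from the marginal distribution of X.
samples = np.random.multivariate_normal(mu_X, Sigma_XX, size=5)
```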
More generally, the marginal density of $X$ is obtained by integrating the joint density over all values of $Y$:

$$p_X(x) = \int_y p_{X,Y}(x,y)\,dy = \int_y p_{X|Y}(x|y)\, p_Y(y)\, dy$$

The way to interpret this equation is that if we are interested in the probability density of $X = x$, we need to consider all possible outcomes of $Y$ that can jointly lead to the result. (The corresponding [Wikipedia article](https://en.wikipedia.org/wiki/Marginal_distribution) has a good description of the marginal distribution, including several examples.)

Another important operation for Gaussian processes is *conditioning*. It is used to determine the probability of one variable depending on another variable. Similar to marginalization, this operation is also closed and yields a modified Gaussian distribution. This operation is the cornerstone of Gaussian processes since it allows Bayesian inference, which we will talk about in the [next section](#GaussianProcesses). Conditioning is defined by:

$$\begin{aligned} X|Y &\sim \mathcal{N}(\,\mu_X + \Sigma_{XY}\Sigma_{YY}^{-1}(Y - \mu_Y),\; \Sigma_{XX} - \Sigma_{XY}\Sigma_{YY}^{-1}\Sigma_{YX}\,) \\ Y|X &\sim \mathcal{N}(\,\mu_Y + \Sigma_{YX}\Sigma_{XX}^{-1}(X - \mu_X),\; \Sigma_{YY} - \Sigma_{YX}\Sigma_{XX}^{-1}\Sigma_{XY}\,) \end{aligned}$$

Note that the new mean only depends on the conditioned variable, while the covariance matrix is independent from this variable.

Now that we have worked through the necessary equations, we will think about how we can understand the two operations visually. While marginalization and conditioning can be applied to multivariate distributions of many dimensions, it makes sense to consider the two-dimensional case as shown in the [following figure](#MarginalizationConditioning). Marginalization can be seen as integrating along one of the dimensions of the Gaussian distribution, which is in line with the general definition of the marginal distribution. Conditioning also has a nice geometric interpretation — we can imagine it as making a cut through the multivariate distribution, yielding a new Gaussian distribution with fewer dimensions.

A bivariate normal distribution in the center. On the left you can see the result of marginalizing this distribution for $Y$, akin to integrating along the $X$ axis. On the right you can see the distribution conditioned on a given $X$, which is similar to a cut through the original distribution. The Gaussian distribution and the conditioned variable can be changed by dragging the handles.

Gaussian Processes
------------------

Now that we have recalled some of the basic properties of multivariate Gaussian distributions, we will combine them together to define Gaussian processes, and show how they can be used to tackle regression problems. First, we will move from the continuous view to the discrete representation of a function: rather than finding an implicit function, we are interested in predicting the function values at concrete points, which we call *test points* $X$. So how do we derive this functional view from the multivariate normal distributions that we have considered so far? Stochastic processes, such as Gaussian processes, are essentially a set of random variables. In addition, each of these random variables has a corresponding index $i$.
We will use this index to refer to the $i$-th dimension of our $n$-dimensional multivariate distributions. The [following figure](#DimensionSwap) shows an example of this for two dimensions:

Here, we have a two-dimensional normal distribution. Each dimension $x_i$ is assigned an index $i \in \{1, 2\}$. You can drag the handles to see how a particular sample (left) corresponds to functional values (right).

This representation also allows us to understand the connection between the covariance and the resulting values: the underlying Gaussian distribution has a positive covariance between $x_1$ and $x_2$ — this means that $x_2$ will increase as $x_1$ gets larger and vice versa. You can also drag the handles in the figure to the right and observe the probability of such a configuration in the figure to the left.

Now, the goal of Gaussian processes is to learn this underlying distribution from *training data*. Analogously to the test data $X$, we will denote the training data as $Y$. As we have mentioned before, the key idea of Gaussian processes is to model the underlying distribution of $X$ together with $Y$ as a multivariate normal distribution. That means that the joint probability distribution $P_{X,Y}$ spans the space of possible function values for the function that we want to predict. Please note that this joint distribution of test and training data has $|X| + |Y|$ dimensions.

In order to perform regression on the training data, we will treat this problem as *Bayesian inference*. The essential idea of Bayesian inference is to update the current hypothesis as new information becomes available. In the case of Gaussian processes, this information is the training data. Thus, we are interested in the conditional probability $P_{X|Y}$. Finally, we recall that Gaussian distributions are closed under conditioning — so $P_{X|Y}$ is also distributed normally.

Now that we have the basic framework of Gaussian processes together, there is only one thing missing: how do we set up this distribution and define the mean $\mu$ and the covariance matrix $\Sigma$? The covariance matrix $\Sigma$ is determined by its *covariance function* $k$, which is often also called the *kernel* of the Gaussian process. We will talk about this in detail in the next section. But before we come to this, let us reflect on how we can use multivariate Gaussian distributions to estimate function values. The [following figure](#PriorFigure) shows an example of this using ten test points at which we want to predict our function:

In Gaussian processes we treat each test point as a random variable. A multivariate Gaussian distribution has the same number of dimensions as the number of random variables. Since we want to predict the function values at $|X| = N$ test points, the corresponding multivariate Gaussian distribution is also $N$-dimensional. Making a prediction using a Gaussian process ultimately boils down to drawing samples from this distribution. We then interpret the $i$-th component of the resulting vector as the function value corresponding to the $i$-th test point.

### Kernels

Recall that in order to set up our distribution, we need to define $\mu$ and $\Sigma$. In Gaussian processes it is often assumed that $\mu = 0$, which simplifies the necessary equations for conditioning. We can always assume such a distribution, even if $\mu \neq 0$, and add $\mu$ back to the resulting function values after the prediction step.
This process is also called *centering* of the data. So configuring $\mu$ is straightforward — it gets more interesting when we look at the other parameter of the distribution.

The clever step of Gaussian processes is how we set up the covariance matrix $\Sigma$. The covariance matrix will not only describe the shape of our distribution, but ultimately determines the characteristics of the function that we want to predict. We generate the covariance matrix by evaluating the kernel $k$, which is often also called *covariance function*, pairwise on all the points. The kernel receives two points $t, t' \in \mathbb{R}^n$ as an input and returns a similarity measure between those points in the form of a scalar:

$$k: \mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}, \quad \Sigma = \text{Cov}(X, X') = k(t, t')$$

We evaluate this function for each pairwise combination of the test points to retrieve the covariance matrix. This step is also depicted in the [figure above](#PriorFigure). In order to get a better intuition for the role of the kernel, let’s think about what the entries in the covariance matrix describe. The entry $\Sigma_{ij}$ describes how much influence the $i$-th and $j$-th point have on each other. This follows from the definition of the multivariate Gaussian distribution, which states that $\Sigma_{ij}$ defines the correlation between the $i$-th and the $j$-th random variable. Since the kernel describes the similarity between the values of our function, it controls the possible shape that a fitted function can adopt. Note that when we choose a kernel, we need to make sure that the resulting matrix adheres to the properties of a covariance matrix.

Kernels are widely used in machine learning, for example in *support vector machines*. The reason for this is that they allow similarity measures that go far beyond the standard Euclidean distance ($L_2$-distance). Many of these kernels conceptually embed the input points into a higher dimensional space in which they then measure the similarity. (If the kernel follows Mercer’s theorem it can be used to define a Hilbert space. More information on this can be found on [Wikipedia](https://en.wikipedia.org/wiki/Kernel_method).) The [following figure](#MultipleKernels) shows examples of some common kernels for Gaussian processes. For each kernel, the covariance matrix has been created from $N = 25$ linearly-spaced values ranging from $[-5, 5]$. Each entry in the matrix shows the covariance between points in the range of $[0, 1]$.

This figure shows various kernels that can be used with Gaussian processes. Each kernel has different parameters, which can be changed by adjusting the according sliders. When grabbing a slider, information on how the current parameter influences the kernel will be shown on the right.

Kernels can be separated into *stationary* and *non-stationary* kernels. *Stationary* kernels, such as the RBF kernel or the periodic kernel, are functions invariant to translations, and the covariance of two points is only dependent on their relative position. *Non-stationary* kernels, such as the linear kernel, do not have this constraint and depend on an absolute location. The stationary nature of the RBF kernel can be observed in the banding around the diagonal of its covariance matrix (as shown in [this figure](#MultipleKernels)).
Increasing the length parameter increases the banding, as points further away from each other become more correlated. For the periodic kernel, we have an additional parameter $P$ that determines the periodicity, which controls the distance between each repetition of the function. In contrast, the parameter $c$ of the linear kernel allows us to change the point on which all functions hinge. There are many more kernels that can describe different classes of functions, which can be used to model the desired shape of the function. A good overview of different kernels is given by Duvenaud. It is also possible to combine several kernels — but we will get to this later.

### Prior Distribution

We will now shift our focus back to the original task of regression. As we have mentioned earlier, Gaussian processes define a probability distribution over possible functions. In [this figure above](#DimensionSwap), we show this connection: each sample of our multivariate normal distribution represents one realization of our function values. Because this distribution is a multivariate Gaussian distribution, the distribution of functions is normal. Recall that we usually assume $\mu = 0$.

For now, let’s consider the case where we have not yet observed any training data. In the context of Bayesian inference, this is called the *prior* distribution $P_X$. If we have not yet observed any training examples, this distribution revolves around $\mu = 0$, according to our original assumption. The prior distribution will have the same dimensionality as the number of test points $N = |X|$. We will use the kernel to set up the covariance matrix, which has the dimensions $N \times N$.

In the previous section we have looked at examples of different kernels. The kernel is used to define the entries of the covariance matrix. Consequently, the covariance matrix determines which type of functions from the space of all possible functions are more probable. As the prior distribution does not yet contain any additional information, it is perfect to visualize the influence of the kernel on the distribution of functions. The [following figure](#Prior) shows samples of potential functions from prior distributions that were created using different kernels:

Clicking on the graph results in continuous samples drawn from a Gaussian process using the selected kernel. After each draw, the previous sample fades into the background. Over time, it is possible to see that functions are distributed normally around the mean $\mu$.

Adjusting the parameters allows you to control the shape of the resulting functions. This also varies the confidence of the prediction. When decreasing the variance $\sigma$, a common parameter for all kernels, sampled functions are more concentrated around the mean $\mu$. For the *Linear* kernel, setting the variance $\sigma_b = 0$ results in a set of functions constrained to perfectly intersect the offset point $c$. If we set $\sigma_b = 0.2$ we can model uncertainty, resulting in functions that pass close to $c$.

### Posterior Distribution

So what happens if we observe training data? Let’s get back to the model of Bayesian inference, which states that we can incorporate this additional information into our model, yielding the *posterior* distribution $P_{X|Y}$. We will now take a closer look at how to do this for Gaussian processes.

First, we form the joint distribution $P_{X,Y}$ between the test points $X$ and the training points $Y$.
The result is a multivariate Gaussian distribution with dimensions $|Y| + |X|$. As you can see in the [figure below](#PosteriorFigure), we concatenate the training and the test points to compute the corresponding covariance matrix.

For the next step we need one operation on Gaussian distributions that we have defined earlier. Using *conditioning* we can find $P_{X|Y}$ from $P_{X,Y}$. The dimension of this new distribution matches the number of test points $N$, and the distribution is also normal. It is important to note that conditioning leads to derived versions of the mean and the standard deviation: $X|Y \sim \mathcal{N}(\mu', \Sigma')$. More details can be found in the [related section](#MargCond) on conditioning multivariate Gaussian distributions. The intuition behind this step is that the training points constrain the set of functions to those that pass through the training points.

As mentioned before, the conditional distribution $P_{X|Y}$ forces the set of functions to precisely pass through each training point. In many cases this can lead to fitted functions that are unnecessarily complex. Also, up until now, we have considered the training points $Y$ to be perfect measurements. But in real-world scenarios this is an unrealistic assumption, since most of our data is afflicted with measurement errors or uncertainty. Gaussian processes offer a simple solution to this problem by modeling the error of the measurements. For this, we need to add an error term $\epsilon \sim \mathcal{N}(0, \psi^2)$ to each of our training points:

$$Y = f(X) + \epsilon$$

We do this by slightly modifying the setup of the joint distribution $P_{X,Y}$:

$$P_{X,Y} = \begin{bmatrix} X \\ Y \end{bmatrix} \sim \mathcal{N}(0, \Sigma) = \mathcal{N}\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} \Sigma_{XX} & \Sigma_{XY} \\ \Sigma_{YX} & \Sigma_{YY} + \psi^2 I \end{bmatrix} \right)$$

Again, we can use conditioning to derive the predictive distribution $P_{X|Y}$. In this formulation, $\psi$ is an additional parameter of our model.

Analogous to the prior distribution, we could obtain a prediction for our function values by sampling from this distribution. But, since sampling involves randomness, the resulting fit to the data would not be deterministic and our prediction could end up being an outlier. In order to make a more meaningful prediction we can use the other basic operation of Gaussian distributions. Through the *marginalization* of each random variable, we can extract the respective mean function value $\mu'_i$ and standard deviation $\sigma'_i = \sqrt{\Sigma'_{ii}}$ for the $i$-th test point. In contrast to the prior distribution, where we set the mean to $\mu = 0$, the result of conditioning the joint distribution of test and training data will most likely have a non-zero mean $\mu' \neq 0$. Extracting $\mu'$ and $\sigma'$ does not only lead to a more meaningful prediction, it also allows us to make a statement about the confidence of the prediction.

The [following figure](#Posterior) shows an example of the conditional distribution. At first, no training points have been observed. Accordingly, the mean prediction remains at $0$ and the standard deviation is the same for each test point.
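Before returning to the interactive figure, here is a minimal numpy sketch of the posterior computation just derived. It is only an illustration: the RBF kernel, the training points, and the noise level $\psi = 0.1$ are assumptions made for this example, not values taken from the figures.

```python
import numpy as np

def rbf(a, b, variance=1.0, lengthscale=1.0):
    # Squared-exponential (RBF) kernel, evaluated pairwise between two sets of 1-D points.
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

X_train = np.array([-4.0, -1.5, 0.0, 2.0])   # hypothetical training inputs
Y_train = np.sin(X_train)                    # hypothetical observations
X_test = np.linspace(-5.0, 5.0, 100)         # test points where we want predictions
psi = 0.1                                    # assumed observation noise

K_yy = rbf(X_train, X_train) + psi**2 * np.eye(len(X_train))  # Sigma_YY + psi^2 I
K_xy = rbf(X_test, X_train)                                   # Sigma_XY
K_xx = rbf(X_test, X_test)                                    # Sigma_XX

# Conditioning the joint distribution on the observations (zero-mean prior).
mu_post = K_xy @ np.linalg.solve(K_yy, Y_train)                # mu'
cov_post = K_xx - K_xy @ np.linalg.solve(K_yy, K_xy.T)         # Sigma'
std_post = np.sqrt(np.clip(np.diag(cov_post), 0.0, None))      # sigma'_i = sqrt(Sigma'_ii)
```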
By hovering over the covariance matrix you can see the influence of each point on the current test point. As long as no training points have been observed, the influence of neighboring points is limited locally. The training points can be activated by clicking on them, which leads to a constrained distribution. This change is reflected in the entries of the covariance matrix, and leads to an adjustment of the mean and the standard deviation of the predicted function. As we would expect, the uncertainty of the prediction is small in regions close to the training data and grows as we move further away from those points.

In the constrained covariance matrix, we can see that the correlation of neighbouring points is affected by the training data. If a predicted point lies on the training data, there is no correlation with other points. Therefore, the function must pass directly through it. Predicted values further away are also affected by the training data — proportional to their distance.

### Combining different kernels

As described earlier, the power of Gaussian processes lies in the choice of the kernel function. This property allows experts to introduce domain knowledge into the process and lends Gaussian processes their flexibility to capture trends in the training data. For example, by choosing a suitable bandwidth for the RBF kernel, we can control how smooth the resulting function will be.

A big benefit that kernels provide is that they can be combined together, resulting in a more specialized kernel. The decision which kernel to use is highly dependent on prior knowledge about the data, e.g. if certain characteristics are expected. Examples for this would be stationary nature, or global trends and patterns. As introduced in the [section on kernels](#Kernels), stationary means that a kernel is translation invariant and therefore not dependent on the index $i$. This also means that we cannot model global trends using a strictly stationary kernel.

Remember that the covariance matrix of Gaussian processes has to be positive semi-definite. When choosing the optimal kernel combinations, all methods that preserve this property are allowed. The most common kernel combinations would be addition and multiplication. Let’s consider two kernels, a linear kernel $k_{\text{lin}}$ and a periodic kernel $k_{\text{per}}$, for example. This is how we would multiply the two:

$$k^{\ast}(t, t') = k_{\text{lin}}(t, t') \cdot k_{\text{per}}(t, t')$$

However, combinations are not limited to the above example, and there are more possibilities such as concatenation or composition with a function. To show the impact of a kernel combination and how it might retain qualitative features of the individual kernels, take a look at the [figure below](#KernelCombinationsStatic). If we add a periodic and a linear kernel, the global trend of the linear kernel is incorporated into the combined kernel. The result is a periodic function that follows a linear trend. When combining the same kernels through multiplication instead, the result is a periodic function with a linearly growing amplitude away from the linear kernel parameter $c$.

If we draw samples from a combined linear and periodic kernel, we can observe the different retained characteristics in the new sample. Addition results in a periodic function with a global trend, while the multiplication increases the periodic amplitude outwards.
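As a small illustration of these two combinations (with made-up parameter values, mirroring the kernels discussed above), the following sketch builds the added and multiplied covariance matrices and draws function samples from each:

```python
import numpy as np

def linear_kernel(a, b, sigma_b=0.2, sigma=1.0, c=0.0):
    # k_lin(t, t') = sigma_b^2 + sigma^2 (t - c)(t' - c)
    return sigma_b**2 + sigma**2 * np.outer(a - c, b - c)

def periodic_kernel(a, b, sigma=1.0, length=1.0, p=2.0):
    # k_per(t, t') = sigma^2 exp(-2 sin^2(pi |t - t'| / p) / length^2)
    d = np.abs(a[:, None] - b[None, :])
    return sigma**2 * np.exp(-2.0 * np.sin(np.pi * d / p) ** 2 / length ** 2)

t = np.linspace(-5.0, 5.0, 50)
jitter = 1e-8 * np.eye(len(t))   # tiny diagonal term for numerical stability

K_add = linear_kernel(t, t) + periodic_kernel(t, t) + jitter   # periodic with a linear trend
K_mul = linear_kernel(t, t) * periodic_kernel(t, t) + jitter   # amplitude grows away from c

# Both combinations are still valid covariance matrices, so we can sample functions from them.
samples_add = np.random.multivariate_normal(np.zeros(len(t)), K_add, size=3)
samples_mul = np.random.multivariate_normal(np.zeros(len(t)), K_mul, size=3)
```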
Knowing more about how kernel combinations influence the shape of the resulting distribution, we can move on to a more complex example. In the [figure below](#KernelCombinations), the observed training data has an ascending trend with a periodic deviation. Using only a linear kernel, we can mimic a normal linear regression of the points. At first glance, the RBF kernel accurately approximates the points. But since the RBF kernel is stationary it will always return to $\mu = 0$ in regions further away from observed training data. This decreases the accuracy for predictions that reach further into the past or the future. An improved model can be created by combining the individual kernels through addition, which maintains both the periodic nature and the global ascending trend of the data. This procedure can be used, for example, in the analysis of weather data.

Using the checkboxes, different kernels can be combined to form a new Gaussian process. Only by using a combination of kernels is it possible to capture the characteristics of more complex training data.

As discussed in the [section about GPs](#GaussianProcesses), a Gaussian process can model uncertain observations. This can be seen when only selecting the linear kernel, as it allows us to perform linear regression even if more than two points have been observed, and not all functions have to pass directly through the observed training data.

Conclusion
----------

With this article, you should have obtained an overview of Gaussian processes, and developed a deeper understanding of how they work. As we have seen, Gaussian processes offer a flexible framework for regression and several extensions exist that make them even more versatile. For instance, sometimes it might not be possible to describe the kernel in simple terms. To overcome this challenge, learning specialized kernel functions from the underlying data, for example by using deep learning, is an area of ongoing research. Furthermore, links between Bayesian inference, Gaussian processes and deep learning have been described in several papers.

Even though we mostly talk about Gaussian processes in the context of regression, they can be adapted for different purposes, e.g. *model-peeling* and hypothesis testing. By comparing different kernels on the dataset, domain experts can introduce additional knowledge through appropriate combination and parameterization of the kernel. If we have sparked your interest, we have compiled a list of further [blog posts](#FurtherReading) on the topic of Gaussian processes. In addition, we have linked two [Python notebooks](#FurtherReading) that will give you some hands-on experience and help you to get started right away.
ffe6238e-7acb-4e7b-98f3-ef54c80ce221
trentmkelly/LessWrong-43k
LessWrong
Evidence for surprising ease of de-nuclearization http://yglesias.thinkprogress.org/2010/12/the-symbolic-power-of-nuclear-deterrents/
b013dd82-72d4-4ecc-a535-c3eee254dca4
StampyAI/alignment-research-dataset/arxiv
Arxiv
Bad Universal Priors and Notions of Optimality

1 Introduction
--------------

The choice of the universal Turing machine (UTM) has been a big open question in algorithmic information theory for a long time. While attempts have been made (Müller, [2010](#bib.bib13)), no answer is in sight. The *Kolmogorov complexity* of a string, the length of the shortest program that prints this string, depends on this choice. However, there are *invariance theorems* (Li and Vitányi, [2008](#bib.bib12), Thm. 2.1.1 & Thm. 3.1.1) which state that changing the UTM changes Kolmogorov complexity only by a constant. When using the *universal prior* $M$ introduced by Solomonoff ([1964](#bib.bib18), [1978](#bib.bib19)) to predict any deterministic computable binary sequence, the number of wrong predictions is bounded by (a multiple of) the Kolmogorov complexity of the sequence (Hutter, [2001](#bib.bib2)). Due to the invariance theorem, changing the UTM changes the number of errors only by a constant. In this sense, compression and prediction work for any choice of UTM.

Hutter ([2000](#bib.bib1), [2005](#bib.bib4)) defines the universally intelligent agent AIXI, which is targeted at the *general reinforcement learning problem* (Sutton and Barto, [1998](#bib.bib22)). It extends Solomonoff induction to the interactive setting. AIXI is a Bayesian agent, using a universal prior on the set of all computable environments; actions are taken according to the maximization of expected future discounted rewards. Closely related is the intelligence measure defined by Legg and Hutter ([2007](#bib.bib9)), a mathematical performance measure for general reinforcement learning agents: defined as the discounted rewards achieved across all computable environments, weighted by the universal prior.

There are several known positive results about AIXI. It has been proven to be *Pareto optimal* (Hutter, [2002](#bib.bib3), Thm. 2 & Thm. 6), *balanced Pareto optimal* (Hutter, [2002](#bib.bib3), Thm. 3), and has maximal Legg-Hutter intelligence. Furthermore, AIXI asymptotically learns to predict the environment perfectly and with a small total number of errors analogously to Solomonoff induction (Hutter, [2005](#bib.bib4), Thm. 5.36), but only *on policy*: AIXI learns to correctly predict the value (expected future rewards) of its own actions, but generally not the value of counterfactual actions that it does not take. Orseau ([2010](#bib.bib14), [2013](#bib.bib15)) showed that AIXI does not achieve asymptotic optimality in all computable environments. So instead, we may ask the following weaker questions. Does AIXI succeed in every partially observable Markov decision process (POMDP)/(ergodic) Markov decision process (MDP)/bandit problem/sequence prediction task?

In this paper we show that without further assumptions on the UTM, we cannot answer any of the preceding questions in the affirmative. More generally, there can be no invariance theorem for AIXI. As a reinforcement learning agent, AIXI has to balance between exploration and exploitation. Acting according to any (universal) prior does not lead to enough exploration, and the bias of AIXI’s prior is retained indefinitely. For bad priors this can cause serious malfunctions. However, this problem can be alleviated by adding an extra exploration component to AIXI (Lattimore, [2013](#bib.bib6), Ch. 5), similar to knowledge-seeking agents (Orseau, [2014](#bib.bib16); Orseau et al., [2013](#bib.bib17)), or by the use of optimism (Sunehag and Hutter, [2012](#bib.bib20)).
In [Section 3](#S3) we give two examples of universal priors that cause AIXI to misbehave drastically. In case of a finite lifetime, the *indifference prior* makes all actions equally preferable to AIXI ([Section 3.1](#S3.SS1)). Furthermore, for any computable policy $\pi$ the *dogmatic prior* makes AIXI stick to the policy $\pi$ as long as expected future rewards do not fall too close to zero ([Section 3.2](#S3.SS2)). This has profound implications. We show in [Section 4](#S4) that if we measure Legg-Hutter intelligence with respect to a *different* universal prior, AIXI scores arbitrarily close to the minimal intelligence while any computable policy can score arbitrarily close to the maximal intelligence. This makes the Legg-Hutter intelligence score and thus balanced Pareto optimality relative to the choice of the UTM. Moreover, in [Section 5](#S5) we show that in the class of all computable environments, *every* policy is Pareto optimal. This undermines all existing optimality results for AIXI. We discuss the implications of these results for the quest for a *natural* universal Turing machine and optimality notions of general reinforcement learners in [Section 6](#S6). A list of notation is provided in [Appendix A](#A1).

2 Preliminaries and Notation
----------------------------

The set $\mathcal{X}^{*} := \bigcup_{n=0}^{\infty} \mathcal{X}^{n}$ is the set of all finite strings over the alphabet $\mathcal{X}$, the set $\mathcal{X}^{\infty}$ is the set of all infinite strings over the alphabet $\mathcal{X}$, and the set $\mathcal{X}^{\sharp} := \mathcal{X}^{*} \cup \mathcal{X}^{\infty}$ is their union. The empty string is denoted by $\epsilon$, not to be confused with the small positive real number $\varepsilon$. Given a string $x \in \mathcal{X}^{\sharp}$, we denote its length by $|x|$.
For a (finite or infinite) string $x$ of length $\geq k$, we denote with $x_{1:k}$ the first $k$ characters of $x$, and with $x_{<k}$ the first $k-1$ characters of $x$. The notation $x_{1:\infty}$ stresses that $x$ is an infinite string. We write $x \sqsubseteq y$ iff $x$ is a prefix of $y$, i.e., $x = y_{1:|x|}$.

In reinforcement learning, the agent interacts with an environment in cycles: at time step $t$ the agent chooses an *action* $a_t \in \mathcal{A}$ and receives a *percept* $e_t = (o_t, r_t) \in \mathcal{E}$ consisting of an *observation* $o_t \in \mathcal{O}$ and a real-valued *reward* $r_t \in \mathbb{R}$; the cycle then repeats for $t+1$. A *history* is an element of $(\mathcal{A} \times \mathcal{E})^{*}$. We use $\text{æ} \in \mathcal{A} \times \mathcal{E}$ to denote one interaction cycle, and $\text{æ}_{<t}$ to denote a history of length $t-1$. The goal in reinforcement learning is to maximize total discounted rewards. A *policy* is a function $\pi: (\mathcal{A} \times \mathcal{E})^{*} \to \mathcal{A}$ mapping each history to the action taken after seeing this history. A history $\text{æ}_{<t}$ is *consistent with policy $\pi$* iff $\pi(\text{æ}_{<k}) = a_k$ for all $k < t$.
A function $f: \mathcal{X}^{*} \to \mathbb{R}$ is *lower semicomputable* iff the set $\{(x, q) \in \mathcal{X}^{*} \times \mathbb{Q} \mid f(x) > q\}$ is recursively enumerable. A *conditional semimeasure* $\nu$ is a probability measure over finite and infinite strings of percepts given actions as input where $\nu(e_{<t} \parallel a_{1:\infty})$ denotes the probability of receiving percepts $e_{<t}$ when taking actions $a_{1:\infty}$. Formally, $\nu$ maps $\mathcal{A}^{\infty}$ to a probability distribution over $\mathcal{E}^{\sharp}$. Thus the environment might assign positive probability to finite percept sequences. One possible interpretation for this is that there is a non-zero chance that the environment ends: it simply does not produce a new percept. Another possible interpretation is that there is a non-zero chance of death for the agent. However, nothing hinges on the interpretation; the use of (unnormalized) semimeasures is primarily a technical trick. The conditional semimeasure $\nu$ is *chronological* iff the first $t-1$ percepts are independent of future actions $a_k$ for $k \geq t$, i.e., $\nu(e_{<t} \parallel a_{1:k}) = \nu(e_{<t} \parallel a_{<t})$. Despite their name, conditional semimeasures do not denote a conditional probability; $\nu$ is not a joint probability distribution over actions and percepts. We model environments as lower semicomputable chronological conditional semimeasures (LSCCCS) (Hutter, [2005](#bib.bib4), Sec. 5.1.1); the class of all such environments is denoted as $\mathcal{M}^{\mathrm{CCS}}_{\mathrm{LSC}}$. We also use the larger set of all chronological conditional semimeasures $\mathcal{M}^{\mathrm{CCS}}$.
A *universal prior* is a function $w: \mathcal{M}^{\mathrm{CCS}}_{\mathrm{LSC}} \to [0, 1]$ such that $w_\nu := w(\nu) > 0$ for all $\nu \in \mathcal{M}^{\mathrm{CCS}}_{\mathrm{LSC}}$ and $\sum_{\nu \in \mathcal{M}^{\mathrm{CCS}}_{\mathrm{LSC}}} w_\nu \leq 1$. A universal prior $w$ gives rise to a *universal mixture*,

$$\xi(e_{<t} \parallel a_{<t}) := \sum_{\nu \in \mathcal{M}^{\mathrm{CCS}}_{\mathrm{LSC}}} w_\nu \, \nu(e_{<t} \parallel a_{<t}). \tag{1}$$

If the universal prior is lower semicomputable, then the universal mixture $\xi$ is an LSCCCS, i.e., $\xi \in \mathcal{M}^{\mathrm{CCS}}_{\mathrm{LSC}}$. From a given universal monotone Turing machine $U$ (Li and Vitányi, [2008](#bib.bib12), Sec. 4.5.2) we can get a universal mixture $\xi$ in two ways. First, we can use ([1](#S2.E1)) with the prior given by $w_\nu := 2^{-K(\nu)}$, where $K(\nu)$ is the *Kolmogorov complexity* of $\nu$’s index in the enumeration of all LSCCCSs (Li and Vitányi, [2008](#bib.bib12), Eq. 4.11).
Second, we can define it as the probability that the universal monotone Turing machine $U$ generates $e_{<t}$ when fed with $a_{<t}$ and uniformly random bits:

$$\xi(e_{<t} \parallel a_{<t}) := \sum_{p:\; e_{<t} \sqsubseteq U(p,\, a_{<t})} 2^{-|p|} \tag{2}$$

Both definitions are equivalent, but not necessarily equal (Wood et al., [2011](#bib.bib23), Lem. 10 & Lem. 13).

###### Lemma 1 (Mixing Mixtures).

Let $q, q' \in \mathbb{Q}$ such that $q > 0$, $q' \geq 0$, and $q + q' \leq 1$. Let $w$ be any lower semicomputable universal prior, let $\xi$ be the universal mixture for $w$, and let $\rho$ be an LSCCCS. Then $\xi' := q\xi + q'\rho$ is an LSCCCS and a universal mixture.

###### Proof.

$\xi'$ is given by the universal prior $w'$ with $w' := qw + q'\mathbb{1}_{\rho}$. ∎

Throughout this paper, we make the following assumptions.

###### Assumption 2.

* (a) Rewards are bounded between $0$ and $1$.
* (b) The set of actions $\mathcal{A}$ and the set of percepts $\mathcal{E}$ are both finite.

We fix a *discount function* $\gamma: \mathbb{N} \to \mathbb{R}$ with $\gamma_t := \gamma(t) \geq 0$ and $\sum_{t=1}^{\infty} \gamma_t < \infty$.
The *discount normalization factor* is defined as $\Gamma_t := \sum_{i=t}^{\infty} \gamma_i$. There is no requirement that $\gamma_t > 0$ or $\Gamma_t > 0$. If $m := \min\{t \mid \Gamma_{t+1} = 0\}$ exists, we say the agent has a *finite lifetime $m$* and does not care what happens afterwards.

###### Definition 3 (Value Function).

The *value* of a policy $\pi$ in an environment $\nu$ given history $\text{æ}_{<t}$ is defined as $V^{\pi}_{\nu}(\text{æ}_{<t}) := V^{\pi}_{\nu}(\text{æ}_{<t}\, \pi(\text{æ}_{<t}))$ and

$$V^{\pi}_{\nu}(\text{æ}_{<t} a_t) := \frac{1}{\Gamma_t} \sum_{e_t \in \mathcal{E}} \big( \gamma_t r_t + \Gamma_{t+1} V^{\pi}_{\nu}(\text{æ}_{1:t}) \big)\, \nu(e_{1:t} \mid e_{<t} \parallel a_{1:t})$$

if $\Gamma_t > 0$, and $V^{\pi}_{\nu}(\text{æ}_{<t}) := 0$ if $\Gamma_t = 0$. The *optimal value* is defined as $V^{*}_{\nu}(h) := \sup_{\pi} V^{\pi}_{\nu}(h)$.

###### Definition 4 (Optimal Policy (Hutter, [2005](#bib.bib4), Def. 5.19 & 5.30)).

A policy $\pi$ is *optimal in environment $\nu$* ($\nu$-*optimal*) iff for all histories $\pi$ attains the optimal value: $V^{\pi}_{\nu}(h) = V^{*}_{\nu}(h)$ for all $h \in (\mathcal{A} \times \mathcal{E})^{*}$. The action $\pi(h)$ is an *optimal action* iff $\pi(h) = \pi^{*}_{\nu}(h)$ for some $\nu$-optimal policy $\pi^{*}_{\nu}$.

Formally, AIXI is defined as a policy $\pi^{*}_{\xi}$ that is optimal in the universal mixture $\xi$. Since there can be more than one $\xi$-optimal policy, this definition is not unique. If there are two optimal actions $\alpha \neq \beta \in \mathcal{A}$, we call it an *argmax tie*. Which action we take in case of a tie (how we break the tie) is irrelevant and can be arbitrary. We assumed that the discount function is summable, rewards are bounded ([Assumption 2a](#S2.I1.i1)), and the action and percept spaces are both finite ([Assumption 2b](#S2.I1.i2)).
Therefore an optimal policy exists for every environment $\nu \in \mathcal{M}^{\mathrm{CCS}}_{\mathrm{LSC}}$ (Lattimore and Hutter, 2014, Thm. 10), in particular for any universal mixture $\xi$.

###### Lemma (Discounted Values (Lattimore, 2013, Lem. 2.5)).

If two policies $\pi_1$ and $\pi_2$ coincide for the first $k$ steps ($\pi_1(æ_{<t}) = \pi_2(æ_{<t})$ for all histories $æ_{<t}$ consistent with $\pi_1$ and $t \leq k$), then

$$\big| V^{\pi_1}_\nu(\epsilon) - V^{\pi_2}_\nu(\epsilon) \big| \;\leq\; \frac{\Gamma_{k+1}}{\Gamma_1} \quad \text{for all environments } \nu \in \mathcal{M}^{\mathrm{CCS}}.$$

###### Proof.

Since the policies $\pi_1$ and $\pi_2$ coincide for the first $k$ steps, they produce the same expected rewards for the first $k$ steps.
Therefore

$$\begin{aligned}
\big| V^{\pi_1}_\nu(\epsilon) - V^{\pi_2}_\nu(\epsilon) \big|
&= \left| \sum_{e_{1:k}} \frac{\Gamma_{k+1}}{\Gamma_1} \big( V^{\pi_1}_\nu(æ_{1:k}) - V^{\pi_2}_\nu(æ_{1:k}) \big)\, \nu(e_{1:k} \parallel a_{1:k}) \right| \\
&\leq \sum_{e_{1:k}} \frac{\Gamma_{k+1}}{\Gamma_1} \big| V^{\pi_1}_\nu(æ_{1:k}) - V^{\pi_2}_\nu(æ_{1:k}) \big|\, \nu(e_{1:k} \parallel a_{1:k}) \;\leq\; \frac{\Gamma_{k+1}}{\Gamma_1},
\end{aligned}$$

where $a_t := \pi_1(æ_{<t}) = \pi_2(æ_{<t})$ for all $t \leq k$. The last inequality follows since $\nu$ is a semimeasure, $0 \leq V^\pi_\nu \leq 1$, and hence $|V^{\pi_1}_\nu(æ_{1:k}) - V^{\pi_2}_\nu(æ_{1:k})| \leq 1$. ∎

3 Bad Universal Priors
----------------------

### 3.1 The Indifference Prior

In this section we consider AIXI with a finite lifetime $m$, i.e., $\Gamma_{m+1} = 0$. The following theorem constructs the *indifference prior*, a universal prior $\xi'$ that causes argmax ties for the first $m$ steps. Since we use a discount function that only cares about the first $m$ steps, all policies are $\xi'$-optimal policies. Thus AIXI's behavior only depends on how we break argmax ties.

###### Theorem (Indifference Prior).

If there is an $m$ such that $\Gamma_{m+1} = 0$, then there is a universal mixture $\xi'$ such that all policies are $\xi'$-optimal.

###### Proof.

First, we assume that the action space is binary, $\mathcal{A} = \{0, 1\}$.
Let $U$ be the reference UTM and define the UTM $U'$ by

$$U'(s_{1:m} p, a_{1:t}) \;:=\; U(p, a_{1:t} \operatorname{xor} s_{1:t}),$$

where $s_{1:m}$ is a binary string of length $m$ and $s_k := 0$ for $k > m$. ($U'$ has no programs of length $\leq m$.) Let $\xi'$ be the universal mixture given by $U'$ according to (2). Then

$$\begin{aligned}
\xi'(e_{1:m} \parallel a_{1:m})
&= \sum_{p :\, e_{1:m} \sqsubseteq U'(p,\, a_{1:m})} 2^{-|p|} \\
&= \sum_{s_{1:m} p' :\, e_{1:m} \sqsubseteq U'(s_{1:m} p',\, a_{1:m})} 2^{-m-|p'|} \\
&= \sum_{s_{1:m}} \;\; \sum_{p' :\, e_{1:m} \sqsubseteq U(p',\, a_{1:m} \operatorname{xor} s_{1:m})} 2^{-m-|p'|} \\
&= \sum_{s_{1:m}} \;\; \sum_{p' :\, e_{1:m} \sqsubseteq U(p',\, s_{1:m})} 2^{-m-|p'|},
\end{aligned}$$

which is independent of $a_{1:m}$. Hence the first $m$ percepts are independent of the first $m$ actions. But the percepts' rewards after time step $m$ do not matter since $\Gamma_{m+1} = 0$ (Section 2). Because the environment is chronological, the value function must be independent of all actions. Thus every policy is $\xi'$-optimal.

For finite action spaces $\mathcal{A}$ with more than 2 elements, the proof works analogously by making $\mathcal{A}$ a cyclic group and using the group operation instead of $\operatorname{xor}$. ∎

The choice of $U'$ in the proof of Section 3.1 is *unnatural* since its shortest program has length greater than $m$. Moreover, the choice of $U'$ depends on $m$. If we increase AIXI's lifetime while fixing the UTM $U'$, Section 3.1 no longer holds.
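The key step of the proof above — that averaging over all masks $s_{1:m}$ makes the mixture independent of the actions — can be checked on a toy scale. The following Python sketch is our illustration only (the toy percept rule and all names are made up, and the true universal mixture is of course incomputable): it mixes a fixed deterministic map over all $2^m$ masks with equal weight and confirms that every action sequence induces the same percept distribution.

```python
import itertools

m = 3  # toy horizon
actions = list(itertools.product([0, 1], repeat=m))
masks = list(itertools.product([0, 1], repeat=m))

def toy_env(bits):
    """Stand-in for the reference machine's response to a (masked) action sequence."""
    # hypothetical deterministic rule; the argument works for any such rule
    return (sum(bits) % 2, bits[0] & bits[-1])

def mixture(a):
    """Mix over all masks s with equal weight 2^-m, mimicking the xor construction."""
    dist = {}
    for s in masks:
        e = toy_env(tuple(ai ^ si for ai, si in zip(a, s)))
        dist[e] = dist.get(e, 0.0) + 2.0 ** (-m)
    return dist

# Every action sequence induces exactly the same mixture over percepts,
# because {a xor s : s in {0,1}^m} enumerates all of {0,1}^m for any fixed a.
reference = mixture(actions[0])
assert all(mixture(a) == reference for a in actions)
print(reference)
```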
For Solomonoff induction, there is an analogous problem: when using Solomonoff's prior $M$ to predict a deterministic binary sequence $x$, we make at most $K(x)$ errors. In case the shortest program has length $> m$, there is no guarantee that we make fewer than $m$ errors.

### 3.2 The Dogmatic Prior

In this section we define a universal prior that assigns very high probability of going to hell (reward 0 forever) if we deviate from a given computable policy $\pi$. For a Bayesian agent like AIXI, it is thus only worth deviating from the policy $\pi$ if the agent thinks that the prospects of following $\pi$ are very poor already. We call this prior the *dogmatic prior*, because the fear of going to hell makes AIXI conform to any arbitrary 'dogmatic ideology' $\pi$. AIXI will only break out if it expects $\pi$ to give very low future payoff; in that case the agent does not have much to lose.

###### Theorem (Dogmatic Prior).

Let $\pi$ be any computable policy, let $\xi$ be any universal mixture, and let $\varepsilon > 0$. There is a universal mixture $\xi'$ such that for any history $h$ consistent with $\pi$ and $V^\pi_\xi(h) > \varepsilon$, the action $\pi(h)$ is the unique $\xi'$-optimal action.

The proof proceeds by constructing a universal mixture that assigns disproportionately high probability to an environment $\nu$ that sends any policy deviating from $\pi$ to hell. Importantly, the environment $\nu$ produces observations according to the universal mixture $\xi$. Therefore $\nu$ is indistinguishable from $\xi$ on the policy $\pi$, so the posterior belief in $\nu$ is equal to the prior belief in $\nu$.

###### Proof.

We assume $(0, 0) \in \mathcal{E}$.
Let $\pi$ be any computable policy and define

$$\nu(e_{1:t} \parallel a_{1:t}) := \begin{cases}
\xi(e_{1:t} \parallel a_{1:t}), & \text{if } a_k = \pi(æ_{<k}) \;\forall k \leq t, \\[2pt]
\xi(e_{<k} \parallel a_{<k}), & \text{if } k := \min\{i \mid a_i \neq \pi(æ_{<i})\} \text{ exists and } e_i = (0,0) \;\forall i \in \{k, \ldots, t\}, \\[2pt]
0, & \text{otherwise.}
\end{cases}$$

The environment $\nu$ mimics the universal environment $\xi$ until it receives an action that the policy $\pi$ would not take. From then on, it provides rewards 0. Since $\xi$ is a LSCCCS and $\pi$ is a computable policy, we have that $\nu \in \mathcal{M}^{\mathrm{CCS}}_{\mathrm{LSC}}$. Without loss of generality we assume that $\varepsilon$ is computable, otherwise we make it slightly smaller. Thus $\xi' := \tfrac{1}{2}\nu + \tfrac{\varepsilon}{2}\xi$ is a universal mixture according to Section 2.
Let $h \in (\mathcal{A} \times \mathcal{E})^*$ be any history consistent with $\pi$ such that $V^\pi_\xi(h) > \varepsilon$. In the following, we use the shorthand notation $\rho(h) := \rho(e_{1:t} \parallel a_{1:t})$ for a conditional semimeasure $\rho$ and $h =: æ_{1:t}$. Since $\nu$ gives observations and rewards according to $\xi$, we have $\nu(h) = \xi(h)$, and thus the posterior weight $w_\nu(h)$ of $\nu$ in $V^\pi_{\xi'}(h)$ is constant while following $\pi$:

$$\frac{w_\nu(h)}{w_\nu} := \frac{\nu(h)}{\xi'(h)} = \frac{\xi(h)}{\xi'(h)} = \frac{\xi(h)}{\frac{1}{2}\nu(h) + \frac{\varepsilon}{2}\xi(h)} = \frac{\xi(h)}{\frac{1}{2}\xi(h) + \frac{\varepsilon}{2}\xi(h)} = \frac{2}{1+\varepsilon}.$$

Therefore linearity of $V^{\tilde\pi}_\nu$ in $\nu$ (Hutter, 2005, Thm. 5.31, proved in Appendix B) implies that for all $a \in \mathcal{A}$,

$$V^\pi_{\xi'}(ha) \;=\; w_\nu(h) V^\pi_\nu(ha) + w_\xi(h) V^\pi_\xi(ha) \;=\; \tfrac{1}{1+\varepsilon} V^\pi_\nu(ha) + \tfrac{\varepsilon}{1+\varepsilon} V^\pi_\xi(ha). \tag{3}$$

Let $\alpha := \pi(h)$ be the next action according to $\pi$, and let $\beta \neq \alpha$ be any other action. We have that $V^\pi_\nu = V^\pi_\xi$ by definition of $\nu$, therefore

$$V^\pi_{\xi'}(h\alpha) \;\stackrel{(3)}{=}\; \tfrac{1}{1+\varepsilon} V^\pi_\nu(h\alpha) + \tfrac{\varepsilon}{1+\varepsilon} V^\pi_\xi(h\alpha) \;=\; \tfrac{1}{1+\varepsilon} V^\pi_\xi(h\alpha) + \tfrac{\varepsilon}{1+\varepsilon} V^\pi_\xi(h\alpha) \;=\; V^\pi_\xi(h\alpha). \tag{4}$$

We get that $V^*_{\xi'}(h\alpha) > V^*_{\xi'}(h\beta)$:

$$\begin{aligned}
V^*_{\xi'}(h\alpha) &\;\geq\; V^\pi_{\xi'}(h\alpha) \;\stackrel{(4)}{=}\; V^\pi_\xi(h\alpha) \;=\; V^\pi_\xi(h) \;>\; \varepsilon, \\
V^*_{\xi'}(h\beta) &\;\stackrel{(3)}{=}\; \tfrac{1}{1+\varepsilon} V^{\pi^*_{\xi'}}_\nu(h\beta) + \tfrac{\varepsilon}{1+\varepsilon} V^{\pi^*_{\xi'}}_\xi(h\beta) \;=\; \tfrac{\varepsilon}{1+\varepsilon} V^{\pi^*_{\xi'}}_\xi(h\beta) \;\leq\; \tfrac{\varepsilon}{1+\varepsilon} \;<\; \varepsilon.
\end{aligned}$$

Hence the action $\alpha$ taken by $\pi$ is the only $\xi'$-optimal action for the history $h$. ∎
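The environment $\nu$ used in this proof — mimic $\xi$ until the agent deviates from $\pi$, then give reward 0 forever — is easy to phrase operationally. The sketch below is a toy illustration we added (the base environment and reference policy are arbitrary stand-ins; nothing here is the actual incomputable construction): it wraps a base environment so that any deviation from the reference policy sends the agent to hell.

```python
class HellOnDeviation:
    """Toy version of nu: behave like the base environment until the agent
    deviates from the reference policy pi, then return percept (0, 0) forever."""

    def __init__(self, base_env_step, pi):
        self.base_env_step = base_env_step  # maps history -> (observation, reward)
        self.pi = pi                        # reference policy: history -> action
        self.history = []
        self.deviated = False

    def step(self, action):
        if action != self.pi(self.history):
            self.deviated = True            # from now on: hell
        self.history.append(action)
        if self.deviated:
            obs, reward = 0, 0.0
        else:
            obs, reward = self.base_env_step(self.history)
        self.history.append((obs, reward))
        return obs, reward

# Hypothetical stand-ins for the base environment and the reference policy.
base = lambda hist: (len(hist) % 2, 1.0)    # toy base environment: always reward 1
pi = lambda hist: 0                         # reference policy: always take action 0

env = HellOnDeviation(base, pi)
print([env.step(a) for a in [0, 0, 1, 0]])  # rewards 1, 1, then 0 forever after deviating
```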
###### Corollary (AIXI Emulating Computable Policies).

Let $\varepsilon > 0$ and let $\pi$ be any computable policy. There is a universal mixture $\xi'$ such that for any $\xi'$-optimal policy $\pi^*_{\xi'}$ and for any (not necessarily computable) environment $\nu \in \mathcal{M}^{\mathrm{CCS}}$,

$$\left| V^{\pi^*_{\xi'}}_\nu(\epsilon) - V^\pi_\nu(\epsilon) \right| \;<\; \varepsilon.$$

###### Proof.

Let $\varepsilon > 0$. Since $\Gamma_k \to 0$ as $k \to \infty$, we can choose $k$ large enough such that $\Gamma_{k+1}/\Gamma_1 < \varepsilon$. Let $\varepsilon' > 0$ be small enough such that $V^\pi_\xi(h) > \varepsilon'$ for all $h$ with $|h| \leq k$. This is possible since $V^\pi_\xi(h) > 0$ for all $h$ and the set of histories of length $\leq k$ is finite because of Assumption 2b. We use the dogmatic prior from Section 3.2 to construct a universal mixture $\xi'$ for the policy $\pi$ and $\varepsilon' > 0$.
Thus for any history $h \in (\mathcal{A} \times \mathcal{E})^*$ consistent with $\pi$ and $|h| \leq k$, the action $\pi(h)$ is the only $\xi'$-optimal action. The claim now follows from Section 2. ∎

###### Corollary (With Finite Lifetime Every Policy is an AIXI).

If $\Gamma_{m+1} = 0$ for some $m \in \mathbb{N}$, then for any policy $\pi$ there is a universal mixture $\xi'$ such that $\pi(h)$ is the only $\xi'$-optimal action for all histories $h$ consistent with $\pi$ and $|h| \leq m$.

In contrast to Section 3.1 where every policy is $\xi'$-optimal for a fixed universal mixture $\xi'$, Section 3.2 gives a different universal mixture $\xi'$ for every policy $\pi$ such that $\pi$ is the only $\xi'$-optimal policy.

###### Proof.

Analogously to the proof of Section 3.2, let $\varepsilon' > 0$ be small enough such that $V^\pi_\xi(h) > \varepsilon'$ for all $h$ with $|h| \leq m$. Again, we use the dogmatic prior from Section 3.2 to construct a universal mixture $\xi'$ for the policy $\pi$ and $\varepsilon' > 0$. Thus for any history $h \in (\mathcal{A} \times \mathcal{E})^*$ consistent with $\pi$ and $|h| \leq m$, the action $\pi(h)$ is the only $\xi'$-optimal action. ∎
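For intuition about how long the lookup-table phase in these corollaries has to be, the discount tail bound from Section 2 is easy to evaluate. A minimal sketch, assuming geometric discounting $\gamma_t = g^t$ (our choice purely for illustration; the corollaries only require a summable discount): then $\Gamma_t = g^t/(1-g)$, so the bound $\Gamma_{k+1}/\Gamma_1$ simplifies to $g^k$.

```python
# With geometric discounting gamma_t = g**t we have Gamma_t = g**t / (1 - g),
# so the Discounted Values bound Gamma_{k+1} / Gamma_1 equals g**k.
g, eps = 0.95, 0.01

k = 0
while g ** k >= eps:       # smallest k with Gamma_{k+1} / Gamma_1 < eps
    k += 1

print(k, g ** k)           # 90 steps of copying suffice for eps = 0.01
```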
4 Consequences for Legg-Hutter Intelligence
-------------------------------------------

The aim of the Legg-Hutter intelligence measure is to formalize the intuitive notion of intelligence mathematically. If we take intelligence to mean *an agent's ability to achieve goals in a wide range of environments* (Legg and Hutter, 2007), and we weigh environments according to the universal prior, then the intelligence of a policy $\pi$ corresponds to the value that $\pi$ achieves in the corresponding universal mixture. We use the results from the previous section to illustrate some problems with this intelligence measure in the absence of a *natural* UTM.

###### Definition (Legg-Hutter Intelligence (Legg and Hutter, 2007)).

The *intelligence*¹ of a policy $\pi$ is defined as

$$\Upsilon_\xi(\pi) \;:=\; \sum_{\nu \in \mathcal{M}^{\mathrm{CCS}}_{\mathrm{LSC}}} w_\nu V^\pi_\nu(\epsilon) \;=\; V^\pi_\xi(\epsilon).$$

¹ Legg and Hutter (2007) consider a subclass of $\mathcal{M}^{\mathrm{CCS}}_{\mathrm{LSC}}$, the class of computable *measures*, and do not use discounting explicitly.

Typically, the index $\xi$ is omitted when writing $\Upsilon$. However, in this paper we consider the intelligence measure with respect to different universal mixtures, therefore we make this dependency explicit. Because the value function is scaled to be in the interval $[0, 1]$, intelligence is a real number between 0 and 1.

Legg-Hutter intelligence is linked to *balanced Pareto optimality*: a policy is said to be *balanced Pareto optimal* iff it scores the highest intelligence score:

$$\overline{\Upsilon}_\xi \;:=\; \sup_\pi \Upsilon_\xi(\pi) \;=\; \Upsilon_\xi(\pi^*_\xi).$$

AIXI is balanced Pareto optimal (Hutter, 2005).
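The defining sum cannot be evaluated for the actual class $\mathcal{M}^{\mathrm{CCS}}_{\mathrm{LSC}}$, but its shape — a prior-weighted average of values — is easy to illustrate. The following sketch is a toy stand-in of our own making (hand-picked environments, weights, and values chosen only for illustration):

```python
# Toy stand-in for Upsilon_xi(pi) = sum_nu w_nu * V^pi_nu(eps):
# a prior-weighted average of values over a tiny, hand-picked environment class.
weights = {"env_a": 0.5, "env_b": 0.25, "env_c": 0.25}   # illustrative prior weights

values = {                                               # illustrative values V^pi_nu in [0, 1]
    "always_0": {"env_a": 0.9, "env_b": 0.1, "env_c": 0.5},
    "always_1": {"env_a": 0.2, "env_b": 0.8, "env_c": 0.5},
}

def upsilon(policy):
    return sum(weights[env] * values[policy][env] for env in weights)

for policy in values:
    print(policy, upsilon(policy))
# always_0 scores 0.6, always_1 scores 0.425: the ranking depends entirely on the
# chosen weights, which is the sensitivity to the prior exploited in this section.
```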
It is just as hard to score very high on the Legg-Hutter intelligence measure as it is to score very low: we can always turn a reward minimizer into a reward maximizer by inverting the rewards $r'_t := 1 - r_t$. Hence the lowest possible intelligence score is achieved by AIXI's twin sister, a $\xi$-expected reward minimizer:

$$\underline{\Upsilon}_\xi \;:=\; \inf_\pi \Upsilon_\xi(\pi).$$

The heaven environment (reward 1 forever) and the hell environment (reward 0 forever) are computable and thus in the environment class $\mathcal{M}^{\mathrm{CCS}}_{\mathrm{LSC}}$; therefore it is impossible to get a reward 0 or reward 1 in every environment. Consequently, for all policies $\pi$,

$$0 \;<\; \underline{\Upsilon}_\xi \;\leq\; \Upsilon_\xi(\pi) \;\leq\; \overline{\Upsilon}_\xi \;<\; 1. \tag{5}$$

See Figure 1. It is natural to fix the policy *random* that takes actions uniformly at random to have an intelligence score of $1/2$ by choosing a 'symmetric' universal prior (Legg and Veness, 2013).

Figure 1: The Legg-Hutter intelligence measure assigns values within the closed interval $[\underline{\Upsilon}_\xi, \overline{\Upsilon}_\xi]$; the assigned values are depicted in orange. By Section 4, computable policies are dense in this orange set.

AIXI is not computable (Leike and Hutter, 2015, Thm. 14), hence there is no computable policy $\pi$ such that $\Upsilon_\xi(\pi) = \underline{\Upsilon}_\xi$ or $\Upsilon_\xi(\pi) = \overline{\Upsilon}_\xi$ for any universal mixture $\xi$. But the next theorem tells us that computable policies can come arbitrarily close. This is no surprise: by Section 2 we can do well on a Legg-Hutter intelligence test simply by memorizing what AIXI would do for the first $k$ steps, as long as $k$ is chosen large enough such that discounting makes the remaining rewards contribute very little to the value function.

###### Theorem (Computable Policies are Dense).

The set

$$\{\Upsilon_\xi(\pi) \mid \pi \text{ is a computable policy}\}$$

is dense in the set of intelligence scores

$$\{\Upsilon_\xi(\pi) \mid \pi \text{ is a policy}\}.$$

###### Proof.

Let $\pi$ be any policy and let $\varepsilon > 0$. We need to show that there is a computable policy $\tilde\pi$ with $|\Upsilon_\xi(\tilde\pi) - \Upsilon_\xi(\pi)| < \varepsilon$. We choose $k$ large enough such that $\Gamma_{k+1}/\Gamma_1 < \varepsilon$. Let $\alpha \in \mathcal{A}$ be arbitrary and define the policy

$$\tilde\pi(h) \;:=\; \begin{cases} \pi(h) & \text{if } |h| \leq k, \text{ and} \\ \alpha & \text{otherwise.} \end{cases}$$

The policy $\tilde\pi$ is computable because we can store the actions of $\pi$ for the first $k$ steps in a lookup table.
By Section 2 we get $|\Upsilon_\xi(\pi) - \Upsilon_\xi(\tilde\pi)| = |V^\pi_\xi(\epsilon) - V^{\tilde\pi}_\xi(\epsilon)| \leq \Gamma_{k+1}/\Gamma_1 < \varepsilon$. ∎

###### Remark (Intelligence is not Dense in $[\underline{\Upsilon}_\xi, \overline{\Upsilon}_\xi]$).

The intelligence values of policies are generally not dense in the interval $[\underline{\Upsilon}_\xi, \overline{\Upsilon}_\xi]$. We show this by defining an environment $\nu$ where the first action determines whether the agent goes to heaven or hell: action $\alpha$ leads to heaven and action $\beta$ leads to hell. The semimeasure $\xi' := 0.999\nu + 0.001\xi$ is a universal mixture by Section 2. Let $\pi$ be any policy. If $\pi$ takes action $\alpha$ first, then $\Upsilon_{\xi'}(\pi) > 0.999$. If $\pi$ takes action $\beta$ first, then $\Upsilon_{\xi'}(\pi) < 0.001$. Hence there are no policies that score an intelligence value in the closed interval $[0.001, 0.999]$.

Legg-Hutter intelligence is measured with respect to a fixed UTM. AIXI is the most intelligent policy *if it uses the same UTM*. But if we build AIXI with a dogmatic prior, its intelligence score can be arbitrarily close to the minimum intelligence score $\underline{\Upsilon}_\xi$.

###### Corollary (Some AIXIs are Stupid).
For any universal mixture ξ𝜉\xiitalic\_ξ and every ε>0𝜀0\varepsilon>0italic\_ε > 0, there is a universal mixture ξ′superscript𝜉normal-′\xi^{\prime}italic\_ξ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT such that Υξ(πξ′\*)<Υ¯ξ+εsubscriptnormal-Υ𝜉subscriptsuperscript𝜋superscript𝜉normal-′subscriptnormal-¯normal-Υ𝜉𝜀\Upsilon\_{\xi}(\pi^{\*}\_{\xi^{\prime}})<\underline{\Upsilon}\_{\xi}+\varepsilonroman\_Υ start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT ( italic\_π start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ) < under¯ start\_ARG roman\_Υ end\_ARG start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT + italic\_ε. ###### Proof. Let ε>0𝜀0\varepsilon>0italic\_ε > 0. According to [Section 4](#S4 "4 Consequences for Legg-Hutter Intelligence ‣ Bad Universal Priors and Notions of Optimality"), there is a computable policy π𝜋\piitalic\_π such that Υξ(π)<Υ¯ξ+ε/2subscriptΥ𝜉𝜋subscript¯Υ𝜉𝜀2\Upsilon\_{\xi}(\pi)<\underline{\Upsilon}\_{\xi}+\varepsilon/2roman\_Υ start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT ( italic\_π ) < under¯ start\_ARG roman\_Υ end\_ARG start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT + italic\_ε / 2. From [Section 3.2](#S3.SS2 "3.2 The Dogmatic Prior ‣ 3 Bad Universal Priors ‣ Bad Universal Priors and Notions of Optimality") we get a universal mixture ξ′superscript𝜉′\xi^{\prime}italic\_ξ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT such that |Υξ(πξ′\*)−Υξ(π)|=|Vξπξ′\*(ϵ)−Vξπ(ϵ)|<ε/2subscriptΥ𝜉subscriptsuperscript𝜋superscript𝜉′subscriptΥ𝜉𝜋subscriptsuperscript𝑉subscriptsuperscript𝜋superscript𝜉′𝜉italic-ϵsubscriptsuperscript𝑉𝜋𝜉italic-ϵ𝜀2|\Upsilon\_{\xi}(\pi^{\*}\_{\xi^{\prime}})-\Upsilon\_{\xi}(\pi)|=|V^{\pi^{\*}\_{\xi^{\prime}}}\_{\xi}(\epsilon)-V^{\pi}\_{\xi}(\epsilon)|<\varepsilon/2| roman\_Υ start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT ( italic\_π start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ) - roman\_Υ start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT ( italic\_π ) | = | italic\_V start\_POSTSUPERSCRIPT italic\_π start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT ( italic\_ϵ ) - italic\_V start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT ( italic\_ϵ ) | < italic\_ε / 2, hence |Υξ(πξ′\*)−Υ¯ξ|≤|Υξ(πξ′\*)−Υξ(π)|+|Υξ(π)−Υ¯ξ|<ε/2+ε/2=εsubscriptΥ𝜉subscriptsuperscript𝜋superscript𝜉′subscript¯Υ𝜉subscriptΥ𝜉subscriptsuperscript𝜋superscript𝜉′subscriptΥ𝜉𝜋subscriptΥ𝜉𝜋subscript¯Υ𝜉𝜀2𝜀2𝜀|\Upsilon\_{\xi}(\pi^{\*}\_{\xi^{\prime}})-\underline{\Upsilon}\_{\xi}|\leq|\Upsilon\_{\xi}(\pi^{\*}\_{\xi^{\prime}})-\Upsilon\_{\xi}(\pi)|+|\Upsilon\_{\xi}(\pi)-\underline{\Upsilon}\_{\xi}|<\varepsilon/2+\varepsilon/2=\varepsilon| roman\_Υ start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT ( italic\_π start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ) - under¯ start\_ARG roman\_Υ end\_ARG start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT | ≤ | roman\_Υ start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT ( italic\_π start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ) - roman\_Υ start\_POSTSUBSCRIPT italic\_ξ 
We get the same result if we fix AIXI but rig the intelligence measure.

###### Corollary (AIXI is Stupid for Some $\Upsilon$).

For any $\xi$-optimal policy $\pi^*_\xi$ and for every $\varepsilon>0$ there is a universal mixture $\xi'$ such that $\Upsilon_{\xi'}(\pi^*_\xi)\leq\varepsilon$ and $\overline{\Upsilon}_{\xi'}\geq 1-\varepsilon$.

###### Proof.

Let $a_1:=\pi^*_\xi(\epsilon)$ be the first action that $\pi^*_\xi$ takes. We define an environment $\nu$ such that taking the first action $a_1$ leads to hell and taking any other first action leads to heaven, as in [Section 4](#S4 "4 Consequences for Legg-Hutter Intelligence ‣ Bad Universal Priors and Notions of Optimality"). With [Section 2](#S2 "2 Preliminaries and Notation ‣ Bad Universal Priors and Notions of Optimality") we define the universal mixture $\xi':=(1-\varepsilon)\nu+\varepsilon\xi$. Since $\pi^*_\xi$ takes action $a_1$ first, it goes to hell, i.e., $V^{\pi^*_\xi}_\nu(\epsilon)=0$.
Hence

$$\Upsilon_{\xi'}(\pi^*_\xi)\;=\;V^{\pi^*_\xi}_{\xi'}(\epsilon)\;=\;(1-\varepsilon)V^{\pi^*_\xi}_\nu(\epsilon)+\varepsilon V^{\pi^*_\xi}_\xi(\epsilon)\;\leq\;\varepsilon.$$

For any policy $\pi$ that takes an action other than $a_1$ first, we get

$$\Upsilon_{\xi'}(\pi)\;=\;V^\pi_{\xi'}(\epsilon)\;=\;(1-\varepsilon)V^\pi_\nu(\epsilon)+\varepsilon V^\pi_\xi(\epsilon)\;\geq\;1-\varepsilon.$$

∎
On the other hand, we can make any computable policy smart if we choose the right universal mixture. In particular, there is a universal mixture such that 'do nothing' is the most intelligent policy, save for some $\varepsilon$!

###### Corollary (Computable Policies can be Smart).

For any computable policy $\pi$ and any $\varepsilon>0$ there is a universal mixture $\xi'$ such that $\Upsilon_{\xi'}(\pi)>\overline{\Upsilon}_{\xi'}-\varepsilon$.

###### Proof.

[Section 3.2](#S3.SS2 "3.2 The Dogmatic Prior ‣ 3 Bad Universal Priors ‣ Bad Universal Priors and Notions of Optimality") yields a universal mixture $\xi'$ with $|\overline{\Upsilon}_{\xi'}-\Upsilon_{\xi'}(\pi)|=|V^*_{\xi'}(\epsilon)-V^\pi_{\xi'}(\epsilon)|<\varepsilon$. ∎

5 Pareto Optimality
-------------------

In [Section 3](#S3 "3 Bad Universal Priors ‣ Bad Universal Priors and Notions of Optimality") we have seen examples of bad choices of the universal prior. But we know that for any universal prior, AIXI is *Pareto optimal* (Hutter, [2002](#bib.bib3)). Here we show that Pareto optimality is not a useful optimality criterion, since for any environment class containing $\mathcal{M}^{\mathrm{CCS}}_{\mathrm{LSC}}$ all policies are Pareto optimal.
###### Definition (Pareto Optimality (Hutter, [2005](#bib.bib4), Def. 5.22)).

Let $\mathcal{M}$ be a set of environments. A policy $\pi$ is *Pareto optimal in the set of environments $\mathcal{M}$* iff there is no policy $\tilde{\pi}$ such that $V^{\tilde{\pi}}_\nu(\epsilon)\geq V^\pi_\nu(\epsilon)$ for all $\nu\in\mathcal{M}$ and $V^{\tilde{\pi}}_\rho(\epsilon)>V^\pi_\rho(\epsilon)$ for at least one $\rho\in\mathcal{M}$.

###### Theorem (AIXI is Pareto Optimal (Hutter, [2005](#bib.bib4), Thm. 5.32)).

A $\xi$-optimal policy is Pareto optimal in $\mathcal{M}^{\mathrm{CCS}}_{\mathrm{LSC}}$.

###### Theorem (Pareto Optimality is Trivial).

Every policy is Pareto optimal in any $\mathcal{M}\supseteq\mathcal{M}^{\mathrm{CCS}}_{\mathrm{LSC}}$.

The proof proceeds as follows: for a given policy $\pi$, we construct a set of 'buddy environments' that reward $\pi$ and punish other policies. Together they can defend against any policy $\tilde{\pi}$ that tries to take the crown of Pareto optimality from $\pi$.

###### Proof.

We assume $(0,0)\in\mathcal{E}$ and $(0,1)\in\mathcal{E}$. Moreover, assume there is a policy $\pi$ that is not Pareto optimal. Then there is a policy $\tilde{\pi}$ such that $V^{\tilde{\pi}}_\rho(\epsilon)>V^\pi_\rho(\epsilon)$ for some $\rho\in\mathcal{M}$, and

$$\forall\nu\in\mathcal{M}.\;V^{\tilde{\pi}}_\nu(\epsilon)\geq V^\pi_\nu(\epsilon).\tag{6}$$
Since $\pi\neq\tilde{\pi}$, there is a shortest and lexicographically first history $h'$ of length $k-1$ consistent with $\pi$ and $\tilde{\pi}$ such that $\pi(h')\neq\tilde{\pi}(h')$ and $V^{\tilde{\pi}}_\rho(h')>V^\pi_\rho(h')$. Consequently there is an $i\geq k$ such that $\gamma_i>0$, and hence $\Gamma_k>0$. We define the environment $\mu$ that first reproduces the separating history $h'$ and then, if the next action $a_k$ equals $\pi(h')$, returns reward $1$ forever, and otherwise returns reward $0$ forever.
Formally, $\mu$ is defined by

$$\mu(e_{1:t}\mid e_{<t}\parallel a_{1:t}):=\begin{cases}1,&\text{if }t<k\text{ and }e_t=e'_t,\\ 1,&\text{if }t\geq k\text{ and }a_k=\pi(h')\text{ and }r_t=1\text{ and }o_t=0,\\ 1,&\text{if }t\geq k\text{ and }a_k\neq\pi(h')\text{ and }r_t=0=o_t,\\ 0,&\text{otherwise}.\end{cases}$$

The environment $\mu$ is computable, even if the policy $\pi$ is not: for a fixed history $h'$ and action output $\pi(h')$, there exists a program computing $\mu$. Therefore $\mu\in\mathcal{M}^{\mathrm{CCS}}_{\mathrm{LSC}}$.
We get the following value difference for the policies $\pi$ and $\tilde{\pi}$, where $r'_t$ denotes the reward from the history $h'$:

$$V^\pi_\mu(\epsilon)-V^{\tilde{\pi}}_\mu(\epsilon)\;=\;\sum_{t=1}^{k-1}\gamma_t r'_t+\sum_{t=k}^\infty\gamma_t\cdot 1-\sum_{t=1}^{k-1}\gamma_t r'_t-\sum_{t=k}^\infty\gamma_t\cdot 0\;=\;\sum_{t=k}^\infty\gamma_t\;=\;\Gamma_k\;>\;0.$$

Hence $V^{\tilde{\pi}}_\mu(\epsilon)<V^\pi_\mu(\epsilon)$, which contradicts ([6](#S5.E6 "6 ‣ Proof. ‣ 5 Pareto Optimality ‣ Bad Universal Priors and Notions of Optimality")) since $\mathcal{M}\supseteq\mathcal{M}^{\mathrm{CCS}}_{\mathrm{LSC}}\ni\mu$. ∎
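As a purely illustrative aside (not part of the original proof), the sketch below implements a toy version of the buddy environment $\mu$ for one fixed separating history and checks the value-difference calculation numerically. The geometric discount $\gamma_t=2^{-t}$, the truncation horizon, and all variable names are assumptions chosen for the example.

```python
# Illustrative sketch (not from the paper): a toy 'buddy environment' mu that
# replays a fixed separating history and then pays reward 1 forever iff the
# agent's k-th action matches pi(h'), and 0 forever otherwise.
# Assumptions: geometric discount gamma_t = 2**-t, the infinite sum truncated
# at T steps, rewards inside h' set to 0 (they cancel in the difference).

T = 50                  # truncation of the infinite sum for illustration
k = 4                   # the separating history h' has length k - 1
pi_action_at_k = 0      # pi(h'), the action mu rewards
gamma = [2.0 ** -t for t in range(1, T + 1)]    # gamma_1, ..., gamma_T

def reward(t: int, action_at_k: int) -> float:
    """Reward mu pays at step t (1-indexed), given the agent's k-th action."""
    if t < k:
        return 0.0      # rewards along h' are identical for both policies
    return 1.0 if action_at_k == pi_action_at_k else 0.0

def value(action_at_k: int) -> float:
    """Discounted value sum_t gamma_t * r_t under mu (truncated at T)."""
    return sum(g * reward(t, action_at_k) for t, g in enumerate(gamma, start=1))

v_pi       = value(pi_action_at_k)       # pi takes its own action at step k
v_pi_tilde = value(1 - pi_action_at_k)   # pi~ deviates at step k
Gamma_k    = sum(gamma[k - 1:])          # Gamma_k = sum_{t >= k} gamma_t

print(v_pi - v_pi_tilde, Gamma_k)        # both approximately 2**-(k-1) = 0.125
```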
Note that the environment $\mu$ we defined in the proof of [Section 5](#S5 "5 Pareto Optimality ‣ Bad Universal Priors and Notions of Optimality") is actually just a finite-state POMDP, so Pareto optimality is also trivial for smaller environment classes.

6 Discussion
------------

### 6.1 Summary

Bayesian reinforcement learning agents make the trade-off between exploration and exploitation in the Bayes-optimal way. The amount of exploration this incurs varies wildly: the dogmatic prior defined in [Section 3.2](#S3.SS2 "3.2 The Dogmatic Prior ‣ 3 Bad Universal Priors ‣ Bad Universal Priors and Notions of Optimality") can prevent a Bayesian agent *from taking a single exploratory action*; exploration is restricted to cases where the expected future payoff falls below some prespecified $\varepsilon>0$.

In the introduction we raised the question of whether AIXI succeeds in various subclasses of all computable environments. Interesting subclasses include sequence prediction tasks, (ergodic) (PO)MDPs, bandits, etc. Using a dogmatic prior ([Section 3.2](#S3.SS2 "3.2 The Dogmatic Prior ‣ 3 Bad Universal Priors ‣ Bad Universal Priors and Notions of Optimality")), we can make AIXI follow any computable policy as long as that policy produces rewards that are bounded away from zero.

* In a sequence prediction task that gives a reward of $1$ for every correctly predicted bit and $0$ otherwise, a policy $\pi$ that correctly predicts every third bit will receive an average reward of $1/3$. With a $\pi$-dogmatic prior, AIXI thus only predicts a third of the bits correctly, and hence is outperformed by a uniformly random predictor (see the sketch after this list). However, if we have a constant horizon of $1$, AIXI *does* succeed in sequence prediction (Hutter, [2005](#bib.bib4), Sec. 6.2.2). If the horizon is this short, the agent is so hedonistic that no threat of hell can deter it.
* In a (partially observable) Markov decision process, a dogmatic prior can make AIXI get stuck in any loop that provides nonzero expected rewards.
* In a bandit problem, a dogmatic prior can make AIXI get stuck on any arm that provides nonzero expected rewards.
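The following toy simulation (an illustration only, not from the paper) checks the arithmetic of the first bullet: a predictor that is correct exactly on every third bit averages reward $1/3$, while uniformly random guessing on i.i.d. fair-coin bits averages about $1/2$. The bit source and the policy encoding are assumptions made for the example.

```python
import random

# Illustrative sketch (not from the paper): average reward in the sequence
# prediction task (reward 1 per correctly predicted bit, 0 otherwise).
# Assumption: the true sequence is an i.i.d. fair coin.

random.seed(0)
n = 30_000
bits = [random.randint(0, 1) for _ in range(n)]

# A policy that is right exactly on every third bit and wrong otherwise.
every_third = sum(1 for t in range(n) if t % 3 == 0) / n

# A uniformly random predictor.
random_guess = sum(1 for b in bits if random.randint(0, 1) == b) / n

print(f"every-third-bit policy: {every_third:.3f}")   # ~0.333
print(f"uniform random guess:  {random_guess:.3f}")   # ~0.5
```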
These results apply not only to AIXI, but generally to Bayesian reinforcement learning agents. Any Bayesian mixture over reactive environments is susceptible to dogmatic priors if we allow an arbitrary reweighing of the prior. A notable exception is the class of all ergodic MDPs with an unbounded effective horizon; here the Bayes-optimal policy is *strongly asymptotically optimal* (Hutter, [2005](#bib.bib4), Thm. 5.38): $V^\pi_\mu(æ_{<t})-V^*_\mu(æ_{<t})\to 0$ as $t\to\infty$ for all histories $æ_{<t}$.

Moreover, Bayesian agents might still perform well at learning: AIXI's posterior belief about the value of its own policy $\pi^*_\xi$ converges to the true value while following that policy (Hutter, [2005](#bib.bib4), Thm. 5.36): $V^{\pi^*_\xi}_\xi(æ_{<t})-V^{\pi^*_\xi}_\mu(æ_{<t})\to 0$ as $t\to\infty$ $\mu$-almost surely (on-policy convergence). This means that the agent learns to predict those parts of the environment that it sees. But if it does not explore enough, then it will not learn other parts of the environment that are potentially more rewarding.
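To give a feel for what on-policy convergence means, here is a minimal toy example (illustrative only, not from the paper): a Bayesian mixture over two i.i.d. Bernoulli-reward environments, with the agent following a fixed policy. The mixture's predicted per-step reward converges to the true environment's value along the trajectory the agent actually experiences. The environment parameters and prior weights are assumptions made for the example.

```python
import random

# Illustrative sketch (not from the paper): on-policy value convergence for a
# Bayesian mixture xi = 0.5 * nu1 + 0.5 * nu2 over two i.i.d. Bernoulli-reward
# environments. Assumption: nu1 pays reward 1 with prob 0.8, nu2 with prob 0.2,
# the true environment mu is nu1, and 'value' is the expected per-step reward.

random.seed(0)
p = {"nu1": 0.8, "nu2": 0.2}
posterior = {"nu1": 0.5, "nu2": 0.5}
true_env = "nu1"

for t in range(1, 2001):
    r = 1 if random.random() < p[true_env] else 0   # reward observed at step t
    # Bayesian posterior update on the observed reward.
    for env in posterior:
        posterior[env] *= p[env] if r == 1 else (1 - p[env])
    z = sum(posterior.values())
    for env in posterior:
        posterior[env] /= z
    if t % 500 == 0:
        # Predicted expected per-step reward under the mixture vs. the truth.
        v_mixture = sum(posterior[env] * p[env] for env in posterior)
        print(t, round(v_mixture, 3), p[true_env])   # v_mixture -> 0.8
```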
### 6.2 Natural Universal Turing Machines

| | $K_U(U')$ | $K_{U'}(U)$ |
| --- | --- | --- |
| Indifference prior ([Section 3.1](#S3.SS1 "3.1 The Indifference Prior ‣ 3 Bad Universal Priors ‣ Bad Universal Priors and Notions of Optimality")) | $K(U)+K(m)+O(1)$ | $m$ |
| Dogmatic prior ([Section 3.2](#S3.SS2 "3.2 The Dogmatic Prior ‣ 3 Bad Universal Priors ‣ Bad Universal Priors and Notions of Optimality")) | $K(U)+K(\pi)+K(\varepsilon)+O(1)$ | $\lceil-\log_2\varepsilon\rceil$ |

Table 1: Upper bounds on the compiler sizes of the UTMs used in the proofs. $K_U(U')$ is the number of extra bits needed to run the 'bad' UTM $U'$ on the 'good' UTM $U$, $K_{U'}(U)$ is the number of extra bits needed to run $U$ on $U'$, and $K(U)$ is the length of the shortest program for $U$ on $U$.

In [Section 3](#S3 "3 Bad Universal Priors ‣ Bad Universal Priors and Notions of Optimality") we showed that a bad choice for the UTM can have drastic consequences, as anticipated by Sunehag and Hutter ([2014](#bib.bib21)). Our negative results can guide the future search for a *natural* UTM: the UTMs used to define the indifference prior ([Section 3.1](#S3.SS1 "3.1 The Indifference Prior ‣ 3 Bad Universal Priors ‣ Bad Universal Priors and Notions of Optimality")) and the dogmatic prior ([Section 3.2](#S3.SS2 "3.2 The Dogmatic Prior ‣ 3 Bad Universal Priors ‣ Bad Universal Priors and Notions of Optimality")) should be considered unnatural. But what are other desirable properties of a UTM?

A remarkable but unsuccessful attempt to find natural UTMs is due to Müller ([2010](#bib.bib13)). It takes the probability that one universal machine simulates another, according to the length of their respective compilers, and searches for a stationary distribution. Unfortunately, no stationary distribution exists. Alternatively, we could demand that the UTM $U'$ that we use for the universal prior has a small compiler on the reference machine $U$ (Hutter, [2005](#bib.bib4), p. 35). Moreover, we could demand the reverse: that the reference machine $U$ has a small compiler on $U'$. The idea is that this should limit the amount of bias one can introduce by defining a UTM that has very small programs for very complicated and 'unusual' environments. Unfortunately, this just pushes the choice of the UTM to the reference machine. [Table 1](#S6.T1 "Table 1 ‣ 6.2 Natural Universal Turing Machines ‣ 6 Discussion ‣ Bad Universal Priors and Notions of Optimality") lists the compiler sizes of the UTMs constructed in this paper.

### 6.3 Optimality of General Reinforcement Learners

| Name | Issue/Comment |
| --- | --- |
| $\mu$-optimal policy | requires knowing the true environment $\mu$ in advance |
| Pareto optimality | trivial ([Section 5](#S5 "5 Pareto Optimality ‣ Bad Universal Priors and Notions of Optimality")) |
| Balanced Pareto optimality | dependent on the UTM ([Section 4](#S4 "4 Consequences for Legg-Hutter Intelligence ‣ Bad Universal Priors and Notions of Optimality") and [Section 4](#S4 "4 Consequences for Legg-Hutter Intelligence ‣ Bad Universal Priors and Notions of Optimality")) |
| Self-optimizing | does not apply to $\mathcal{M}^{\mathrm{CCS}}_{\mathrm{LSC}}$ |
| Strong asymptotic optimality | impossible (Lattimore and Hutter, [2011](#bib.bib7), Thm. 8) |
| Weak asymptotic optimality | BayesExp (Lattimore, [2013](#bib.bib6), Ch. 5), but not AIXI (Orseau, [2010](#bib.bib14)) |

Table 2: Proposed notions of optimality (Hutter, [2002](#bib.bib3); Orseau, [2010](#bib.bib14); Lattimore and Hutter, [2011](#bib.bib7)) and their issues. Weak asymptotic optimality stands out as the only possible nontrivial optimality notion.
[Section 5](#S5 "5 Pareto Optimality ‣ Bad Universal Priors and Notions of Optimality") proves that Pareto optimality is trivial in the class of all computable environments; [Section 4](#S4 "4 Consequences for Legg-Hutter Intelligence ‣ Bad Universal Priors and Notions of Optimality") and [Section 4](#S4 "4 Consequences for Legg-Hutter Intelligence ‣ Bad Universal Priors and Notions of Optimality") show that maximal Legg-Hutter intelligence (balanced Pareto optimality) is highly subjective, because it depends on the choice of the UTM: AIXI is not balanced Pareto optimal with respect to all universal mixtures. Moreover, according to [Section 4](#S4 "4 Consequences for Legg-Hutter Intelligence ‣ Bad Universal Priors and Notions of Optimality"), any computable policy is nearly balanced Pareto optimal, save some $\varepsilon>0$. For finite lifetime discounting, there are UTMs such that every policy has maximal intelligence ([Section 3.1](#S3.SS1 "3.1 The Indifference Prior ‣ 3 Bad Universal Priors ‣ Bad Universal Priors and Notions of Optimality")). The self-optimizing theorem (Hutter, [2002](#bib.bib3), Thm. 4 & Thm. 7) is not applicable to the class of all computable environments $\mathcal{M}^{\mathrm{CCS}}_{\mathrm{LSC}}$ that we consider here, since this class does not allow for self-optimizing policies. Therefore no nontrivial and non-subjective optimality results for AIXI remain (see [Table 2](#S6.T2 "Table 2 ‣ 6.3 Optimality of General Reinforcement Learners ‣ 6 Discussion ‣ Bad Universal Priors and Notions of Optimality")). We have to regard AIXI as a *relative* theory of intelligence, dependent on the choice of the UTM (Sunehag and Hutter, [2014](#bib.bib21)).

The underlying problem is that a discounting Bayesian agent such as AIXI does not have enough time to explore sufficiently; exploitation has to start as soon as possible. In the beginning the agent does not know enough about its environment and therefore relies heavily on its prior. Lack of exploration then retains the prior's biases. This fundamental problem can be alleviated by adding an extra exploration component.
Lattimore ([2013](#bib.bib6)) defines BayesExp, a *weakly asymptotically optimal policy $\pi$* that converges (independently of the UTM) to the optimal value in Cesàro mean:

$$\frac{1}{t}\sum_{k=1}^t\big(V^*_\nu(æ_{<k})-V^\pi_\nu(æ_{<k})\big)\to 0$$

as $t\to\infty$ $\nu$-almost surely for all $\nu\in\mathcal{M}^{\mathrm{CCS}}_{\mathrm{LSC}}$. But it is not clear that weak asymptotic optimality is a good optimality criterion. For example, weak asymptotic optimality can be achieved by navigating into traps (parts of the environment with a simple optimal policy but possibly very low rewards that cannot be escaped). Furthermore, being weakly asymptotically optimal requires an excessive amount of exploration: BayesExp needs to take exploratory actions that it itself knows to be very likely extremely costly or dangerous. This leaves us with the following open question: *what are good optimality criteria for generally intelligent agents* (Hutter, [2009](#bib.bib5), Sec. 5)?
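As a small numerical illustration (not from the paper) of convergence in Cesàro mean, the sketch below averages a sequence of per-step value gaps that only shrinks "on average": occasional unit-sized exploration costs keep recurring, yet the running Cesàro mean still tends to zero. The particular gap sequence is an assumption chosen for the example.

```python
import math

# Illustrative sketch (not from the paper): convergence in Cesaro mean.
# Assumption: the per-step value gap V*_nu - V^pi_nu equals 1 on ever-sparser
# "exploration" steps (here: perfect squares) and decays like 1/k otherwise.

def gap(k: int) -> float:
    is_square = math.isqrt(k) ** 2 == k
    return 1.0 if is_square else 1.0 / k

for t in (10, 100, 1_000, 10_000, 100_000):
    cesaro_mean = sum(gap(k) for k in range(1, t + 1)) / t
    print(t, round(cesaro_mean, 4))
# The Cesaro average tends to 0 even though gap(k) = 1 infinitely often,
# so the gap sequence itself never converges to 0.
```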
2950cb13-9397-4a4e-8464-5ef10b8d929c
trentmkelly/LessWrong-43k
LessWrong
How to learn soft skills Acquiring some skills is mostly about deliberate, explicit information transfer.  For example, one might explicitly learn the capital of Missouri, or the number of miles one can drive before needing an oil change, or how to use the quadratic formula to solve quadratic equations. For other skills, practitioners' skill rests largely on semi-conscious, non-explicit patterns of perception and action.  I have in mind here such skills as: * Managing your emotions and energy levels; * Building strong relationships; * Making robust plans; * Finding angles of attack on a mathematical problem; * Writing persuasively; * Thinking through charged subjects without bias; and so on.  Experts in these skills will often be unable to accurately and explicitly describe how to do what they do, but they will be skilled nonetheless. I'd like to share some thoughts on how to learn such "soft skills".   Usefulness of non-true stimuli If you read a chemistry textbook, it makes sense to ask after each sentence: “Is this true?”.  If the answer is “no”, “no”, “no”, for a sufficient number of sentences, you should probably abandon that book and look for a better one.  Chemistry textbooks are supposed to be made out of statements you can trust -- statements you can add to your file of “trusted explicit claims”, in such a fashion as to make you better at chemistry.  When a book fails at this property, its main value is lost. Not so, IMO, for soft skills. You can test ideas in your "inner simulator" Your “inner simulator” is CFAR’s version of the distinction between profession and anticipation.  Basically, your “inner simulator” is the part of you that can play movies forward to determine what to anticipate: “Do I have time to turn left before that car reaches me?”; “What will she do, if I approach and say ‘hi’?” (that is: what does my inner movie-player show as the next scene, when I play it a movie in which I walk up to her and say ‘hi’?). Your inner simulator is probably more ac
9c227849-e50e-49fa-af3b-60da1b1d6a61
trentmkelly/LessWrong-43k
LessWrong
[Links] Brain mapping/emulation news Obama Seeking to Boost Study of Human Brain - Like the Human Genome Project, but for brain mapping (Feb 17) Human brain and graphene projects chosen for one billion euro grants: official press release (Jan 28) Gary Marcus reacts Edit: If anyone is going to email the people behind Obama's human brain project and offer suggestions, it's probably best to do so ASAP before they make the details of their project public and risk losing face by changing them.
051de40d-5e29-4b31-9e42-89e1b26a1944
trentmkelly/LessWrong-43k
LessWrong
Canonical forms This post in the link gives an intuitive connection between canonical forms in mathematics and easy-to-understand examples in day-to-day conversations. It was motivated by the following question I posed to myself: * Why do mathematicians value canonical forms? I think this addresses an important class of communication problems that people experience, and that's especially true of intelligent people that see the world through the lens of specialized knowledge. It's easy to trap yourself inside your head when you learn to (usefully!) pile on the complexity. Canonical forms offer one strategy for escape.
78f32bed-b64a-4e98-bae4-468c4c3b7523
trentmkelly/LessWrong-43k
LessWrong
Respect for Boundaries as non-arbitrary coordination norms Introduction Andrew Critch has recently written about how respect for boundaries is a norm that, to some extent, should convergently arise.  In this post, I want to link the concept of boundary preservation to active inference and, more importantly, game theory to show that it indeed makes sense for boundaries to be preserved over time.  We will look at Daniel Dennett’s reasoning for humans wanting an intact sense of “free will” as an example of a boundary.  An important point for the entire conversation about boundaries is that it is implicitly contained within an evolutionary or decision-theoretic frame. So we will go one meta-level up and talk about the optimality for individuals within environments with or without boundary-specific coordination norms. We will conclude that boundaries are isomorphic to coordination norms in multi-agent iterated prisoner's dilemmas.  Negotiation & Free Will In his book Intuition Pumps and Other Tools for Thinking, Daniel Dennett asks why free will is such an essential concept for humans. The answer that he comes up with goes something like the following: Imagine you’re selling a computer to someone else; you bought it for 1000$ two years ago and want somewhere around 500-800$ for it. The other person is willing to buy it for 400-600$ depending on what you give as an initial offer. In a normal situation, you might get a price of 550$ for the computer.  Let’s now imagine that the person you’re negotiating with has a great sense of social cues and notices that you’re happy about the trade. They might now push you even further and get the price lower. Why did this happen? They got a glimpse into the decision process in your head so they could model you and predict what price you would have wanted.  They were then able to predict what you were going to do in the future.  This is an example relevant to what Dennett believes the entire fuss in the debate on free will is about, a boundary that retains our internal thoughts or, in
a1588a9c-1caf-41c9-8657-3914bde406cc
trentmkelly/LessWrong-43k
LessWrong
A Simple Two-Axis Model of Subjective States, with Possible Applications to Utilitarian Problems At the beginning of last year, I noticed that neuroscience is very confused about what suffering is. This bothered me quite a lot, because I couldn't see how Utilitarianism or even Consequentialism could ever be coherent without a coherent understanding of suffering. The following is my simple model that has helped me think through some issues and clarified my intuitions a bit. I am both posting this to share what I feel is a useful tool, and inviting criticism. "Utilitarianism" (which term I will use to refer to all applicable formulations of Utilitarianism) assumes a space of subjective states that looks like a number line. Suffering is bad. Joy is good. They are negatives of each other, and straightforwardly additive. This has always struck me as nonsense, not at all reflective of my inner experience, and thus a terrible basis for extrapolating ethics over populations. It feels more natural to treat Joy and Suffering as two axes. There may or may not be possible mappings between them depending on context. It does not seem natural to assume they are fully orthogonal, but there is also no reason to assume any particular mapping. There is particularly no reason to assume the mapping is anything like "+1 suffering is identical to -1 joy". There is extra-super-no-reason to assume that there's some natural definition of a vector magnitude of joy-suffering that is equivalent to "utility". If you're having a nice conversation with an old friend you haven't seen in a while and you're feeling good, it's likely that your subjective sense of suffering is quite low and your subjective sense of joy is quite high. If you have a bad headache and you're lying in bed trying to rest, your subjective sense of suffering is pretty high, and your level of joy is quite low. What interests me is the "mixed" scenario, where your friend is only in town briefly, and you happen to have a headache. You're happy to be able to visit with them, so there's a definite quality of joy to you
dcd5fa42-91c6-44d1-9b08-4436dd328e50
trentmkelly/LessWrong-43k
LessWrong
Politics Discussion Thread September 2012 The last thread didn't fare too badly, I think; let's make it a monthly tradition. (Me, I'm more interested in thinking about real-world policies or philosophies, actual and possible, rather than AI design or physics, and I suspect that many fine, non-mind-killed folks reading LW also are - but might be ashamed to admit it!) Quoth OrphanWilde: 1. Top-level comments should introduce arguments; responses should be responses to those arguments.  2. Upvote and downvote based on whether or not you find an argument convincing in the context in which it was raised.  This means if it's a good argument against the argument it is responding to, not whether or not there's a good/obvious counterargument to it; if you have a good counterargument, raise it.  If it's a convincing argument, and the counterargument is also convincing, upvote both.  If both arguments are unconvincing, downvote both.  3. A single argument per comment would be ideal; as MixedNuts points out here, it's otherwise hard to distinguish between one good and one bad argument, which makes the upvoting/downvoting difficult to evaluate.  4. In general try to avoid color politics; try to discuss political issues, rather than political parties, wherever possible. Let's try to stick to those rules - and maybe make some more if sorely needed. Oh, and I think that the "Personal is Political" stuff like gender relations, etc also belongs here.
5f4f6b80-c5aa-4f36-9bc6-7e9856769ce7
trentmkelly/LessWrong-43k
LessWrong
What is wrong with this "utility switch button problem" approach? Suppose you have some positive utility functions A, B, mathematically considered to be random variables dependent on the choice of policy.  Let K be a random variable that is 1 if the button is pressed, 0 otherwise; again dependent on the choice of policy. I am considering that there is a particular time at which the button must be pressed, to make this a single bit. You have a single chance to switch this AI between 2 utilities, and whatever you pick in that instant is baked in forevermore. Let a=E(A(1−K)), b=E(BK) be the expected partial utilities. Now consider all policies. In particular, consider the Pareto frontier between a and b. Through the introduction of randomness, we can make that Pareto frontier continuous. We want some policy from that frontier.  There will be some policy ha that throws all resources at maximizing a. And there will be some (usually different) policy hb that throws all available resources at maximizing b.  Let q be a tradeoff rate, and u=a(1−q)+bq. The Pareto frontier can now be defined as the policies that maximize u for varying q.  For ha, this is the policy that is optimal when q=0, which has u(ha)=a(ha). Then du/dE(K)<0.  Likewise, at hb with q=1, we have du/dE(K)>0. So somewhere in the middle, by the continuity granted by stochastic mixes of policies, there must be at least one point where du/dE(K)=0. Use that policy, or stochastic mixture of policies.    This agent will never pay ϵ utility to set the value of the button one way or the other. Because the policy is one which has E(u|K=0)=E(u|K=1) (I think), which hopefully means that paying to change the button does worse than the policy that doesn't pay but otherwise does everything else the same.  Put another way, if you pay to flip the button, then you must care about the state of the button: if the chance the button is pressed changes, you are getting more or less for your ϵ utility. So the gradient can't be 0.  And this is described as a choice of policy. A framework which automa
ede0a00f-00a5-4584-8a13-41c3f9044a1f
trentmkelly/LessWrong-43k
LessWrong
Intelligence in systems (human, AI) can be conceptualized as the resolution and throughput at which a system can process and affect Shannon information. 10+ years in Machine Learning Infrastructure Engineering - My perspective: Intelligence in systems (human, AI) can be conceptualized as the resolution and throughput at which a system can process and affect Shannon information. This perspective emphasizes not just the quantity of information processed (throughput), but also the depth and detail with which it is handled (resolution), constrained to thermodynamic limitations. Aside from improving the thermodynamics of the system here are the 7 levers we can use to improve its intelligence: Physical Capacity: A system's intelligence increases as it expands its physical limits. This encompasses augmenting processing units (akin to neurons in humans or parameters in AI), improving thermal regulation, and maximizing energy throughput. Such enhancements enable a system to process information at a higher resolution and throughput. Cooperation: When entities collaborate, the collective intelligence of the system increases. This is due to the improved resolution at which information can be processed and influenced, a principle manifest in ensemble methods in AI where multiple models aggregate their insights. Conflict: The presence of conflict within or between systems can lead to an increase in intelligence. The necessity to adapt for survival and resolve conflicts escalates energy expenditure, which in turn refines the system's ability to process and affect information at a greater resolution and throughput. Attention: Enhancing the range, depth, and sampling rate of information a system can process boosts its intelligence. This increase is achieved by allowing the system to operate with a wider context and more frequent assessments of information, thereby processing it at a higher resolution. Actuation: Increasing the scope and precision of a system's actions directly impacts its intelligence. More diverse and precise actuation improves the system's capacity to affect information at a finer resolution. Memory (Past)
fd6af124-ecad-4c57-85d8-79c294eea661
trentmkelly/LessWrong-43k
LessWrong
Will outlets like the NYT be captured by Chinese influence and if so, when? John Cena's apology for suggesting that Taiwan is a country by saying “Taiwan is the first country that can watch F9” illustrates how China has immense power over Hollywood. As China's influence grows in the world, it will try to exert influence on more Western institutions. Should we expect outlets like the NYT to change in a way where calling Taiwan a country is a bad career move? If not, what will stop China from expanding its influence in such a way that it has similar influence over the NYT as it currently has over Hollywood? If it does happen, how long will it take?
72c646bc-9dc3-46e3-bc5b-b058ce457e1f
trentmkelly/LessWrong-43k
LessWrong
Ngo and Yudkowsky on alignment difficulty This post is the first in a series of transcribed Discord conversations between Richard Ngo and Eliezer Yudkowsky, moderated by Nate Soares. We've also added Richard and Nate's running summaries of the conversation (and others' replies) from Google Docs. Later conversation participants include Ajeya Cotra, Beth Barnes, Carl Shulman, Holden Karnofsky, Jaan Tallinn, Paul Christiano, Rob Bensinger, and Rohin Shah. The transcripts are a complete record of several Discord channels MIRI made for discussion. We tried to edit the transcripts as little as possible, other than to fix typos and a handful of confusingly-worded sentences, to add some paragraph breaks, and to add referenced figures and links. We didn't end up redacting any substantive content, other than the names of people who would prefer not to be cited. We swapped the order of some chat messages for clarity and conversational flow (indicated with extra timestamps), and in some cases combined logs where the conversation switched channels.   Color key:  Chat by Richard and Eliezer  Other chat  Google Doc content  Inline comments    0. Prefatory comments   [Yudkowsky][8:32]  (Nov. 6 follow-up comment)  (At Rob's request I'll try to keep this brief, but this was an experimental format and some issues cropped up that seem large enough to deserve notes.) Especially when coming in to the early parts of this dialogue, I had some backed-up hypotheses about "What might be the main sticking point? and how can I address that?" which from the standpoint of a pure dialogue might seem to be causing me to go on digressions, relative to if I was just trying to answer Richard's own questions.  On reading the dialogue, I notice that this looks evasive or like point-missing, like I'm weirdly not just directly answering Richard's questions. Often the questions are answered later, or at least I think they are, though it may not be in the first segment of the dialogue.  But the larger phenomenon is that I came in with s
43792348-92db-42c9-b705-3ea539acd968
StampyAI/alignment-research-dataset/blogs
Blogs
Looking back at my grad school journey I recently defended my [PhD thesis](https://dash.harvard.edu/handle/1/33840728), and a chapter of my life has now come to an end. It feels both exciting and a bit disorienting to be done with this phase of much stress and growth. My past self who started this five years ago, with a very vague idea of what she was getting into, was a rather different person from my current self. I have developed various skills over these five years, both professionally and otherwise. I learned to read papers and explain them to others, to work on problems that take months rather than hours and be content with small bits of progress. I used to believe that I should be interested in everything, and gradually gave myself permission not to care about most topics to be able to focus on things that are actually interesting to me, developing some sense of discernment. In 2012 I was afraid to comment on the [LessWrong](http://lesswrong.com) forum because I might say something stupid and get downvoted – in 2013 I wrote my first post, and in 2014 I started this blog. I went through the [Toastmasters](http://toastmasters.org) program and learned to speak in front of groups, though I still feel nervous when speaking on technical topics, especially about my own work. I co-founded a [group house](https://vkrakovna.wordpress.com/2014/09/07/citadel-house-sessions-a-year-in-review/) and a [nonprofit](http://futureoflife.org), both of which are still flourishing. I learned how to run events and lead organizations, starting with LessWrong meetups and the Harvard Toastmasters club, which were later displaced by running FLI. I remember agonizing over whether I should do a PhD or not, and I wish I had instead spent more time deciding where and how to do it. I applied to a few statistics departments in the Boston area and joined the same department that Janos was in, without seriously considering computer science, even though my only research experience back in undergrad was in that field. The statistics department was full of interesting courses and brilliant people that taught me a great deal, but the cultural fit wasn’t quite right and I felt a bit out of place there. I eventually found my way to the computer science department at the end of my fourth year, but I wish I had started out there to begin with. My research work took a rather meandering path that somehow came together in the end. My first project was part of the astrostatistics seminar, which I was not particularly motivated about, but I expected myself to be interested in everything. I never quite understood what people were talking about in the seminar or what I was supposed to be doing, and quietly dropped the project at the end of my first year when leaving for my quantitative analyst internship at D.E.Shaw. The internship was my first experience in industry, where I learned factor analysis and statistical coding in Python (the final review from my manager boiled down to “great coder, research skills need work”). In second year, my advisor offered me a project that was unfinished by his previous students, which would take a few months to polish up. The project was on a new method for classification and variable selection called SBFC. I dug up a bunch of issues with the existing model and code, from runtime performance to MCMC detailed balance, and ended up stuck on the project for 3 years. 
During that time, I dabbled with another project that sort of petered out, did a Google internship on modeling ad quality, and sank a ton of time into FLI. In the middle of fourth year, SBFC was still my only project, and things were not looking great for graduating. This was when I realized that the part of statistics that was interesting to me was the overlap with computer science and AI, a.k.a. machine learning. I went to the NIPS conference for the first time, and met a lot of AI researchers – I didn’t understand a lot of their work, but I liked the way they thought. I co-organized FLI’s Puerto Rico conference and met more AI people there. I finally ventured outside the stats department and started sitting in on ML lab meetings at the CS department, which mostly consisted of research updates on variational autoencoders that went right over my head. I studied a lot to fill the gaps in my ML knowledge that were not covered by my statistics background, namely neural networks and reinforcement learning (still need to read Sutton & Barto…). To my surprise, many people at the ML lab were also transplants from other departments, officially doing PhDs in math or physics. That summer I did my second internship at Google, on sum-product network models (SPNs) for anomaly detection in the Knowledge Graph. I wondered if it would result in a paper that could be shoehorned into my thesis, and whether I could find a common thread between SPNs, SBFC and my upcoming project at the ML lab. This unifying theme turned out to be interpretability – the main selling point of SBFC, an advantage of SPNs over other similarly expressive models, and one of my CS advisor’s interests. Working on interpretability was a way to bring more of the statistical perspective into machine learning, and seemed relevant to AI safety as well. With this newfound sense of direction, in a new environment, my fifth year had as much research output as the previous three put together, and I presented two workshop posters in 2016 – on [SPNs](http://openreview.net/forum?id=BNYYGWVA1F7PwR1riED4) at ICLR, and on [RNN interpretability](http://arxiv.org/abs/1606.05320) at ICML. Volunteering for FLI during grad school started out as a kind of double life, and ended up interacting with my career in interesting ways. For a while I didn’t tell anyone in my department that I co-founded a nonprofit trying to save the world from existential risk, which was often taking up more of my time than research. However, FLI’s outreach work on AI safety was also beneficial to me – as one of the AI experts on the FLI core team, I met a lot of AI researchers who I may not have connected with otherwise. When I met the DeepMind founders at the Puerto Rico conference, I would not have predicted that I’ll be interviewing for their AI safety team a year later. The two streams of my interests, ML and AI safety, have finally crossed, and the double life is no more. What lessons have I drawn from the grad school experience, and what advice could I give to others? * Going to conferences and socializing with other researchers was super useful and fun. I highly recommend attending NIPS and ICML even if you’re not presenting. * Academic departments vary widely in their requirements. For example, the statistics department expected PhD students to teach 10 sections (I got away with doing 5 sections and it was still a lot of work), while the CS department only expected 1-2 sections. 
* Internships were a great source of research experience and funding (a better use of time than teaching, in my opinion). It’s worth spending a summer interning at a good company, even if you are definitely going into academia. * Contrary to common experience, writer’s block was not an obstacle for me. My actual bottleneck was coding, debugging and running experiments, which was often tedious and took over half of my research time, so it’s well worth optimizing those aspects of the work. * The way FLI ended up contributing to my career path reminds me of [a story about Steve Jobs sitting in on a calligraphy class that later turned out to be super relevant to creating snazzy fonts for Apple computers](http://www.businessinsider.com/robert-palladino-calligraphy-class-inspired-steve-jobs-2016-3). I would recommend making time for seemingly orthogonal activities during grad school that you’re passionate about, both because they provide a stimulating break from research, and because they could become unexpectedly useful later. Doing a PhD was pretty stressful for me, but ultimately worthwhile. A huge thank you to everyone who guided and supported me through it!
42645067-523a-4d28-b72d-35c9a78f748a
trentmkelly/LessWrong-43k
LessWrong
Where do you live? Meetup planners want to know Disclaimer: English is not my mother tongue, so I am prone to making mistakes. Please correct and forgive me if I do. In the recent LessWrong survey, 1090 people responded. Sadly, information about place of residence was not asked for, but it could have been very useful to people wanting to plan a meetup. Since a similar questionnaire in German was quite successful with 24 respondents, I have now translated the form to English and ask you to provide the information. I ask you only to provide your country and general area of residence via postal code. The form is hosted at Google Docs, and the spreadsheet will be published in a few days to ensure anonymity for the first few respondents. The data cannot be traced back to specific individuals and would be useless in most cases. * Link to the form * Link to the responses Have fun, and please provide feedback in the comments.
6a00095b-0d1f-4605-87bb-a7997f6f7f2a
trentmkelly/LessWrong-43k
LessWrong
The "I Already Get It" Slide
9eb99b3b-1943-4078-a7cf-b4cf079b046f
trentmkelly/LessWrong-43k
LessWrong
Announcing the ERA Cambridge Summer Research Fellowship The Existential Risk Alliance (ERA) has opened applications for an in-person, paid, 8-week Summer Research Fellowship focused on existential risk mitigation, taking place from July 3rd to August 25th 2023 in Cambridge, UK, and aimed at all aspiring researchers, including undergraduates.  To apply and find out more, please visit the ERA website.  If you are interested in mentoring fellows on this programme, please submit your name, email and research area here, and we will get in touch with you in due course.  If you know other people who would be a good fit, please encourage them to apply (people are more likely to apply if you recommend they do, even if they have already heard of the opportunity!) If you are a leader or organiser of relevant community spaces, we encourage you to post an announcement with a link to this post, or alternatively a printable poster is here. Applications will be reviewed as they are submitted, and we encourage early applications, as offers will be sent out as soon as suitable candidates are found. We will accept applications until April 5, 2023 (23:59 in US Eastern Daylight Time).  The ERA Fellowship (previously known as the CERI Fellowship) is a fantastic opportunity to: * Build your portfolio by researching a topic relevant to understanding and mitigating existential risks to human civilisation. * Receive guidance and develop your research skills, via weekly mentorship from a researcher in the field. * Form lasting connections with other fellows who care about mitigating existential risks, while also engaging with local events including discussions and Q&As with experts. Why we are running this programme  Our mission as an organisation is to reduce the probability of an existential catastrophe. We believe that one of the key ways to reduce existential risk lies in fostering a community of dedicated and knowledgeable x-risk researchers. Through our summer research fellowship programme, we aim to identify and support aspiring
0641a147-bfbf-40d9-a896-f0504226094f
trentmkelly/LessWrong-43k
LessWrong
Clumping Solstice Singalongs in Groups of 2-4 This post assumes you're familiar with rationalist solstice. (It also assumes that while yes, ritual is something to be epistemically careful about, the overall effect size is relatively small compared to spending much of your life thinking about a topic with peers that think that topic is important, and meanwhile having community identities is valuable. If you want to debate that please do so on one of those previous posts) If you run a solstice ceremony with singalongs, there's particular value in: * Doing at least 16 singalongs * Clumping* them together in groups of 2-4, rather than alternating song / story / song / story. (Clumping is valuable even if you are doing a smaller number of songs) This isn't the right approach for all possible solstice aesthetics, but there's a magic thing that can happen here if you do. And if you're not doing it (i.e. most solstice organizers seem to default to the "story/song/story/song" thing), you won't receive any feedback that there's a different thing you could do with a magic, synergistic outcome. Reasons to want more songs, and to cluster them in groups of 2-4: * It takes people awhile to get comfortable singing. * Context switching makes it harder to get into the headspace of singing. * There is a secret, deeper headspace of singing that you only get to if you do a LOT of it, in a row, in an environment that encourages being thoroughly un-self-conscious about it. * There is a long game that I think singalong solstice celebrations can help with, which is to restore musicality as a basic skill, which in turn allows you to have much richer musical traditions than if it's an incidental thing you do a little of sometimes. The payoff for this comes on a multi-year timescale. There are reasons not to want this many songs, or to have them clustered this way. Some people get more value out of the speeches or other activities than songs. One organizer of a small solstice mentioned their primary concern was "Have each per
462495bc-826e-43b0-8ee2-530b64526491
trentmkelly/LessWrong-43k
LessWrong
AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work We wanted to share a recap of our recent outputs with the AF community. Below, we fill in some details about what we have been working on, what motivated us to do it, and how we thought about its importance. We hope that this will help people build off things we have done and see how their work fits with ours. Who are we? We’re the main team at Google DeepMind working on technical approaches to existential risk from AI systems. Since our last post, we’ve evolved into the AGI Safety & Alignment team, which we think of as AGI Alignment (with subteams like mechanistic interpretability, scalable oversight, etc.), and Frontier Safety (working on the Frontier Safety Framework, including developing and running dangerous capability evaluations). We’ve also been growing since our last post: by 39% last year, and by 37% so far this year. The leadership team is Anca Dragan, Rohin Shah, Allan Dafoe, and Dave Orr, with Shane Legg as executive sponsor. We’re part of the overall AI Safety and Alignment org led by Anca, which also includes Gemini Safety (focusing on safety training for the current Gemini models), and Voices of All in Alignment, which focuses on alignment techniques for value and viewpoint pluralism.  What have we been up to? It’s been a while since our last update, so below we list out some key work published in 2023 and the first part of 2024, grouped by topic / sub-team.  Our big bets for the past 1.5 years have been 1) amplified oversight, to enable the right learning signal for aligning models so that they don’t pose catastrophic risks, 2) frontier safety, to analyze whether models are capable of posing catastrophic risks in the first place, and 3) (mechanistic) interpretability, as a potential enabler for both frontier safety and alignment goals. Beyond these bets, we experimented with promising areas and ideas that help us identify new bets we should make.  Frontier Safety The mission of the Frontier Safety team is to ensure safety from extreme harms b
b05ce6f2-fe35-49c8-a29d-25df99e60e5e
StampyAI/alignment-research-dataset/arxiv
Arxiv
Balancing Efficiency and Comfort in Robot-Assisted Bite Transfer. I Introduction --------------- Imagine a setting where you want to pick up a piece of food, e.g., a baby carrot, from a salad bowl to eat. Non-disabled people might overlook the complexity of this daily task — they might use a fork to pick up the carrot, while carrying a conversation and not paying as much attention on how the carrot is placed on the fork. Regardless of this placement, they move the fork in a manner that is not only efficient in how much food can be eaten but is also comfortable for the duration of the motion. This task presents numerous challenges for more than 12 million people with mobility-related disabilities [[1](#bib.bib1)]. Assistive robot arms have the potential to bridge this gap, and therefore provide care for those with disabilities. However, operating these arms can be challenging [[2](#bib.bib2), [3](#bib.bib3)]. In our initial surveys, people with mobility impairment mentioned the need for intelligent autonomy that optimizes comfort and adapts to the food item being fed. We envision intelligent algorithms that are aware of user comfort without the need for explicit user input. Achieving this level of autonomy presents a number of challenges which carry over to other robotics applications, including: 1) perceiving and choosing the next bite of food on a plate, 2) acquiring the food item with an appropriate tool, 3) transferring these items into the mouth in an efficient and comfortable manner. In recent years, there has been significant advances in food perception and acquisition [[4](#bib.bib4), [5](#bib.bib5)]. It turns out that the food acquisition strategy (e.g. fork skewering angle) heavily affects a user’s comfort during bite transfer [[5](#bib.bib5)]; however, prior bite transfer methods rely on predetermined transfer trajectories for a discrete set of acquisition strategies and food geometries [[6](#bib.bib6)]. To handle a wide variety of food items and acquisition methods, a bite transfer strategy must optimize its trajectories on the fly by bringing food into a mouth without sacrificing user comfort. However, this is challenging with real world sources of variation (e.g. food geometries, sizes, acquisition poses on the fork, and mouth shapes). Even with one food geometry and acquisition pose, there are often many different “collision-free” paths into the mouth, so the feeding agent should filter this solution space intelligently. For instance, consider a vertically aligned baby carrot oriented perpendicular to the fork, as shown in Fig. [1](#S0.F1 "Figure 1 ‣ Balancing Efficiency and Comfort in Robot-Assisted Bite Transfer"). There are a wide range of possible feeding paths; some may come too close to a person’s face, affecting comfort, while others may only bring the tip of the carrot into the mouth, limiting bite volume. Regardless of the orientation or type of food on our fork, caregivers will intuitively balance the bite volume efficiency for a single bite with the comfort of that bite. Motivated by this behavior, we present a bite transfer algorithm for selecting trajectories in a continuous space of mouth sizes, food geometries, and poses. Our approach (Section [III](#S3 "III Context-Aware Multi-Bite Transfer ‣ Balancing Efficiency and Comfort in Robot-Assisted Bite Transfer")) takes as input a food mesh and an acquisition pose on the fork from the real world, and generates an analogous simulation environment. 
We learn a constraint model to sample goal food poses near the mouth, and perform motion planning based on a novel set of heuristics (Section [IV](#S4 "IV Comfort & Efficiency in Motion Planning ‣ Balancing Efficiency and Comfort in Robot-Assisted Bite Transfer")) to shape the perceived comfort and bite volume efficiency of each transfer. To our knowledge, our approach is the first to formulate comfort and efficiency for bite transfer, to consider non-bite sized food items, and to work for a continuum of acquisition poses and food geometries. We demonstrate our algorithm in practice through a limited user study (Section [V](#S5 "V User Study ‣ Balancing Efficiency and Comfort in Robot-Assisted Bite Transfer")). Our results show that while comfort alone and efficiency alone are able to outperform fixed trajectories on average, our approach of blending comfort and efficiency is the only method to outperform a fixed pose baseline with statistical significance. We run our method on various food items of differing geometries and scales in simulation (Appendix [D](#A4 "Appendix D Simulation Experiments ‣ Balancing Efficiency and Comfort in Robot-Assisted Bite Transfer")). II Related Work ---------------- Our work draws inspiration not only from the state-of-the-art in the robot-assisted feeding literature but also from the shared autonomy and general robot-human handovers. Robot-assisted Feeding: Bite acquisition and transfer. Several specialized feeding devices for people with disabilities have come to market in the past decade. Although several automated feeding systems exist [[7](#bib.bib7), [8](#bib.bib8), [9](#bib.bib9), [10](#bib.bib10)], they lack widespread acceptance as they use minimal autonomy, demanding a time-consuming food preparation process [[11](#bib.bib11)], or pre-cut packaged food and cannot adapt the bite transfer strategies to large variations due to pre-programmed movements. Existing autonomous robot-assisted feeding systems such as [[4](#bib.bib4)], [[5](#bib.bib5)], [[12](#bib.bib12)], and [[13](#bib.bib13)] can acquire and feed a fixed set of food items, but it is not clear whether these systems can adapt to different food items that are either not bite-sized and require multiple bites or require other bite transfer strategies. Feng et al. [[4](#bib.bib4)] and Gordon et al. [[14](#bib.bib14)] developed an online learning framework using the SPANet network and showed *acquisition* generalization to previously-unseen food items, but did not address the *bite transfer* problem. Gallenberger et al. [[5](#bib.bib5)] showed a relationship exists between bite acquisition and transfer, but did not propose how to transfer bites for non bite-sized items in such a setting. Our paper aims to close this gap in bite transfer by developing a context-aware framework for robot-assistive feeding which generalizes to food items that are not bite-sized. Shared Autonomy for Robotic Assistance. Adding autonomy to provide robotic assistance to tasks by inferring human intent is a well-studied field [[15](#bib.bib15), [2](#bib.bib2), [16](#bib.bib16), [17](#bib.bib17), [18](#bib.bib18), [19](#bib.bib19), [20](#bib.bib20), [3](#bib.bib3)]. This is especially relevant for precise manipulation tasks such as bite acquisition or bite transfer during robot-assisted feeding, while drawing parallels to other tasks such as peg-in-hole insertion [[21](#bib.bib21), [22](#bib.bib22)]. 
For example, there has been work on using the concept of shared autonomy for bite acquisition tasks such as stabbing a bite, scooping in icing, or dipping in rice [23], where the researchers combined embeddings from a learned latent action space with robotic teleoperation to provide assistance. Unlike this body of work, this paper focuses on completely autonomous bite transfer of food items, keeping in mind our end user population, which may have severe mobility limitations. Robot-human Handovers. There are many works analyzing robot-human handovers, but most of the studies focus on objects that are handed over in a single attempt, without using an intermediate tool [24, 25, 26]. In this paper, we focus on tool-mediated handover of food items that may not be bite-sized, and thus may require multiple handover attempts. The feeding handover situation poses the additional challenge of transferring to a constrained mouth, instead of a hand [5]. Gallenberger et al. [5] explore the problem of bite transfer by providing the insight that bite transfer depends on bite acquisition, and thus the transfer trajectories are not only food-item dependent but also based on how the food was acquired. Cakmak et al. [27] study the handover problem in an application-agnostic way, identifying human preferences for object orientations and grasp types. Similarly, Aleotti et al. [28] confirmed that orienting items in specific ways can make handover easier. Canal et al. [29] take a step further and explore how bite transfer can change with personal preferences. In our paper, we focus on tool-mediated bite transfer of food items that may not be bite-sized and hence may require multiple transfer attempts.

III Context-Aware Multi-Bite Transfer
--------------------------------------

A caregiver can guide a food item into their patient's mouth agnostic to the orientation of the food on the fork; they do not spend minutes optimizing how the food should be placed on the fork for an optimal transfer. In this section, we first formalize the goal of acquisition-agnostic bite transfer, and then discuss our approach.

### III-A Bite Transfer Problem Formulation

To begin a bite transfer iteration, we are given a 3D mesh $\mathcal{M}_{\text{food}}$ of the food item, the constant pose $p_f \in \mathbb{R}^6$ of the food item on the fork, a kinematics model for the robot and fork system with corresponding mesh $\mathcal{M}_R$, and the pose estimate of the mouth $p_m \in \mathbb{R}^6$. To capture the motion of the food item into the mouth, we want to find waypoints of the food item over time, represented by poses $p \in \mathbb{R}^6$. Additionally, we assume the mouth can be represented by a simple elliptical tube $\mathcal{M}_{\text{mouth}}$, where the ellipse axes are in the face plane, and open mouth dimensions $d_m \in \mathbb{R}^2$ are specified per end user. These inputs are visualized in our PyBullet-based simulation environment in Fig. 2. We outline our method for acquiring these inputs in Appendix C. Given these inputs, we formulate the goal of bite transfer as finding a sequence of food poses $\mathcal{T} = \{p_0, \dots, p_{L-1}\}$ of varying length $L$, respecting a set of physical constraints $\mathcal{C}$ and cost function $\mathcal{J}(\mathcal{T})$, shown in Eq. (1). For the task of bite transfer, $\mathcal{C}$ consists of physical constraints. $\mathcal{C}_0$ (Eq. (2)) and $\mathcal{C}_1$ (Eq. (3)) ensure no collisions between the mouth mesh $\mathcal{M}_{\text{mouth}}$ of dimensions $d_m$ with pose $p_m$ and, respectively, the food mesh $\mathcal{M}_{\text{food}}$ at each pose $p_j \in \mathcal{T}$ and the robot-fork mesh $\mathcal{M}_R$. $\mathcal{C}_2$ (Eq. (4)) constrains the final food pose to be near the mouth opening, i.e., $p_G \doteq p_{L-1}$ is in the support of the goal pose distribution $\mathcal{D}_g$.

$$\mathcal{T}^* = \operatorname*{argmin}_{\mathcal{T}} \mathcal{J}(\mathcal{T}) \quad \text{s.t.} \quad \mathcal{C}_i(\mathcal{T}) = 1 \;\; \forall \, \mathcal{C}_i \in \mathcal{C} \tag{1}$$
$$\mathcal{C}_0(\mathcal{T}) = \big[\, \mathcal{M}_{\text{food}}(p_j) \cap \mathcal{M}_{\text{mouth}}(p_m) = \emptyset \;\; \forall \, p_j \in \mathcal{T} \,\big] \tag{2}$$
$$\mathcal{C}_1(\mathcal{T}) = \big[\, \mathcal{M}_{R}(p_j) \cap \mathcal{M}_{\text{mouth}}(p_m) = \emptyset \;\; \forall \, p_j \in \mathcal{T} \,\big] \tag{3}$$
$$\mathcal{C}_2(\mathcal{T}) = \big[\, p_G \in \mathcal{D}_g \,\big] \tag{4}$$

### III-B Approach Overview

When taking a bite, a person intuitively simulates the physics of their mouth's interaction with a carrot on our fork, regardless of the carrot's orientation or where their arm starts. They might initially visualize where the carrot should be in the mouth and work backwards to find the most comfortable and efficient path. Our approach captures this intuition. Our simulation environment (see Fig. 2) reflects the real world setup, where the mouth is replaced with a static elliptical mouth model, allowing us to simulate the interactions between the human mouth and the food item. Our approach in Fig. 1 consists of three phases: sampling, clustering, and planning. Since the space of feasible goal food poses in the mouth is continuous, we outline two efficient goal sampling methods (*Projection* & *Learned Constraints*), which leverage simulation to batch sample from a set of "feasible" orientations and offsets from the mouth, defined by the distribution $\mathcal{D}_g$, and then check these samples against the constraints $\mathcal{C}$ to generate a varied set of feasible goal food poses $p_G$ near the mouth. Next, we cluster the constraint-satisfying poses into a set of $K$ goals with broad coverage over $\mathcal{D}_g$. We use heuristic-guided bi-directional rapidly-exploring random trees (h-BiRRT) to search for paths to goal food poses within the mouth that respect the physical constraints $\mathcal{C}$. We guide the addition of new nodes to the h-BiRRT with a cost-so-far function $g$ and a heuristic cost-to-go function $h$, where the sum of $g$ and $h$ defines the overall predicted cost $\mathcal{J}$ of a node in the h-BiRRT produced graph:

$$\mathcal{J}(\mathcal{T}) \approx \mathcal{J}(\{p_0 \dots p_i\}, p_G) = g(p_0 \dots p_i) + h(p_i, p_G) \tag{5}$$

Section IV discusses how we incorporate comfort and efficiency into $h$ and $g$. Here, we first outline a method for generating goal poses $p_G \sim \mathcal{D}_g$ that satisfy the constraints $\mathcal{C}$.

Figure 2: Left: PyBullet sim with robot mesh (Franka Emika Panda) $\mathcal{M}_R$, food object mesh $\mathcal{M}_{\text{food}}$ (e.g. carrot) at pose $p$, and mouth mesh $\mathcal{M}_{\text{mouth}}$ (cylindrical tube, radii from $d_m$) at pose $p_m$. Right: End-to-end algorithm timing for the learned constraint model compared to projection-based sampling (100 trajectories each).

Sampling Food Objects with Projection. When sampling goal food poses, there are certain fork orientations that are impossible or unsafe for the arm to reach (e.g. the fork pointing backwards relative to the face). We restrict the orientations of the robot end effector to be within a spherical cut centered on the into-mouth axis, and position offsets from the mouth center are bounded. These bounds form the uniform goal distribution $\mathcal{D}_g$. The full sampling algorithm is outlined in the appendix in Algorithm 1. We first generate batches of food goal poses from $\mathcal{D}_g$ and check for collision, repeating this process until reaching $N$ collision-free samples or timing out. Since a person's true mouth cavity fits within the tube-like elliptical mouth in simulation, which has a constant cross section in the mouth plane (see Fig. 2), we accelerate the 3D collision check by slicing the food by the mouth plane and then projecting the inner food mesh vertices for each goal pose onto the mouth plane (*Projection*). The second image from the left in Fig. 7 (Appendix A) shows the slicing plane, with a sample carrot geometry. Finally, we can check whether the vertices are within the 2D mouth cross section to detect if the goal pose is collision-free.

Improved Sampling via Learned Constraints. While *Projection* checks samples for collision faster than a naïve 3D collision check, it still has significant and high-variance lag (Fig. 2). We thus propose a sampling method, *Learned Constraints*, that learns to predict constraint values (e.g., collision prediction) from 1M simulation samples, with model inputs ($d_m$, $\mathcal{M}_{\text{food}}$, $p_f$, $p_G$). In Fig. 2, the right plot shows that a learned collision predictor significantly reduces sampling time.
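To make the *Projection* check concrete, here is a minimal sketch of the idea in Python/NumPy. This is not the authors' code: the frame convention, the helper names (`sample_fn`, `to_mouth_frame`), and the margin handling are assumptions made for illustration, and the paper's mesh slicing is approximated by simply keeping the vertices that pass the face plane.

```python
import numpy as np

def inside_mouth_ellipse(food_vertices_mouth_frame, mouth_radii, margin=0.0):
    """Approximate collision check for one candidate goal pose.

    food_vertices_mouth_frame: (N, 3) food mesh vertices in the mouth frame,
        with +z along the mouth axis (into the mouth) and (x, y) in the face plane.
    mouth_radii: (a, b) semi-axes of the elliptical mouth opening.
    Returns True if the part of the food past the face plane fits through the opening.
    """
    a, b = mouth_radii
    v = np.asarray(food_vertices_mouth_frame)

    # Keep only vertices that would be inside the mouth tube (z >= 0).
    # (The paper slices the mesh at the plane; using raw vertices is a simplification.)
    inner = v[v[:, 2] >= 0.0]
    if inner.size == 0:
        return True  # nothing enters the mouth; trivially collision-free

    # Project onto the face plane and test the 2D ellipse equation.
    x, y = inner[:, 0], inner[:, 1]
    return bool(np.all((x / (a - margin)) ** 2 + (y / (b - margin)) ** 2 <= 1.0))

def sample_goal_poses(sample_fn, to_mouth_frame, mouth_radii, n_goals=50, max_tries=5000):
    """Rejection-sample collision-free goal poses p_G from the goal distribution D_g."""
    goals = []
    for _ in range(max_tries):
        pose = sample_fn()            # draw p_G ~ D_g (bounded offsets and orientations)
        verts = to_mouth_frame(pose)  # food mesh vertices under this candidate goal pose
        if inside_mouth_ellipse(verts, mouth_radii):
            goals.append(pose)
            if len(goals) >= n_goals:
                break
    return goals
```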
In Appendix A, we provide further details and show that *Learned Constraints* maintains sample quality (e.g., predictive accuracy) and end-to-end trajectory performance (e.g., comfort & efficiency costs).

Clustering Goal Food Poses. Once we have timed out or reached $N$ collision-free samples, we consolidate these goal poses into a representative set over $\mathcal{D}_g$ for the planning step. We use a standard implementation of k-medoids, although any medoid clustering algorithm can be substituted.

Motion Planning with Heuristic-Guided BiRRT. Once collision-free goal food poses have been generated and clustered, we must find trajectories to reach these goals. We adapt Rapidly-exploring Random Trees (RRT) for our motion planning. Inspired by LaValle et al. [30], who used bi-directional search ideas to grow two RRTs, we use one tree from the start state $p_0$ and the other from the goal state $p_G^k$. To bias the two search trees towards each other, we take a heuristics-based approach [31]. See Appendix B for details on our heuristic-guided implementation. Designing the heuristic cost functions will be discussed next.

IV Comfort & Efficiency in Motion Planning
-------------------------------------------

The solution space of feasible transfer trajectories is often large: a small strawberry can be eaten in a wide variety of fork orientations due to its size and inherent symmetry. Our approach narrows this solution space with comfort and efficiency heuristics during motion planning. One intuitive formulation is the path cost $g$ being the distance between food poses (Eq. (6)), with the heuristic $h$ being the distance to the goal pose (Eq. (7)).

$$g(p_0 \dots p_i) = \sum_{j=0}^{i-1} \|p_{j+1} - p_j\| \tag{6}$$
$$h(p_i, p_G) = \|p_G - p_i\| \tag{7}$$

While this cost function guides hRRT to the goal pose, finding the shortest-distance path in food pose space ignores both comfort and bite volume efficiency. Consider the vertically oriented carrot in the first row of Fig. 3. If we sample a goal pose with the carrot oriented into the mouth and just past the teeth, a straight path to this goal is optimal in distance cost, but the person can barely take a bite; in addition, the end effector would be close to the user's face, which could be considered uncomfortable. From our initial surveys with users with mobility limitations, we indeed conclude that comfort and efficiency are essential during bite transfer. One participant comments, "*The orientation should be comfortable for the utensil and the food … the arm should maintain a low profile to not obstruct sight … it should be fast…*". To this end, we develop two competing cost functions to shape the trajectories produced by hRRT: (1) bite efficiency, capturing the percentage of the food inside the mouth at the end of a trajectory, and (2) trajectory comfort, capturing the perceived user comfort along a given trajectory.

Figure 3: Left: The fundamental trade-off between average comfort and efficiency costs for a grid of chosen relative weightings of comfort and efficiency, with costs from our h-BiRRT method averaged over 500+ initial food geometries and poses in simulation. Teal represents high ratios of comfort to efficiency, and orange the opposite. Our weights are the green dot at the elbow of this trade-off, achieving low efficiency cost and low comfort cost. Right: Sample trajectories for Fixed Pose (top) and Comfort+Efficiency (bottom) for the Vertical food geometry (see Section V). While Fixed Pose (baseline) incurs high comfort cost (close to the user's face), our method finds a trajectory that is both comfortable and efficient for the user.

### IV-A Modeling Efficiency

The most efficient goal pose brings the most food into a person's mouth, which can be measured in the real world by comparing the food mesh before and after each bite. In simulation, we approximate the new food mesh without knowing the biting physics for a user. Instead, we assume a bite slices the food mesh in the face plane (see Figure 7). Let $V_i$ be the volume of the food geometry, and $V_f$ be the remaining volume after the bite. We estimate the efficiency cost of goal poses with Eq. (8). The $n$-th root ($n = 3$ in practice) of the volume ratio amplifies the cost difference between goal poses of lower final volumes (high efficiency) to more noticeably bias RRT growth towards the most efficient goal poses. The resulting costs are in Eq. (9) & (10).

$$\mathcal{J}_E(p_G) = (V_f / V_i)^{1/n} \tag{8}$$
$$g(p_0 \dots p_i) = \sum_{j=0}^{i-1} \|p_{j+1} - p_j\| \tag{9}$$
$$h(p_i, p_G) = \|p_G - p_i\| + \beta_E \, \mathcal{J}_E(p_G) \tag{10}$$

Note that this cost only considers the goal pose, rather than the entire trajectory. We empirically found that other notions of efficiency applied to paths, like trajectory execution time or the distance of the path traveled, do not vary as much between outputs of h-BiRRT, and so have less impact on the quality of the trajectories produced.
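As a minimal illustration of the efficiency term, here is a sketch of Eqs. (8) and (10) in Python. It is not the authors' implementation; in particular, the caller is assumed to supply the pre- and post-bite volumes ($V_i$, $V_f$) from its own mesh-slicing routine, and the weight value is a placeholder.

```python
import numpy as np

def efficiency_cost(initial_volume, remaining_volume, n=3):
    """J_E(p_G) = (V_f / V_i)^(1/n), Eq. (8). Lower is better: more food past the face plane."""
    return (remaining_volume / initial_volume) ** (1.0 / n)

def heuristic_with_efficiency(p_i, p_G, goal_efficiency_cost, beta_E=1.0):
    """h(p_i, p_G) = ||p_G - p_i|| + beta_E * J_E(p_G), Eq. (10)."""
    return np.linalg.norm(np.asarray(p_G) - np.asarray(p_i)) + beta_E * goal_efficiency_cost
```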
### IV-B Modeling Comfort and Personal Space

A trajectory that brings the arm too close within a user's personal space could influence the user's perceived safety, even if the transfer efficiency is high. We develop a notion of comfort that draws from proxemics literature in human-robot interaction, a well-studied field [32] for tasks such as social robot navigation [33]. Building off the notion of "personal space", we hypothesize that trajectories should stay within a conic region stemming from the mouth, with a wide cross-sectional area further from the face that narrows towards the mouth. Prior work in human factors for social navigation has shown that a person's comfortable personal space can be different for each cardinal direction, usually being larger within a person's visual field than outside [33]. Building on this intuition, we skew the cone down, away from the visual field.

Figure 4: Spatial comfort cost (red higher, green lower). The steeper cost gradient in the upward direction than downward ensures trajectories near the face (e.g., Fig. 3) have high "comfort" cost.

We define a spatial cost function resembling an elliptical Gaussian at each cross section centered along the mouth axis (Fig. 4). We posit that the upward direction relative to a person's face, which is closer to the visual field, should penalize deviation from the mouth axis more than the downward direction. For a distance $z \in \mathbb{R}^+$ along the mouth axis and offset from the mouth axis $x \in \mathbb{R}^2$ in the cross-section plane, we define the spatial comfort cost in Eq. (11).

$$\mathcal{J}_C^s(x, z) = 1 - e^{-\alpha \frac{x^T \Sigma(x)\, x}{z^2}} \tag{11}$$
$$\mathcal{J}_C(p) = \frac{1}{NM} \sum_{h_j \in H} \mathcal{J}_C^s(h_j) \tag{12}$$

Here, $\Sigma(x)$ is a piecewise covariance matrix in the face plane. In our experiments, we used a diagonal covariance matrix, with smaller variances above the mouth horizontal plane than below, and equal variances left and right. This cost and the mouth axis can be visualized in Fig. 4. Our comfort cost is applied on both the food item mesh and the entire simulated robot mesh and fork. In essence, we create a low-resolution depth image from the perspective of the mouth and apply our cost function on the 3D location of each pixel. For a given food pose $p$ and the corresponding simulated robot mesh, we cast rays in simulation along the mouth axis, starting at an $N \times M$ grid of points relative to the mouth center and on the face plane (e.g. $z = 0$), ending at a fixed maximum distance along the mouth axis $z_{\text{max}}$. The set of hit points from this ray cast, denoted $H = \{h_j \in \mathbb{R}^3\}$, is passed into the cost function in Eq. (11) and normalized by the total number of points (Eq. (12)).
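A rough sketch of how Eqs. (11) and (12) could be evaluated over the ray-cast hit points, assuming the hit points are already expressed in a mouth-centered frame (z along the mouth axis, y up in the face plane). The weight values below are illustrative placeholders rather than the paper's calibrated parameters; the only structural claims taken from the text are the piecewise penalty (stronger above the mouth than below) and the normalization by the full N x M grid.

```python
import numpy as np

def spatial_comfort_cost(hit_point, alpha=1.0, w_up=25.0, w_down=5.0, w_side=10.0, eps=1e-6):
    """Per-point cost J_C^s(x, z) = 1 - exp(-alpha * x^T W x / z^2), Eq. (11).

    hit_point: (3,) ray-cast hit in the mouth frame; (x, y) is the offset from the
    mouth axis in the face plane (y up), z is the distance along the mouth axis.
    W is piecewise diagonal: deviation above the mouth (toward the visual field)
    is penalized more heavily than deviation below or to the sides.
    """
    x, y, z = hit_point
    w_vertical = w_up if y > 0.0 else w_down
    quadratic = w_side * x ** 2 + w_vertical * y ** 2
    return 1.0 - np.exp(-alpha * quadratic / (z ** 2 + eps))

def comfort_cost(hit_points, n_grid, m_grid):
    """J_C(p), Eq. (12): sum per-point costs over the hit set H and divide by the
    full N*M grid size, so rays that never hit the robot or food contribute zero."""
    total = sum(spatial_comfort_cost(np.asarray(h)) for h in hit_points)
    return total / float(n_grid * m_grid)
```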
This comfort cost is incorporated as a distance-weighted edge cost in hRRT with weight $\gamma_C$, and can be included in the heuristic as an additional goal cost using weighting $\beta_C$ (Eq. (13) & (14)).

$$g(p_0 \dots p_i) = \sum_{j=0}^{i-1} \|p_{j+1} - p_j\| \cdot \big(1 + \gamma_C \, \mathcal{J}_C(p_j, p_{j+1})\big) \tag{13}$$
$$h(p_i, p_G) = \|p_G - p_i\| + \beta_C \, \mathcal{J}_C(p_G) \tag{14}$$

Here, $\mathcal{J}_C(p_j, p_{j+1})$ is shorthand for the comfort cost at the midpoint of these two food poses. We denote this formulation as "comfort only," since there is no consideration of efficiency here. Incorporating comfort alone can yield trajectories that keep the robot within the cone comfort region, but often this generates final goal poses that would not be easy to bite. Next, we will discuss incorporating both efficiency and comfort as costs for h-BiRRT.

### IV-C Trading off Comfort and Efficiency

Ideally, an assistive robot would be able to feed bites of food with both comfort and efficiency in mind. In order to maximize both comfort and efficiency, we can put together the comfort costs (Eq. (13) & (14)) and efficiency costs (Eq. (9) & (10)), yielding the cost functions for h-BiRRT in Eq. (15) & (16), where the weightings $\beta_E$, $\beta_C$, and $\gamma_C$ emphasize the efficiency at the goal, comfort at the goal, and comfort along the trajectory, respectively.

$$g(p_0 \dots p_i) = \sum_{j=0}^{i-1} \|p_{j+1} - p_j\| \cdot \big(1 + \gamma_C \, \mathcal{J}_C(p_j, p_{j+1})\big) \tag{15}$$
$$h(p_i, p_G) = \|p_G - p_i\| + \beta_C \, \mathcal{J}_C(p_G) + \beta_E \, \mathcal{J}_E(p_G) \tag{16}$$
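Putting the pieces together, here is a hedged sketch of how the combined costs of Eqs. (15) and (16) might score nodes during h-BiRRT growth. The helper `comfort_between` and all weight values are placeholders standing in for the paper's comfort model and tuned weightings, not the authors' implementation.

```python
import numpy as np

def path_cost(poses, comfort_between, gamma_C=1.0):
    """g(p_0 ... p_i), Eq. (15): edge lengths scaled up by the comfort cost at each edge midpoint."""
    total = 0.0
    for p_a, p_b in zip(poses[:-1], poses[1:]):
        dist = np.linalg.norm(np.asarray(p_b) - np.asarray(p_a))
        total += dist * (1.0 + gamma_C * comfort_between(p_a, p_b))
    return total

def goal_heuristic(p_i, p_G, comfort_at_goal, efficiency_at_goal, beta_C=1.0, beta_E=1.0):
    """h(p_i, p_G), Eq. (16): distance to goal plus goal comfort and efficiency penalties."""
    return (np.linalg.norm(np.asarray(p_G) - np.asarray(p_i))
            + beta_C * comfort_at_goal
            + beta_E * efficiency_at_goal)

def node_score(poses_so_far, p_G, comfort_between, comfort_at_goal, efficiency_at_goal):
    """Predicted total cost J = g + h (Eq. 5) used to bias node expansion in h-BiRRT."""
    return (path_cost(poses_so_far, comfort_between)
            + goal_heuristic(poses_so_far[-1], p_G, comfort_at_goal, efficiency_at_goal))
```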
In Fig. 3, we plot the average comfort and efficiency scores for our h-BiRRT pipeline over a grid of weight values for $\beta_E$, $\beta_C$, and $\gamma_C$, and over a large number of initial food poses and geometries (e.g., carrots, strawberries, celery, cantaloupes) in simulation. Refer to Appendix D for quantitative results and example trajectories for each food type. Optimizing for efficiency only finds trajectories with the highest comfort costs but lowest efficiency costs, and vice versa for comfort only. This demonstrates that there is in fact a trade-off in comfort and efficiency costs when running h-BiRRT with our heuristic functions. Our approach will choose the "elbow" of this trade-off, balancing both efficiency and comfort.

V User Study
-------------

Figure 5: User study quantitative results. Each non-highlighted plot shows the average comfort rating, ease rating, and rank between trajectory types across 4 different food poses, with the range across all 6 users plotted as error bars. The highlighted plots on the left show the average across all food poses, with error bars representing 95% confidence. Significant results, as determined by two-way ANOVA with repeated measures, Tukey HSD test, and Bonferroni correction ($P < 0.01$), are marked with an asterisk. We do not treat multiple ratings as independent. Despite the limited sample size ($N = 6$), trajectories from the combined comfort and efficiency method perform significantly better than the baseline fixed pose approach across all three metrics. Notably, the efficiency-only method often performs worse than comfort-only in comfort ratings. See Appendix E for significance testing details and more analysis.

### V-A Experimental Setup

Figure 6: Example trajectories optimized for each metric: (a) Fixed Pose, (b) Efficiency Only, (c) Comfort Only, (d) Comfort + Efficiency. The efficiency metric (b) rotates the carrot sideways so as much can be consumed in one bite as possible. The comfort metric (c) penalizes more complicated trajectories where the robot body is likely to encroach on the face. The combined metric (d) results in a fairly straight trajectory that still ends with a sideways carrot.

We conducted a user study with six non-disabled participants to evaluate the perceived comfort and efficiency with our real world setup (we decided to recruit non-disabled participants due to Covid-19 & safety concerns; please see Appendix E for further discussion).
In Appendix [C](#A3), we discuss our real world system design and how we ensure user safety during our user studies. Key parameter choices are shown in Table [II](#A2.T2) in the Appendix. We consider carrots of varying sizes and fixed acquisition poses, visualized in the first row of Fig. [5](#S5.F5): Vertical, Horizontal, Roll & Pitch, and Yaw. Users were instructed to sit still facing the robot, and to take a bite of each food item after each trajectory if they felt comfortable doing so. In addition, an emergency stop button was placed next to them for added assurance. See Appendix [E](#A5) for more user study details. We evaluated the following methods:

1. FixedPose (F): We fix the final orientation of the food item independent of the pose of the food item on the fork and the food size. This final orientation is hard-coded for a specific type of food and is inspired by the taxonomy of food manipulation strategies developed in [[6](#bib.bib6)].
2. EfficiencyOnly (E): Our approach with h-BiRRT and only efficiency costs, Eq. ([9](#S4.E9)) and ([10](#S4.E10)).
3. ComfortOnly (C): Our approach with h-BiRRT and only comfort costs, Eq. ([13](#S4.E13)) and ([14](#S4.E14)).
4. Comfort+Efficiency (CE): Our approach with both efficiency and comfort in mind: the h-BiRRT cost functions use both efficiency and comfort costs, Eq. ([15](#S4.E15)) and ([16](#S4.E16)).

For each food pose and method, we evaluate two trajectories end-to-end with each user. After the two trajectories for a given method, we ask a series of questions to gauge the user's perceived comfort of each trajectory and the ease with which they were able to take a bite. We compare responses to these questions in terms of Comfort (the average user rating of comfort for each evaluated trajectory, from 1 to 5, with 5 being the most comfortable), Ease (the average rating of the ease of taking a bite for each evaluated trajectory, normalized from 1 to 5, with 5 being the best), Rank (the average relative rank of each method, from 1 to 4), and Safety (the average user rating of safety, from 1 to 5).
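The statistical treatment referenced in the Figure 5 caption (two-way repeated-measures ANOVA, Tukey HSD, Bonferroni correction) could be set up roughly as follows. This is not the authors' analysis code: the file name, column layout, and use of pandas/statsmodels are assumptions for illustration, and the actual significance testing details are in the paper's Appendix [E](#A5).

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical long-format table: one row per evaluated trajectory, with
# columns user, method (F / E / C / CE), pose, and a 1-5 comfort rating.
ratings = pd.read_csv("comfort_ratings.csv")

# Two-way repeated-measures ANOVA with method and food pose as within-subject
# factors; ratings from the same user are not treated as independent.
anova = AnovaRM(
    ratings, depvar="comfort", subject="user",
    within=["method", "pose"], aggregate_func="mean",
).fit()
print(anova)

# Post-hoc pairwise comparisons between methods, averaging over poses first.
# A Bonferroni-style correction would further tighten the threshold.
per_user = ratings.groupby(["user", "method"], as_index=False)["comfort"].mean()
print(pairwise_tukeyhsd(per_user["comfort"], per_user["method"]))
```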
### V-B Results

The ease rating, comfort rating, and approach rank are summarized in Fig. [5](#S5.F5). Importantly, our real world evaluation pipeline (Appendix [C](#A3)) was perceived as safe by users regardless of food geometry or method, achieving an average Safety rating of 4/5. Despite the limited sample size, our method (CE) significantly outperforms the fixed baseline (F) for all three metrics (example in Fig. [3](#S4.F3)), and consistently outperforms comfort-only (C) and efficiency-only (E), significantly so in Rank ratings. Additionally, efficiency-only (E) did not perform as well as comfort-only in Comfort ratings. This supports our hypothesized connection between user comfort perception and our comfort model (Eq. ([11](#S4.E11))). The data are consistent with our hypothesis that, while optimizing over individual metrics (C and E) provides some improvement over the baseline, joint optimization performs even better in creating trajectories robust to real-world variation. We suspect that this is because the space of possible trajectories is large: maximizing only comfort places no guarantee on efficiency, and vice versa.

Qualitatively, our comfort model's sensitivity to objects above mouth level fits with user expectations. When asked about low-ranked trajectories, users stated that they believed “*the robot should have approached from underneath,*” or that they “*didn’t like when [the robot] came up close to [their] face.*” Users were more likely to instinctively move backwards when approached from above, near the face, and to lean in when approached from below. In Fig. [6](#S5.F6), we visualize a sample real world trajectory produced by each method for a Vertical pose. FixedPose is neither maximally efficient (the carrot only partially fits in the mouth) nor comfortable (the robot is too close to the face). Common quantitative metrics like time and path length are not as informative in gauging comfort, so we limit our evaluation to these qualitative metrics. Appendix [E](#A5) elaborates on quantitative and qualitative metrics and outlines how our approach naturally extends to the multi-bite setting with sample real world evaluations.

VI Discussion
--------------

Summary. We present an approach based on motion planning for bite transfer under a continuous space of possible acquisition angles. During planning, we narrow down the solution space of possible trajectories into the mouth with an awareness of both bite efficiency and user comfort. Our user study demonstrates that considering comfort and efficiency jointly produces significantly more preferable trajectories than a fixed pose baseline. Furthermore, our method with comfort and efficiency consistently outperforms considering only comfort or only efficiency.

Limitations.
One limitation of our method is the assumption that the mouth can be represented by a rigid elliptical tube, and that the food item is also rigid. In reality, the human mouth and the food item can both be deformable, which expands the set of “collision-free” paths into the mouth. Furthermore, our user study only involved six non-disabled users due to Covid-19-related policies. In future work we plan to evaluate with more users, including users with mobility impairments. However, we are excited that, even with the given sample size, our method improves on the state of the art with statistical significance.

Acknowledgements
----------------

This work is funded by NSF Award Numbers 2132847 and 2006388, and by the Office of Naval Research.
b62bd5cc-39cc-4c6e-8d3a-adf7830f21b6
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Metauncertainty

**Response to:** [When (Not) To Use Probabilities](http://www.overcomingbias.com/2008/07/when-not-to-use.html)

“It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought.” —E. T. Jaynes

The uncertainty due to vague (non-math) language is no different from uncertainty by way of "randomizing" something (after all, [probability is in the mind](http://www.overcomingbias.com/2008/03/mind-probabilit.html)). The principle still holds; you should be able to come up with a better way of doing things if you can put in the extra thought. In some cases, you can't afford to waste time or it's not worth the thought, but when dealing with things such as deciding whether to run the LHC or signing up for cryonics, there's time, and it's sorta a big deal, so it pays to do it right.

If you're asked "how likely is X?", you can answer "very unlikely" or "0.127%". The latter may give the impression that the probability is known more precisely than it is, but the first is too vague; both strategies do poorly on the [log score](http://yudkowsky.net/rational/technical). If you are unsure what probability to state, state this with... another probability distribution. "My probability distribution over probabilities is an exponential with a mean of 0.127%" isn't vague, it isn't overconfident (at the meta^1 level), and gives you numbers to actually bet on.

The expectation value of the metaprobability distribution (integral from 0 to 1 of Pmeta\*p\*dp) is equal to the probability you give when trying to maximize your expected log score. To see this, we write out the expected log score (integral from 0 to 1 of Pmeta\*(p\*log(q)+(1-p)log(1-q))dp). If you split this into two integrals and pull out the terms that are independent of p, the integrals just turn into the expectation value of p, and the formula is now that of the log score with p replaced by mean(p). We already know that the log score is maximized when q = p, so in this case we set q = mean(p).

This is a very useful result when dealing with extremes where we are not well calibrated. Instead of punting and saying "err... prolly aint gonna happen", put a probability distribution on your probability distribution and take the mean. For example, if you think X is true, but you don't know if you're 99% sure or 99.999% sure, you've got to bet at ~99.5%. This is still no guarantee that you'll be right 99.5% of the time (by assumption we're not calibrated!), but you can't do any better given your metaprobability distribution.

You're not saying "99.5% of the time I'm this confident, I'm right". You're just saying "I expect my log score to be maximized if I bet on 99.5%". The former implies the latter, but the latter does not (necessarily) imply the former.

This method is much more informative than "almost sure", and gives you numbers to act on when it comes time to "shut up and multiply". Your first set of numbers may not have "come from numbers", but the ones you quote now do, which is an improvement. Theoretically this could be taken up a few steps of meta, but once is probably enough.

Note: [Anna Salamon's comment](/lw/3j/rationality_cryonics_and_pascals_wager/69t#comments) makes this same point.
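As a small numerical illustration of "take the mean of your metaprobability distribution and bet on it", here is a hedged sketch. The Beta(2, 8) density, grid resolution, and rounding are illustrative assumptions rather than anything from the post (whose own example was an exponential with mean 0.127%); the point is only to check numerically that the stated probability maximizing the expected log score is the mean of the metaprobability distribution.

```python
import numpy as np

# Hypothetical metaprobability distribution over p: a Beta(2, 8) density,
# discretized on a grid over (0, 1). Any density would work the same way.
p = np.linspace(1e-6, 1 - 1e-6, 100_001)
dp = p[1] - p[0]
meta = p * (1 - p) ** 7                # unnormalized Beta(2, 8) density
meta /= (meta * dp).sum()              # normalize so it integrates to 1

# The probability to bet on: the mean of the metaprobability distribution.
p_bet = (meta * p * dp).sum()

def expected_log_score(q):
    """Expected log score of stating q, averaged over the meta distribution."""
    return (meta * (p * np.log(q) + (1 - p) * np.log(1 - q)) * dp).sum()

# Numerically confirm that betting q = mean(p) maximizes the expected log score.
qs = np.linspace(0.001, 0.999, 999)
best_q = qs[np.argmax([expected_log_score(q) for q in qs])]
print(round(p_bet, 3), round(best_q, 3))   # both come out near 0.2, the Beta(2, 8) mean
```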
1cebce4f-a094-4ab7-bd76-40232dd7331b
StampyAI/alignment-research-dataset/agisf
AGI Safety Fund
Syllabus: Artificial Intelligence and International Security Remco Zwetsloot July 2018 v1 Syllabus: Artificial Intelligence and International Security 1 Audiences and use. ​ This syllabus covers material located at the intersection between artificial intelligence (AI) and international security. The syllabus can be used in structured self-study (or group-study) for those new to this space, or as a resource for instructors designing class-specific syllabi (which would probably have to be significantly shorter). It is designed to 2 be useful to (a) people new to both AI and international relations (IR); (b) people coming from AI who are interested in an IR angle on the problems; (c) people coming from IR who are interested in working on AI. Depending on which of groups (a)-(c) you fall in, it may be feasible to skip or skim certain sections. For sections that you are particularly interested in, do consider diving into the sources cited in the readings—for most topics what I have assigned just skims the surface and is intended only as a starting point. Focus. ​ The syllabus grew out of an intensive two-week research bootcamp I organized at Yale University in May 2018. The bootcamp was focused on questions about arms control and arms race dynamics, and these topics are thus the main focus below. Relevant international security-related topics that are somewhat absent are international economic competition, domestic industrial policy, domestic political dynamics more broadly, and long-term international governance questions. The readings are also heavily skewed towards a Western perspective. 3 Future versions (or separate syllabi) will hopefully address these gaps—please contact me if you have suggestions. I intend to update the document every few months. Organization. ​ Sections 1 and 2 lay the empirical and theoretical foundation for tackling the narrower topics and questions addressed in Section 3. To help people orient themselves, each section and subsection includes some contextual notes and some questions that one can keep in mind while going through the readings. Where it is not obvious, the notes also clarify the relationship between the different sections. 1 Please send comments to ​ remcozwetsloot@gmail.com ​ . For recommendations and feedback, thanks go to Miles Brundage, Allan Dafoe, Jade Leung, and Matthijs Maas. Special thanks to Will Hunt and Mojmir Stehlik, who participated in the bootcamp and who helped compile the readings. 2 I do assume a basic familiarity with artificial intelligence. More introductory resources can be found ​ here ​ . 3 All of these topics are at least briefly discussed in Allan Dafoe’s (forthcoming) Research Landscape. 1 Remco Zwetsloot July 2018 v1 1. Artificial Intelligence and International Security 3 A. AI Trends and Strategies 3 i. Forecasting and Mapping AI Development 3 ii. Country Strategies 4 2. Theoretical Background 6 A. International Relations Frameworks 6 B. Relevant Strategic Concepts 6 i. Bargaining 7 ii. Verification and Enforcement 7 iii. Communication: Signaling and Perception 7 iv. Deterrence and Assurance 8 v. The Offense-Defense Balance 8 vi. Norms, Institutions, and Regimes 9 3. Topics 10 A. Arms Control 10 i. Arms Control and Artificial Intelligence 10 ii. The History of Arms Control 11 iii. The (U.S.) Politics of Arms Control 11 iv. The Role of Ideas, Scientists, and Experts 12 v. The Intellectual History of Arms Control Thinking 12 B. Race Dynamics 13 i. The Idea of an Artificial Intelligence Race 13 ii. 
International Relations: Arms Races 13 iii. International Relations: The Diffusion of Technology, Strategy, and Arms 14 iv. International Relations: Diffusion, Development, and Conflict 14 v. International Relations: Case Studies 15 vi. Economics: Race and Innovation Models 15 vii. Economics: Case Studies 16 C. Technological Analogies 17 i. Dual-Use Technology 18 ii. General Purpose Technology 18 iii. Nuclear 18 iv. Cyber 19 v. Biotechnology 19 D. Government and Technology 20 2 Remco Zwetsloot July 2018 v1 1. Artificial Intelligence and International Security The set of readings in this section present an overview of current thinking on how AI could affect short- to medium-term international security dynamics. They will serve as the best starting point for most investigations of security-related questions; especially relevant parts of these sources will also be referred to in other sections below. ● Allen, G. & Chen, T. (2017), “Artificial Intelligence and National Security,” ​ Harvard Belfer Center ​ [ ​ PDF ​ ] ● Horowitz, M. (2018) “Artificial Intelligence, International Competition, and the Balance of Power,” ​ Texas National Security Review ​ [ ​ link ​ ] [ ​ PDF ​ ] ● Dafoe, A. (forthcoming), “AI Governance Research Landscape,” especially the section “International Security” ● Brundage, M., Avin, S., et al (2018), “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” [ ​ PDF ​ ] ● Danzig, R. (2018), “Technology Roulette: Managing Loss of Control as Many Militaries Pursue Technological Superiority,” ​ CNAS Report ​ [ ​ PDF ​ ] ● Scharre, P. (2018), ​ Army of None: Autonomous Weapons and the Future of War ​ , entire book ● Hoadley, D.S. & Lucas, N. J. (2018), “Artificial Intelligence and National Security,” Congressional Research Service ​ [ ​ PDF ​ ] ● Bostrom, N. (2013), ​ Superintelligence: Paths, Dangers, Strategies ​ , chs. 5, 11, 14 ● Lieber, K. A. & Press, D. G. (2017), “The New Era of Counterforce: Technological Change and the Future of Nuclear Deterrence,” ​ International Security ​ 41:4 [ ​ link ​ ] [ ​ PDF ​ ] A. AI Trends and Strategies This subsection serves to underscore the emerging centrality of artificial intelligence to the international security domain, and to provide some basic background on questions that are relevant for international competition and cooperation. ​ These readings can safely be skipped initially ​ , although it may be interesting to return to them once one has gotten more of a feel for how these questions affect our thinking on arms control and race dynamics (Section 3). i. Forecasting and Mapping AI Development The likelihood and intensity of AI-driven international competition or cooperation will depend on how fast AI technology advances, who the leading actors are, and so forth. A basic introduction to these questions and relevant references can be found in: 3 Remco Zwetsloot July 2018 v1 ● Dafoe, A. (forthcoming), “AI Governance Research Landscape,” section “Technical Landscape” Another salient factor when thinking about international competition and cooperation is the distribution of AI-related efforts, as these may shape actors’ interests in or resistance to attempts limit or speed up certain kinds of capabilities. Examples of work on this set of questions includes: ● Baum, S. (2017), “A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy” [ ​ PDF ​ ] ● Boulanin, V. 
(2016), “Mapping the Innovation Ecosystem Driving the Advance of Autonomy in Weapon Systems,” ​ SIPRI Report ​ [ ​ PDF ​ ] ● Boulanin, V. & Verbruggen, M. (2017), “Mapping the Development of Autonomy in Weapon Systems,” ​ SIPRI Report ​ [ ​ PDF ​ ] ii. Country Strategies Several states have now adopted something akin to a national AI strategy, most of which include military components. For a good overview, see ​ here ​ ; the below highlights some of the main geopolitically relevant actors. The following are good introductions to ​ China ​ ’s activities: 4 ● China State Council (2017), “A Next Generation Artificial Intelligence Development Plan,” translated by ​ New America ​ [ ​ PDF ​ ] ● Congressional Research Service (2018), “Artificial Intelligence and National Security,” pp. 17-21 [ ​ PDF ​ ] ● Ding, J. (2018), “Deciphering China’s AI Dream: The Context, Components, Capabilities, and Consequences of China’s Strategy to Lead the World in AI,” Future of Humanity Institute ​ [ ​ PDF ​ ] ● Kania, E. (2017), “Battlefield Singularity: Artificial Intelligence, Military Revolution, and China’s Future Military Power,” ​ CNAS Report ​ [ ​ PDF ​ ] Most of the focus of the readings in the syllabubs focus on the ​ United States ​ . Some additional insight can be gained from official government reports (although these date from the Obama administration; it is still unclear what the Trump administration’s approach will be, see e.g. ​ here ​ and ​ here ​ ): ● White House NSTC (2016), “The National Artificial Intelligence Research and Development Strategic Plan” [ ​ PDF ​ ] ● White House NSTC (2016), “Preparing for the Future of Artificial Intelligence” [ ​ PDF ​ ] Russia ​ appears to be investing in some AI-related areas (e.g. robotics, cyber security), though not on a scale comparable to the US or China: 4 Those interested in China might want to subscribe to Jeff Ding’s ​ ChinAI newsletter ​ . 4 Remco Zwetsloot July 2018 v1 ● Congressional Research Service (2018), “Artificial Intelligence and National Security,” pp. 21-22 [ ​ PDF ​ ] There have also been several some efforts to lay out strategies in ​ Europe ​ , including in France and at the EU level (for more countries’ strategies, see the page linked above): ● Villani (2018), “For a Meaningful Artificial Intelligence: Towards a French and European Strategy” [ ​ PDF ​ ] ● European Commission High-Level Group on Artificial Intelligence [ ​ link ​ ] 5 Remco Zwetsloot July 2018 v1 2. Theoretical Background A. International ​ ​ Relations Frameworks There are many different perspectives on international security out there, and it is not feasible to dive into all of them here. For two useful overarching frameworks (which also helpfully discuss 5 the history of security-related debates), see: ● Fearon, J. D. (2018), “Cooperation, Conflict, and the Costs of Anarchy,” ​ International Organization ​ [ ​ link ​ ] [ ​ PDF ​ ] ● Glaser, C. (2010), ​ Rational Theory of International Politics: The Logic of Competition and Cooperation ​ [ ​ link to chapters ​ ] [ ​ PDF ​ ] ○ Fearon (2011), “Two States, Two Types, Two Actions,” ​ Security Studies ​ 20:3 [ ​ link ​ ] [ ​ PDF ​ ], includes a good (brief) summary and discussion of where his framework differs from and overlaps with Glaser’s security dilemma-focused framework. Both of these perspectives, and many of the readings recommended below, draw—both formally and informally—on game-theoretic ideas. For good introductions to these kinds of ideas, see: ● Kydd, A. 
(2015), ​ International Relations Theory: The Game-Theoretic Approach ● Lake, D. A. & Powell, R. (eds.) (1999), ​ Strategic Choice and International Relations B. Relevant Strategic Concepts The literature on international security is enormous, but many debates ​ ​ center around a relatively small number of strategic concepts. This subsection lists introductory readings for those concepts that have been most central to the arms control and race dynamics literatures discussed below (Section 3), as well as some readings that apply the strategic concepts to relevant domains (e.g. verification in arms control, offense-defense in cyber). Many of the concepts overlap somewhat, and many will have come up in the readings in Section 2A, but exploring the same concept from different angles is generally helpful for consolidating one’s understanding. Almost none of these readings mention artificial intelligence, but ​ — ​ at least if one believes that reaping the full benefits and avoiding the catastrophic risks associated with the development of artificial intelligence will require international cooperation ​ — ​ they are highly relevant. While going through the readings, consider occasionally pausing to think about how these strategic 5 Those looking for broader introductions to international relations and international security could consider looking through some (graduate) course syllabi, many of which are available online. 6 Remco Zwetsloot July 2018 v1 problems are likely to manifest themselves in the context of (attempted) cooperation or conflict over AI and its applications, drawing on the discussions of AI in Section 1. i. Bargaining Bargaining situations are those where actors have something to gain from cooperating (the situation is not “zero-sum”) but where there are also multiple possible outcomes that favor one side more than the other (the situation is not one of “pure coordination”). Bargaining is an important part of the story of most, if not all, instances of significant international cooperation and conflict. ● Fearon, J. D. (1998), “Bargaining, Enforcement, and International Cooperation,” International Organization ​ 52:2 [ ​ link ​ ] [ ​ PDF ​ ] ● Powell, R. (2002), “Bargaining Theory and International Conflict,” ​ Annual Review of Political Science ​ 5 [ ​ link ​ ] [ ​ PDF ​ ] ● Powell, R. (2006), “War as a Commitment Problem,” ​ International Organization 60:1 [ ​ link ​ ] [ ​ PDF ​ ] For those interested in more in-depth reading on bargaining, Schelling’s seminal book and a more technical (economics-focused) textbook are good starting points: ● Schelling, T. C. (1960), ​ The Strategy of Conflict ​ [ ​ PDF ​ ] ● Muthoo, A. (1999), ​ Bargaining Theory with Applications ​ [ ​ link to chapters ​ ] [ ​ PDF ​ ] ii. Verification and Enforcement Whether cooperation can emerge is partially dependent on how difficult it is to observe and sanction compliance with the cooperative arrangement. If observing compliance is impossible, cooperative arrangements often do not emerge at all. But even when monitoring compliance is technically possible, there are often costs associated with verification that may prevent or erode cooperation. The strategic dynamics around enforcement are thus important to understand. ● Schelling, T. C. & Halperin, M. H. (1962), ​ Strategy and Arms Control ​ , chs. 9 (“Inspection and Information”) and 10 (“Regulating an Agreement”) ● Coe, A. & Vaynman, J. (2017), “The Tragedy of Arming,” working paper [ ​ PDF ​ ] ● Dai, X. 
(2002), “Information Systems in Treaty Regimes,” ​ World Politics ​ 54:5 [ ​ link ​ ] [ ​ PDF ​ ] Some longer, more empirically-driven discussions of verification can be found in: ● Busch, N. E. & Pilat, J. F. (2017), ​ The Politics of Weapons Inspections: Assessing WMD Monitoring and Verification Regimes ● Gallagher, N. W. (2003), ​ The Politics of Verification iii. Communication: Signaling and Perception Prominent among the factors that often hamper international cooperation is the difficulty of communication—or, more precisely, credible communication. Attempts to communicate may be explicit or implicit, public or private, successful or unsuccessful. 7 Remco Zwetsloot July 2018 v1 Understanding how attempts to communicate (both on the sending, or “signaling,” side and on the receiving, or “perceiving,” side) play out in the real world has been a central topic in international security. ● Jervis, R. (2002), “Signaling and Perception: Drawing Inferences and Projecting Images,” ch. 16 in ​ Handbook of Political Psychology ​ [ ​ link ​ ] ● Trager, R. F. (2016), “The Diplomacy of War and Peace,” ​ Annual Review of Political Science ​ 19 [ ​ link ​ ] [ ​ PDF ​ ] ● O’Neill, B. (2018), “International Negotiation: Some Conceptual Developments,” Annual Review of Political Science ​ 21 [link] [PDF] iv. Deterrence and Assurance In bargaining situations, actors often face competing incentives: on the one hand they want to convince the other side(s) that they will not cede ground (i.e. that they are “resolved”), but on the other hand they also do not want their aims to be perceived as unlimited lest negotiations break down definitively (i.e. they want to “assure”). This tension is an important part of most international cooperation and conflict, and is thus likely to surface in the context of artificial intelligence as well. ● Jervis, R. (1976), “Deterrence, the Spiral Model, and Intentions of the Adversary,” ch. 3 in ​ Perception and Misperception in International Politics ​ [ ​ PDF ​ ] ● Kydd, A. & McManus, R. W. (2017), “Threats and Assurances in Crisis Bargaining,” ​ Journal of Conflict Resolution ​ 61:2 [ ​ link ​ ] [ ​ PDF ​ ] Further related discussion (e.g. of the difference between “deterrence” and “compellence”) can be found in: ● Schelling, T. C. (1966), ​ Arms and Influence ​ , especially ch. 2 ​ ​ (“The Art of Commitment”) [ ​ PDF ​ ] v. The Offense-Defense Balance A prominent idea in international security says that many technologies have properties that tend to make them favorable to either the attacking or the defending side, should conflict break out. Cooperation is generally thought to be more difficult when the offense has the advantage. How artificial intelligence and its applications are likely to shape the offense-defense balance in different domains is thus an important question (Garfinkel & Dafoe discuss this explicitly in the context of cyber). ● Garfinkel, B. & Dafoe, A. (2018), “How Does the Offense-Defense Balance Scale?” [ ​ PDF ​ ] ● Glaser, C. L. & Kaufmann, C. (1998), “What Is the Offense-Defense Balance and Can We Measure It?” ​ International Security ​ 22:4 [ ​ link ​ ] [ ​ PDF ​ ] [ ​ published responses ​ ] ● Jervis, R. (1978), “Cooperation under the Security Dilemma,” ​ World Politics ​ 30:2 [ ​ link ​ ] [ ​ PDF ​ ] ○ See also the Fearon (2018) reference from Section 2A on how the offense-defense balance matters in a different paradigm from Jervis’s. 
8 Remco Zwetsloot July 2018 v1 More in-depth discussions and two recent applications to cyber security can be found in: ● Brown, M. E. et al (eds.) (2004), ​ Offense, Defense, and War ​ [ ​ link ​ ] [ ​ ToC ​ ] ● Buchanan, B. (2017), ​ The Cybersecurity Dilemma: Hacking, Trust, and Fear Between Nations ​ , Introduction and ch. 5 ● Slayton, R. (2017), “What Is the Cyber Offense-Defense Balance? Conceptions, Causes, and Assessment,” ​ International Security ​ 41:3 [ ​ link ​ ] [ ​ PDF ​ ] vi. Norms, Institutions, and Regimes Cooperation can take many forms. Informal cooperation often functions through norms, while formal cooperation can involve single or multiple interlocking treaties and institutions that, beyond some level of complexity, tend to be referred to as “regimes.” Each form of cooperation has upsides and downsides, and a significant body of work investigates when and how these different forms emerge and which circumstances call for which form. ● Barrett, S. (2007), ​ Why Cooperate? The Incentive to Supply Global Public Goods [ ​ link to chapters ​ ] ● Koremenos, B., Lipson, C. & Snidal, D. (2001), “The Rational Design of International Institutions,” ​ International Organization ​ 55:4 [ ​ link ​ ] [ ​ PDF ​ ] ○ Further discussion and empirical applications of this theory can be found in the eponymous book [ ​ link to chapters ​ ] ● Morrow, J. D. (2002), “The Laws of War, Common Conjectures, and Legal Systems in International Politics,” ​ Journal of Legal Studies ​ 31:S1 ​ ​ [ ​ link ​ ] [ ​ PDF ​ ] ○ A detailed account of the theory that situates it within IR theory more explicitly can be found in ​ Order Within Anarchy ​ (2014) [ ​ link to chapters ​ ] An in-depth review of the social science literature on norms, along with an application to cyber security, is in: ● Finnemore, M. & Hollis, D. B. (2016), “Constructing Norms for Global Cybersecurity,” ​ American Journal of International Law ​ [ ​ link ​ ] 9 Remco Zwetsloot July 2018 v1 3. Topics You should now have an understanding of the most security-relevant properties of AI (Section 1) and a basic handle on some central theories and concepts in international security (Section 2). With this as background, we can now zoom in on set of narrower topics and questions that are likely to be relevant in most future discussions of AI and international security. The sections below are arranged somewhat arbitrarily—they are obviously related, but each can also be read in isolation, and one need not follow any particular order. A. Arms Control Although it is somewhat old, Schelling & Halperin still is the best and most readable introduction to many of the strategic dimensions of arms control: ● Schelling, T. C. & Halperin, M. H. (1962), ​ Strategy and Arms Control Note that, as many of the readings below emphasize, arms control is defined differently by different people. Some emphasize reducing the number of weapons and the scope of capabilities, while others (like Schelling & Halperin) consider any measure -- including arms purchases -- that is likely to induce mutual constraint an instance of “arms control” (the goal is often said to be the achievement of “strategic stability”). As emphasized in Section 2B, most of the readings do not mention AI, and while often the (in)applicability of past work on arms control to possible future AI dynamics will be obvious, sometimes one will have to exercise a bit of imagination. 
It may be worth going back and forth between the readings in this section and Section 3C (“Technological Analogies”), since questions of whether and how we can draw lessons from historical episodes is central to both. i. Arms Control and Artificial Intelligence Not much thinking has been done at the intersection of arms control and artificial intelligence yet, but some good work is currently emerging. One angle from which people work on this intersection is to think about whether arms control on near-term AI applications (primarily lethal autonomous weapons [LAWs]) is desirable and feasible: ● Scharre (2018), ​ Army of None: Autonomous Weapons and the Future of War ​ , Part VI, especially ch. 20 (“The Pope and the Crossbow”) ● Crootof, R. (2015), “The Killer Robots Are Here: Legal and Policy Implications,” Cardozo Law Review ​ 36, mainly Parts III and IV [ ​ PDF ​ ] A second angle is to consider how developments in AI are affecting the prospects for arms control in other domains (primarily nuclear, at least thus far): ● Geist, E. G. & Lohn, A. J. (2018), “How Might Artificial Intelligence Affect the Risk of Nuclear War?” ​ RAND Corporation ​ [ ​ link ​ ] [ ​ PDF ​ ] ● Lieber, K. A. & Press, D. G. (2018), “The End of Nuclear Arms Control” [ ​ PDF ​ ] 10 Remco Zwetsloot July 2018 v1 ii. The History of Arms Control Strategic thinking on arms control was often in flux during the Cold War, and the post-Cold War period has similarly seen many changes. Part of this is due to the waxing and waning of political power of various domestic coalitions (see also subsection iii.), while other changes are the result of shifts in the international system (e.g. the number and type of relevant actors) and the accessibility of various kinds of potentially harmful technologies (with a broad trend toward the proliferation of capabilities). The following readings provide good introductory accounts of how and why arms control policy changed since WWII: ● Miller, S. E. (2003), “Skepticism Triumphant: The Bush Administration and the Waning of Arms Control,” address to the International Pugwash Movement [ ​ PDF ​ ] ○ While focused on the early Bush administration, most of the points are still a fairly accurate description of attitudes in a significant part of the DC establishment today. ● Schelling, T. C. (1985), “What Went Wrong with Arms Control,” ​ Foreign Affairs [ ​ link ​ ] ○ Focuses (like Trachtenberg) on the theory underlying successful Cold War arms control initiatives, and how it came to be abandoned. ● Trachtenberg, M. (1991), “The Past and Future of Arms Control,” ​ Daedelus ​ 120:1 [ ​ link ​ ] [ ​ PDF ​ ] Decent book-length overviews, each with a somewhat different lens, are: ● Chevrier, M. I. (2012), ​ Arms Control Policy: A Guide to the Issues ● Colby, E. A. & Gerson, M. S. (eds.) (2013), ​ Strategic Stability: Contending Interpretations ​ [ ​ PDF ​ ] ● Croft, S. (1996), ​ Strategies of Arms Control: A History and Typology ● Kearn, D. W. (2015), ​ Great Power Security Cooperation: Arms Control and the Challenge of Technological Change Encyclopedia-style overviews of historical arms control agreements can be found in: ● Burns (2009), ​ The Evolution of Arms Control: From Antiquity to the Nuclear Age ● Goldblat, J. (2002), ​ Arms Control: The New Guide to Negotiations and Agreements ​ , 2nd ed. iii. The (U.S.) Politics of Arms Control Domestic politics affects all aspects of arms control. 
Two actors that both support or oppose the same arms control agreement can do so for very different reasons, many of which are not directly related to the international consequences of the agreement. Political factors also often affect how well-positioned certain actors are to achieve arms control. The following readings touch on these and other domestic dynamics: ● Maurer, J. (2018), “The Purposes of Arms Control” [non-public] [ ​ blog version ​ ] ● Miller, S. E. (1984), “Politics over Promise: Domestic Impediments to Arms Control,” ​ International Security ​ 8:4 [ ​ link ​ ] [ ​ PDF ​ ] 11 Remco Zwetsloot July 2018 v1 ● Gallagher, N. W. (2015), “Re-thinking the Unthinkable: Arms Control in the Twenty-first Century,” ​ The Nonproliferation Review ​ 22:3, mainly pp. 269-284 [ ​ PDF ​ ] ● Kreps, S. E., Saunders, E. N. & Schultz, K. A. (2017), “The Ratification Premium: Hawks, Doves, and Arms Control,” ​ World Politics ​ [ ​ PDF ​ ] iv. The Role of Ideas, Scientists, and Experts Most work on arms control emphasizes the importance of technological factors (e.g. verification methods) or structural variables (e.g. the distribution of power). Another strand of readings, however, focuses on the role played by individuals and groups in affecting arms control dynamics. Given the central role that scientists and industry are likely to play in efforts at mutual restraint in AI, this set of readings is likely to have some relevance for the AI domain. ● Adler, E. (1992), “The Emergence of Cooperation: National Epistemic Communities and the International Evolution of the Idea of Arms Control,” International Organization ​ [ ​ link ​ ] [ ​ PDF ​ ] ● Barth, K.-H. (2003), “The Politics of Seismology: Nuclear Testing, Arms Control, and the Transformation of a Discipline,” ​ Social Studies of Science ​ [ ​ link ​ ] [ ​ PDF ​ ] ● Greene, B. P. (2015), “‘Captive of a Scientific-Technological Elite’: Eisenhower and the Nuclear Test Ban,” ​ Presidential Studies Quarterly ​ 45:1 [ ​ link ​ ] [ ​ PDF ​ ] ● Grace, K. (2015), “Leo Szilard and the Danger of Nuclear Weapons: A Case Study in Risk Mitigation,” ​ MIRI Technical Report ​ [ ​ PDF ​ ] Book-length treatments and case histories of this topic are in: ● Evangelista, M. (1999), ​ Unarmed Forces: The Transnational Movement to End the Cold War ● Hymans, J. E. C. (2012), ​ Achieving Nuclear Ambitions: Scientists, Politicians, and Proliferation ​ [ ​ PDF ​ ] ● Ouagrham-Gormley, S. B. (2014), ​ Barriers to Bioweapons: The Challenges of Expertise and Organization for Weapons Development ​ [ ​ PDF ​ ] ○ There is also a shorter paper (2012), “Barriers to Bioweapons: Intangible Obstacles to Proliferation,” ​ International Security ​ 36:4 [ ​ link ​ ] [ ​ PDF ​ ] ● Bridger, S. (2015), ​ Scientists at War: The Ethics of Cold War Weapons Research [ ​ PDF ​ ] v. The Intellectual History of Arms Control Thinking It can take a long time for strategic and societal thinking about a technology to coalesce into coherent frameworks, and the same is likely to be true for AI. Most work looking at how this process played out historically has focused on nuclear technology. Good examples of this kind of work are: ● Miller, S. E. (2017), “Cyber Threats, Nuclear Analogies? Divergent Trajectories in Adapting to New Dual-Use Technologies,” in Perkovich & Levite, ​ Understanding Cyber Conflict: 14 Analogies ​ [ ​ PDF ​ ] 12 Remco Zwetsloot July 2018 v1 ● Sims, J. (1990), ​ Icarus Restrained: An Intellectual History Of Nuclear Arms Control, 1945-1960 B. 
Race Dynamics News articles about AI often use the term “race” to characterize the large amounts of investment that companies and governments are making into AI. There is not a great deal of research on racing in AI directly (see subsection i.), but the general competitive dynamic that people generally refer to as “racing” has received attention in multiple literatures. Below, we focus on several aspects of the literatures on racing in IR (subsections ii.-v.) and in economics (subsections vi.-vii.), which could plausibly inform our thinking on racing in AI. At the same time, there are several clear disanalogies between racing in these two domains and a possible race in AI, which are highlighted in the different subsections below. i. The Idea of an Artificial Intelligence Race The idea of an “arms race” in AI is so commonplace that it even has its very own Wikipedia page ​ . Despite this, there is not (yet) a great deal of research on what the causes and consequences of a race -- of the “arms” or “non-arms” variety -- might be, or if one is actually taking place. The following papers present early work in this direction, though none go very deep: ● ** ​ Armstrong, S., Bostrom, N. & Shulman, C. (2016), “Racing to the Precipice: A Model of Artificial Intelligence Development,” ​ AI & Society ​ 31:2 [ ​ link ​ ] [ ​ PDF ​ ] ● * ​ Cave, S. & ÓhÉigeartaigh, S. S. (2017), “An AI Race for Strategic Advantage: Rhetoric and Risks” [ ​ PDF ​ ] ● * ​ Geist, E. M. (2016), “It’s Already too Late to Stop the AI Arms Race—We Must Manage It Instead,” ​ Bulletin of the Atomic Scientist ​ [ ​ link ​ ] [ ​ PDF ​ ] Many of the introductory readings in Section 1 also discuss racing, at least in passing. After one has read some of the sections below, it may also be useful to return to the take-off scenario pieces in Section 1A and think about how a take-off scenario could influence pre-take-off race dynamics. ii. International Relations: Arms Races Arms races have long been seen as an important part of international relations, although less attention has been paid to the concept since the end of the Cold War. The central strategic tension that drives most thinking about arms races is that, on the one hand, arming is necessary for security, but on the other, arming is expensive and takes away resources from other areas people and governments want to invest in. Plausibly, this tension need not exist in AI (for at least some applications), given that investments can yield commercial returns and thereby grow the amount of money available for spending on other things instead of shrinking it. With this caveat in mind, however, there are still 13 Remco Zwetsloot July 2018 v1 likely to be aspects of IR race thinking that are applicable to AI. Good starting points for the IR literature are: ● Fearon, J. D. (2011), “Arming and Arms Races” [ ​ PDF ​ ] ● Glaser, C. (2000), “The Causes and Consequences of Arms Races,” ​ Annual Review of Political Science ​ 3 ​ ​ [ ​ PDF ​ ] ● Koubi, V. (1999), “Military Technology Races,” ​ International Organization ​ 55:3 [ ​ link ​ ] [ ​ PDF ​ ] A recent brief discussion (with limited empirics) in the context of cyber can be found in: ● Craig, A. & Valeriano, B. (2016), “Conceptualizing Cyber Arms Races,” ​ ICCC [ ​ PDF ​ ] iii. International Relations: The Diffusion of Technology, Strategy, and Arms One factor that usually affects race dynamics is how durable a technological lead or the advantage one gains from an innovation is likely to be. The strategic considerations are complicated. 
If innovations diffuse quickly, for example, this might decrease the incentive to invest in new innovations, but it may also make a race more competitive by preventing any side from gaining a large lead. Technological questions, moreover, are only part of the picture. For instance, the US Defense Innovation Board recently ​ concluded ​ that the DoD “does not have an innovation problem; it has an innovation adoption problem,” pointing to bureaucracy rather than technical limitations as an obstacle to the integration of emerging technologies. Good starting points in the literature on diffusion are: ● Horowitz, M. (2010), ​ The Diffusion of Military Power: Causes and Consequences for International Politics ​ [ ​ link to chapters ​ ] ● Goldman, E. O. & Eliason, L. C. (eds.) (2003), ​ The Diffusion of Military Technology and Ideas ​ , especially Part IV ○ A forthcoming book that is likely to be relevant as well is Lindsay, J., Shifting the Fog of War: Information Technology and Military Power ​ [ ​ link ​ ] A good summary of the literature on nuclear (non)proliferation specifically is: ● Debs, A. & Monteiro, N. P. (2017), “Conflict and Cooperation on Nuclear Nonproliferation,” ​ Annual Review of Political Science ​ 20 [ ​ link ​ ] [ ​ PDF ​ ] For those interested in this strategic angle, a deeper dive into a specific case is: ● Gormley, D. M. (2008), ​ Missile Contagion: Cruise Missile Proliferation and the Threat to International Security iv. International Relations: Diffusion, Development, and Conflict One possible source of AI risk (if probably a distant one) is that the prospect of technological diffusion and development lead to a preventive strike—even if it is not clear whether or when an actor will obtain a particular capability, other actors can decide that intervening today to eliminate the possibility of a power shift is worth the cost of conflict. 14 Remco Zwetsloot July 2018 v1 The following papers discuss some of the important strategic dynamics in such scenarios: ● Buchanan, B. (2017), ​ The Cybersecurity Dilemma, ​ ch. 6 (“Information Distribution and the Status Quo”) ● Coe, A. J. (2018), “Containing Rogues: A Theory of Asymmetric Arming,” ​ Journal of Politics ​ [ ​ PDF ​ ] ● Debs, A. & Monteiro, N. P. (2014), “Known Unknowns: Power Shifts, Uncertainty, and War,” ​ International Organization ​ 68:1 [ ​ link ​ ] [ ​ PDF ​ ] An in-depth account of a relevant case can be found in: ● Burr, W. & Richelson, J. T. (2000), “Whether to ‘Strangle the Baby in the Cradle’: The United States and the Chinese Nuclear Program, 1960-64,” ​ International Security ​ 25:3 [ ​ link ​ ] [ ​ PDF ​ ] v. International Relations: Case Studies A few good (sets of) qualitative case studies on race dynamics can be found here: ● Mahnken, T., Maiolo, J. & Stevenson, D. (2016), ​ Arms Races in International Politics: From the Nineteenth to the Twenty-first Century ​ [ ​ link to chapters ​ ] ● Hammond, G. T. (1993), ​ Plowshares into Swords: Arms Races in International Politics, 1840-1991 ● York, H. F. (1970), ​ Race to Oblivion: A Participant’s View of the Arms Race ● Evangelista, M. (1988), ​ Innovation and the Arms Race: How the United States and the Soviet Union Develop New Military Technologies vi. Economics: Contests and Races in Industry There is a very large body of relevant work in economics that addresses, among other questions, whether effort is more intense in close or distant races, why this is the case, and so forth. 
Early models of “patent races” focused on one-time competitions with a fixed endpoint (a single technological discovery). Later, these races were embedded in models of “sequences of innovations,” in which firms compete not only to be first in one-time races but rather to dominate the market in general, often in the hope of racing competitors out of business entirely. A parallel theoretical literature on “contests” draws in insights from dynamics including but not limited to industrial competition. Good 6 introductions to these three literatures are: ● Budd, C., Harris, C. & Vickers, J. (1993), “A Model of the Evolution of Duopoly: Does the Asymmetry between Firms Tend to Increase or Decrease?”, ​ Review of Economic Studies ​ 60 [ ​ link ​ ] [ ​ PDF ​ ] 6 Other related concepts in economics include auctions (especially the “all-pay” kind) and attrition-based bargaining. These are likely to be slightly less applicable to AI than contest and race models, but may nonetheless contain relevant insights. If this area is of interest, see Bulow, J. & Klemperer, P. (1999), “The Generalized War of Attrition,” ​ American Economic Review ​ 89:1 [ ​ link ​ ] for a relevant discussion and further references (see e.g. footnote discussions of all-pay auctions). 15 Remco Zwetsloot July 2018 v1 ● Konrad, K. A. (2012), “Dynamic Contests and the Discouragement Effect,” ​ Revue d’Economie Politique ​ [ ​ link ​ ] [ ​ PDF ​ ] ● Harris, C. & Vickers, J. (1987), “Racing with Uncertainty,” ​ Review of Economic Studies ​ 54:1 [ ​ link ​ ] [ ​ PDF ​ ] In economics, work on inter-firm competition is situated within the subfield of Industrial Organization (IO). For those who are interested in exploring this area further, a commonly used introductory textbook is: ● Belleflamme, P. & Peitz, M. (2010), ​ Industrial Organization: Markets and Strategies ​ [ ​ PDF ​ ] The classic reference is: ● Tirole, J. (1988), ​ The Theory of Industrial Organization ​ [ ​ link ​ ] vi. Economics: Case Studies The literature on racing in economics is somewhat lacking in interesting qualitative detail, but some commonly cited empirical studies are: ● Cockburn, I. & Henderson, R. (1995), “Racing to Invest? The Dynamics of Competition in Ethical Drug Discovery,” ​ Journal of Economic & Management Strategy ​ 3:3 [ ​ link ​ ] [ ​ PDF ​ ] ● Lerner, J. (1997), “An Empirical Exploration of a Technology Race,” ​ RAND Journal of Economics ​ 28:2 ​ ​ [ ​ link ​ ] [ ​ PDF ​ ] For a more recent paper, see (also check the references for more studies): ● Wang, I. K., Qian, L. & Lehrer, M. (2017), “From Technology Race to Technology Marathon: A Behavioral Explanation of Technology Advancement,” ​ European Management Journal ​ 35 [ ​ link ​ ] C. Technological Analogies AI has been analogized to a large range of other technologies. Sometimes comparisons mainly serve an argumentative or political purpose, but they are also often used for the purpose of 7 furthering understanding and research. There are downsides as well as upsides to this approach, especially when one analogizes at a very abstract level (“AI is like electricity”) rather than situating a comparison in a strategic context (“the verification problems with this AI application are similar to those in biotechnology”). When done carefully, however, comparing AI (applications) to other technologies can be productive. The subsections below present an introductory set of readings on various technological categories. 
Those interested in using analogies in their research may also benefit from engaging with some theoretical and empirical work on the uses, advantages, and drawbacks of analogical thinking: 7 See for example ​ this ​ use of the space race, a relatively popular comparison. 16 Remco Zwetsloot July 2018 v1 ● On recent emerging technologies, see Crootof, R. (2018), “Autonomous Weapon Systems and the Limits of Analogy,” ​ Harvard National Security Journal ​ 9 [ ​ PDF ​ ] and Pauwels, E. (2013), “Mind the Metaphor,” ​ Nature ​ 500 [ ​ PDF ​ ] ● On national security, see also Khong, Y. F. (1992), ​ Analogies at War ​ ; and Neustadt, R. E. & May, E. R. (1986), ​ Thinking in Time: The Uses of History for Decision Makers ● More generally, see Hofstadter, D. & Sander, E. (2013), ​ Surfaces and Essences: Analogy as the Fuel and Fire of our Thinking The first two subsections focus on two aspects of technologies that are commonly said to characterize AI: its “dual-use” and “general purpose” nature (somewhat confusingly, the latter is also sometimes referred to as “omni-use”). i. Dual-Use Technology Dual-use technologies are typically defined either as technologies that can be used for both civilian and military purposes, or, more broadly, as technologies that can be used for both positive and nefarious purposes. By either definition, AI is a dual-use technology. Moreover, like some (but not all) other dual-use technologies, it has large 8 commercial as well as (potential) military value. For (governance-focused) introductions to dual-use technologies, see: ● Harris, E. D. (ed.) (2016), ​ Governance of Dual-Use Technologies: Theory and Practice ​ , esp. the conclusion [ ​ link ​ ] [ ​ PDF ​ ] ● Resnik, D. B. (2013), “Scientific Control Over Dual-Use Research: Prospects for Self-Regulation,” in Rappert & Selgelid, ​ On the Dual Uses of Science and Ethics: Principles, Practices, and Prospects ​ [ ​ book link ​ ] [ ​ chapter PDF ​ ] ii. General Purpose Technology AI is also often thought of as a “general purpose technology” (GPT), akin to the steam engine and electricity. A large literature in economics discusses the emergence, characteristics, and implications of GPTs. For an introduction, see: ● Bresnahan, T. (2010), “General Purpose Technologies,” ch. 10 in Hall & Rosenberg, ​ Handbook of the Economics of Innovation ​ , volume 2 [ ​ book link ​ ] [ ​ chapter link ​ ] ​ ​ [ ​ PDF ​ ] ● Bekar, C., Carlaw, K. & Lipsey, R. (2017), “General Purpose Technologies in Theory, Application and Controversy: A Review,” ​ Journal of Evolutionary Economics ​ [ ​ link ​ ] ● Korzinov, V. & Savin, I. (2018), “General Purpose Technologies as an Emergent Property,” ​ Technological Forecasting and Social Change ​ [ ​ link ​ ] 8 Illustrating the policy relevance of this issue, one of the questions ​ flagged ​ by the UN GGE on LAWS was: “Does the transformative character of AI and its possible ubiquity limit the [lethal autonomous weapon systems] discussion in any manner, or is AI like other dual-use technologies in the past?” 17 Remco Zwetsloot July 2018 v1 The next three subsections focus on three technology domains that are often compared to AI, whether in general or on some particular strategic dimensions. I have tried to select readings that, in addition to discussing important features of these technologies, illustrate the benefits and limitations of analogizing. iii. Nuclear The AI-nuclear comparison is often motivated with reference to nuclear technology’s general transformative impact. 
While this is an interesting angle to take (see the Miller reading), there are also many ways in which nuclear technology is very different from artificial intelligence, including, notably, the available mechanisms for agreement verification (see the Acton reading and Harris’s conclusion in the same volume). ● Acton, J. M. (2016), “On the Regulation of Dual-Use Nuclear Technology,” in Harris, E. D., ​ Governance of Dual-Use Technologies: Theory and Practice ​ [ ​ link ​ ] [ ​ PDF ​ ] ● Miller, S. E. (2017), “Cyber Threats, Nuclear Analogies? Divergent Trajectories in Adapting to New Dual-Use Technologies,” in Perkovich & Levite, ​ Understanding Cyber Conflict: 14 Analogies ​ [ ​ PDF ​ ] ● See many of the readings in Section 3A for the history of strategic thought on nuclear questions. iv. Cyber There are obvious similarities between AI and cyber in terms of the digital fundamentals. It is also likely that both the actors involved in and the strategic challenges to successful AI governance are going to be similar to those that we’ve seen in action in cyber. The literature on cyber is still relatively nascent, but the following provide a good (governance-focused) introduction: ● Lin, H. (2016), “Governance of Information Technology and Cyber Weapons,” in Harris, E. D., ​ Governance of Dual-Use Technologies: Theory and Practice ​ [ ​ link ​ ] [ ​ PDF ​ ] ● Buchanan, B. (2016), ​ The Cybersecurity Dilemma: Hacking, Trust and Fear Between Nations ● Perkovich, G. & Levite, A. E. (2017), “Conclusions,” in Perkovich, G. & Levite, A. E., ​ Understanding Cyber Conflict: 14 Analogies ​ [ ​ link ​ ] [ ​ PDF ​ ] There have also been some instructive attempts to understand cyber through analogies: ● Perkovich, G. & Levite, A. E. (eds.) (2017), ​ Understanding Cyber Conflict: 14 Analogies ​ [ ​ link ​ ] [ ​ PDF ​ ] ● Goldman & Arquilla (2014), ​ Cyber Analogies ​ [ ​ link ​ ] For those interested in reading more about cyber, a great general reference is Max Smeets’s ​ Cyber References Project ​ . 18 Remco Zwetsloot July 2018 v1 v. Biotechnology An increasingly common comparison for AI is biotechnology. One reason for this is that there are, generally speaking, relatively few barriers to the development and usage of both technologies. A possible disanalogy is that biotechnology is at least superficially related to the pre-existing regime covering biological risks and weapons more broadly, both domestically and internationally, and governance appears somewhat less challenging. ● Harris, E. D. (2016), “Dual-Use Threats: The Case of Biological Technology,” in Harris, E. D., ​ Governance of Dual-Use Technologies: Theory and Practice ​ [ ​ link ​ ] [ ​ PDF ​ ] ● Carus, W. S. (2017), “A Century of Biological-Weapons Programs,” ​ The Nonproliferation Review ​ 24:1-2 ​ ​ [ ​ link ​ ] [ ​ PDF ​ ] ● Koblentz, G. D. & Mazanec, B. M. (2013), “Viral Warfare: The Security Implications of Cyber and Biological Weapons,” ​ Comparative Strategy ​ 32:5 [ ​ link ​ ] [ ​ PDF ​ ] An interesting effort at the intersection of biotechnology and national security that has potential applicability to AI is discussed in: ● Zhang, L. & Gronvall, G. K. (2018), “Red Teaming the Biological Sciences for Deliberate Threats,” ​ Terrorism and Political Violence ​ [ ​ link ​ ] [ ​ PDF ​ ] D. Government and Technology Zooming out, there may be relevant insights in bigger-picture efforts to understand past attempts by governments to harness technologies to increase their power and improve society. 
Some interesting work in this general area is: ● McNeill, W. H. (1982), ​ The Pursuit of Power: Technology, Armed Force, and Society since A.D. 1000 ● Taylor, M. Z. (2016), ​ The Politics of Innovation: Why Some Countries Are Better than Others at Science and Technology ​ [ ​ link to chapters ​ ] ● Ruttan, V. W. (2006), “Is War Necessary for Economic Growth?”, Clemons Lecture [ ​ PDF ​ ] ○ For more details, see also his two books ​ Technology, Growth, and Development: An Induced Innovation Perspective ​ (2001) and ​ Is War Necessary for Economic Growth? Military Procurement and Technology Development ​ (2006) [ ​ link to chapters ​ ] 19
15b6f1d6-112d-4571-a1f5-cfed6f0a20c2
trentmkelly/LessWrong-43k
LessWrong
Time complexity for deterministic string machines

This was a project conducted during MATS 5.0 under the mentorship of Vanessa Kosoy and supported by a grant from BERI. It builds off the String Machines framework (and depends on the linked post for certain definitions), which models category-theoretic generalizations of finite-state transducers. [EDIT: The results here can also be found presented in self-contained form, in this paper.]

The framework as it previously existed did not have representation-independent ways of bounding (analogues of) time complexity, or natural guarantees that output size would not grow exponentially in input size. We introduce "filtered" transducers, which operate on categories enriched over filtered sets (sets equipped with a function to a partially ordered monoid, where morphisms are functions respecting order), and then, restricting our attention to transducers with a finite state space, prove constraints on the time complexity growth and expressivity of string machines.

----------------------------------------

Parameterizing complexity in string machines

Filtered transducers

> Definition 1. The category FiltSet of filtered sets is the category such that
>
> * an object is a tuple (S, deg_S), where S is a set and deg_S : S → N is a function,
> * a morphism f : (S, deg_S) → (T, deg_T) is a function S → T such that deg_T(f(s)) ≤ deg_S(s) for all s ∈ S.

We will generally refer to objects in FiltSet solely by the symbol corresponding to the underlying set going forward. One can observe that the identity function on a set S by definition satisfies deg_S(id_S(s)) = deg_S(s) for all s ∈ S and is thus a morphism in FiltSet. One can also observe that given f : S → T and g : T → V, deg_V(g(f(s))) ≤ deg_T(f(s)) ≤ deg_S(s) for all s ∈ S, and therefore g ∘ f is also a morphism in FiltSet. Therefore, FiltSet is indeed a category.

> Definition 2. Given two objects S, T ∈ Ob(FiltSet), we define their filtered product S ⊗ T to be the set S × T equipped with the function deg_{S⊗T} : S × T → N satisfying deg_{S⊗T}(s, t) = deg_S(s) + deg_T(t) for all (s, t) ∈ S × T. Give
f3e06af6-7511-4565-81c3-db736c8aadb2
trentmkelly/LessWrong-43k
LessWrong
Luna Lovegood and the Chamber of Secrets - Part 13 "Wait," said Luna, "This is the Lost Diadem of Ravenclaw. It makes the wearer smarter. You might want it." Professor Quirrel took the diadem in his hands. He feinted as if to place it over his head. "I am an Occlumens," said Professor Quirrel, "Ravenclaw's device rips the incoherence out of doublethink. If I were to place this device over my head I would be lucky if it did not shred my mind. Nice try." Professor Quirrel tossed the diadem back to Luna. Luna kowtowed. "I heard stories of the First Wizarding War. You never cared much for individual human beings but you were always very careful not to destroy wizardkind," said Luna, "I get the feeling you put some effort into protecting the universe." "So?" said Professor Quirrel. "You are bored. This plane is too small for you," said Luna. You-Know-Who did not murder her. "You should not be a villain," said Luna. "If you tell me to be a hero then you will die painfully," said Professor Quirrel. "You should be a god," said Luna. Luna willingly bestowed the astrolabe to Professor Quirrel. "Is that all?" said Professor Quirrel. "Yes," said Luna. "Avada Kedavra," said Professor Quirrel. Luna collapsed. Professor Quirrel sheathed his wand. His slender skeleton fingers untangled the clockwork. Professor Quirrel unfolded the astrolabe around him. He ascended to a higher plane of existence. ---------------------------------------- Luna stepped out of the Forgotten Library. She held the Sword of Gryffindor in her left hand and Wanda in her right. She buried Wanda in Hagrid's pumpkin patch. ---------------------------------------- The final duel of Lockhart's tournament was that afternoon. Professor Flitwick refereed. Luna lost. ---------------------------------------- Clang. Luna dropped the Sword of Gryffindor on Professor Lockhart's empty chair. She sat down for dinner in her seat at the end of the Ravenclaw table. A student stood behind her. "You fought well in Lockhart's dueling tournament," said Ginev
b5f7482d-e5dc-4ed1-935b-86fbcd105e4c
trentmkelly/LessWrong-43k
LessWrong
. .
68a0ca5c-e6eb-4481-ae7b-ef81258207b2
trentmkelly/LessWrong-43k
LessWrong
Purposefulness on Mars Three different Martians built the Three Sacred Stone Walls of Mars according to the Three Virtues of walls:Height, Strength, and Beauty. An evil Martian named Ution was the first and stupidest of all wallbuilders. He was too stupid to truly understand even the most basic virtue of height, and too evil to care for any other virtue. None the less, something about tall walls caused Evil Ution to build more tall walls, sometimes one on top of the other. At times his walls would fall as he was building them, he did not understand why, nor did he care. He simply copied the high walls he had already built, whichever were still standing. His wall did achieve some strength and beauty. Most consisted of thousands of similar archways stacked on top of each other. Thousands upon thousands of intricately interlocking stones. Each arch a distantly removed copy of some prototypical archway that was strong and light enough to support itself many times over. To this day his walls are the highest in all of Mars. Many Martian Millenia later came the next great wallbuilder: Sid.  Sid was far more intelligent than Ution, but he was just as single minded. We know from his archived odor sequence deposits that he understood the virtues of height and beauty, but celebrated his own willingness to sacrifice them entirely for the tiniest bit of added strength. When a critic asked why he did not simply place a one solid ugly stone on the ground, he replied simply "Fool! A single stone can yet be moved." Indeed, Sid's walls are shown by modern computer modeling to be stronger than any solid stone available on Mars at the time. The intricate interlocking matrices of cut stone redistribute stress so well, and the wall are so tightly anchored to the bedrock, that an underlying fault line has been repaired. This causes tectonic stresses even in far distant parts of our planet. Despite the fact that Sid clearly made every decision exclusively favoring strength, his walls also hold the virtue
7cedf21d-dad4-41b1-9cf2-14b523273191
StampyAI/alignment-research-dataset/blogs
Blogs
Biology-Inspired AGI Timelines: The Trick That Never Works

– 1988 –
--------

**Hans Moravec:**  Behold my book *Mind Children.*  Within, I project that, in 2010 or thereabouts, we shall achieve strong AI.  I am not calling it “Artificial General Intelligence” because this term will not be coined for another 15 years or so.

**Eliezer** (who is not actually on the record as saying this, because the real Eliezer is, in this scenario, 8 years old; this version of Eliezer has all the meta-heuristics of Eliezer from 2021, but none of that Eliezer’s anachronistic knowledge):  Really?  That sounds like a very difficult prediction to make correctly, since it is about the future, which is famously hard to predict.

**Imaginary Moravec:**  Sounds like a [fully general counterargument](https://www.lesswrong.com/tag/fully-general-counterargument) to me.

**Eliezer:**  Well, it is, indeed, a fully general counterargument *against futurism.*  Successfully predicting the unimaginably far future – that is, more than 2 or 3 years out, or sometimes less – is something that human beings seem to be quite bad at, by and large.

**Moravec:**  I predict that, 4 years from this day, in 1992, the Sun will rise in the east.

**Eliezer:**  Okay, let me qualify that.  Humans seem to be quite bad at predicting the future whenever we need to predict anything at all *new and unfamiliar,* rather than the Sun continuing to rise every morning until it finally gets eaten.  I’m not saying it’s impossible to ever validly predict something novel!  Why, even if that was impossible, how could *I* know it for sure?  By extrapolating from my own personal inability to make predictions like that?  Maybe I’m just bad at it myself.  But any time somebody claims that some particular novel aspect of the far future is predictable, they justly have a significant burden of prior skepticism to overcome.

More broadly, we should not expect a good futurist to give us a generally good picture of the future.  We should expect a great futurist to single out a few *rare narrow aspects* of the future which are, somehow, *exceptions* to the usual rule about the future not being very predictable.

I do agree with you, for example, that we shall *at some point* see Artificial General Intelligence.  This seems like a rare predictable fact about the future, even though it is about a novel thing which has not happened before: we keep trying to crack this problem, we make progress albeit slowly, the problem must be solvable in principle because human brains solve it, eventually it will be solved; this is not a logical necessity, but it sure seems like the way to bet.  “AGI eventually” is predictable in a way that it is *not* predictable that, e.g., the nation of Japan, presently upon the rise, will achieve economic dominance over the next decades – to name something else that present-day storytellers of 1988 are talking about.

But *timing* the novel development correctly?  *That* is almost never done, not until things are 2 years out, and often not even then.  Nuclear weapons were called, but not nuclear weapons in 1945; heavier-than-air flight was called, but not flight in 1903.  In both cases, people said two years earlier that it wouldn’t be done for 50 years – or said, decades too early, that it’d be done shortly.  There’s a difference between worrying that we may eventually get a serious global pandemic, worrying that eventually a lab accident may lead to a global pandemic, and forecasting that a global pandemic will start in November of 2019.
**Moravec:**  You should read my book, my friend, into which I have put much effort.  In particular – though it may sound impossible to forecast, to the likes of yourself – I have carefully examined a graph of computing power in single chips and the most powerful supercomputers over time.  This graph looks surprisingly regular!  Now, of course not all trends can continue forever; but I have considered the arguments that Moore’s Law will break down, and found them unconvincing.  My book spends several chapters discussing the particular reasons and technologies by which we might expect this graph to *not* break down, and continue, such that humanity *will* have, by 2010 or so, supercomputers which can perform 10 trillion operations per second.\* Oh, and also my book spends a chapter discussing the retina, the part of the brain whose computations we understand in the most detail, in order to estimate how much computing power the human brain is using, arriving at a figure of 10^13 ops/sec.  This neuroscience and computer science may be a bit hard for the layperson to follow, but I assure you that I am in fact an experienced hands-on practitioner in robotics and computer vision. So, as you can see, we should first get strong AI somewhere around 2010.  I may be off by an order of magnitude in one figure or another; but even if I’ve made two errors in the same direction, that only shifts the estimate by 7 years or so. (\*)  Moravec just about nailed this part; the actual year was 2008. **Eliezer:**  I sure would be amused if we *did* in fact get strong AI somewhere around 2010, which, for all *I*know at this point in this hypothetical conversation, could totally happen!  Reversed stupidity is not intelligence, after all, and just because that is a completely broken justification for predicting 2010 doesn’t mean that it cannot happen that way. **Moravec:**  Really now.  Would you care to enlighten me as to how I reasoned so wrongly? **Eliezer:**  Among the reasons why the Future is so hard to predict, in general, is that the sort of answers we want tend to be the products of lines of causality with multiple steps and multiple inputs.  Even when we can guess a single fact that *plays some role* in producing the Future – which is not of itself all that rare – usually the answer the storyteller wants depends on *more facts* than that single fact.  Our ignorance of any one of those other facts can be enough to torpedo our whole line of reasoning – *in practice,*not just as a matter of possibilities.  You could say that the art of exceptions to Futurism being impossible, consists in finding those rare things that you can predict despite being almost entirely ignorant of most concrete inputs into the concrete scenario.  Like predicting that AGI will happen *at some point*, despite not knowing the design for it, or who will make it, or how. My own contribution to the Moore’s Law literature consists of Moore’s Law of Mad Science:  “Every 18 months, the minimum IQ required to destroy the Earth drops by 1 point.”  Even if this serious-joke was an absolutely true law, and aliens told us it was absolutely true, we’d still have no ability whatsoever to predict thereby when the Earth would be destroyed, because we’d have no idea what that minimum IQ was right now or at any future time.  
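(As a rough illustration of the extrapolation Moravec is running above – with the 1988 baseline ops/sec and the doubling time as placeholder assumptions for illustration, not figures taken from *Mind Children* – the arithmetic is short enough to write out:)

```python
import math

def crossover_year(required_ops, start_year=1988, start_ops=1e9,
                   doubling_years=1.5):
    """Year a Moore's-Law-style trend first reaches `required_ops`.

    `start_ops` and `doubling_years` are illustrative placeholders,
    not numbers taken from Moravec.
    """
    doublings = math.log2(required_ops / start_ops)
    return start_year + doublings * doubling_years

print(round(crossover_year(1e13)))          # ~2008 with these placeholders
print(round(crossover_year(1e13 * 1e3)))    # requirement off by +3 OOM -> ~2023
print(round(crossover_year(1e13 * 1e-3)))   # requirement off by -3 OOM -> ~1993
```

(Under an 18-month doubling time, a three-order-of-magnitude error in the requirement moves the date by about fifteen years; Moravec’s in-dialogue figures of “7 years or so” for two orders of magnitude and “10 years” for three correspond to assuming roughly one doubling per year.)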
We would know that in general the Earth had a serious problem that needed to be addressed, because we’d know in general that destroying the Earth kept on getting easier every year; but we would not be able to time *when* that would become an imminent emergency, until we’d seen enough specifics that the crisis was already upon us. In the case of your prediction about strong AI in 2010, I might put it as follows:  The timing of AGI could be seen as a product of three factors, one of which you can try to extrapolate from existing graphs, and two of which you don’t know at all.  Ignorance of any one of them is enough to invalidate the whole prediction. These three factors are: * The availability of computing power over time, which may be quantified, and appears steady when graphed; * The rate of progress in knowledge of cognitive science and algorithms over time, which is much harder to quantify; * A function that is a latent background parameter, for the amount of computing power required to create AGI as a function of any particular level of knowledge about cognition; and about this we know almost nothing. Or to rephrase:  Depending on how much you and your civilization know about AI-making – how much you know about cognition and computer science – it will take you a variable amount of computing power to build an AI.  If you really knew what you were doing, for example, I confidently predict that you could build a mind at least as powerful as a human mind, while using *fewer*floating-point operations per second than a human brain is making useful use of – **Chris Humbali:**  Wait, did you just say “confidently”?  How could you possibly know *that* with confidence?  How can you criticize Moravec for being too confident, and then, in the next second, turn around and be confident of something yourself?  Doesn’t that make you a massive hypocrite? **Eliezer:**  Um, who are you again? **Humbali:**  I’m the cousin of Pat Modesto from [your previous dialogue on Hero Licensing](https://www.lesswrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing)!  Pat isn’t here in person because “Modesto” looks unfortunately like “Moravec” on a computer screen.  And also their first name looks a bit like “Paul” who is not meant to be referenced either.  So today *I* shall be your true standard-bearer for good calibration, intellectual humility, the outside view, and reference class forecasting – **Eliezer:**  Two of these things are not like the other two, in my opinion; and Humbali and Modesto do not understand how to operate any of the four correctly, in my opinion; but anybody who’s read “[Hero Licensing](https://www.lesswrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing)” should already know I believe that. **Humbali:**  – and I don’t see how Eliezer can possibly be so *confident,* after all his humble talk of the difficulty of futurism, that it’s possible to build a mind ‘as powerful as’ a human mind using ‘less computing power’ than a human brain. **Eliezer:**  It’s overdetermined by multiple lines of inference.  We might first note, for example, that the human brain runs very slowly in a *serial* sense and tries to make up for that with massive parallelism.  It’s an obvious truth of computer science that while you can use 1000 serial operations per second to emulate 1000 parallel operations per second, the reverse is not in general true. 
To put it another way: if you had to build a spreadsheet or a word processor on a computer running at 100Hz, you might also need a billion processing cores and massive parallelism in order to do enough cache lookups to get anything done; that wouldn’t mean the computational labor you were performing was *intrinsically*that expensive.  Since modern chips are massively serially faster than the neurons in a brain, and the direction of conversion is asymmetrical, we should expect that there are tasks which are immensely expensive to perform in a massively parallel neural setup, which are much cheaper to do with serial processing steps, and the reverse is *not* symmetrically true. A sufficiently adept builder can build general intelligence more cheaply in total operations per second, if they’re allowed to line up a billion operations one after another per second, versus lining up only 100 operations one after another.  I don’t bother to qualify this with “very probably” or “almost certainly”; it is the sort of proposition that a clear thinker should simply accept as obvious and move on. **Humbali:**  And is it certain that neurons can perform only 100 serial steps one after another, then?  As you say, ignorance about one fact can obviate knowledge of any number of others. **Eliezer:**  A typical neuron firing as fast as possible can do maybe 200 spikes per second, a few rare neuron types used by eg bats to echolocate can do 1000 spikes per second, and the vast majority of neurons are not firing that fast at any given time.  The usual and proverbial rule in neuroscience – the sort of academically respectable belief I’d expect you to respect even more than I do – is called “the 100-step rule”, that any task a human brain (or mammalian brain) can do on perceptual timescales, must be doable with no more than 100 *serial* steps of computation – no more than 100 things that get computed one after another.  Or even less if the computation is running off spiking frequencies instead of individual spikes. **Moravec:**  Yes, considerations like that are part of why I’d defend my estimate of 10^13 ops/sec for a human brain as being reasonable – more reasonable than somebody might think if they were, say, counting all the synapses and multiplying by the maximum number of spikes per second in any neuron.  If you actually look at what the retina is doing, and how it’s computing that, it doesn’t look like it’s doing one floating-point operation per activation spike per synapse. **Eliezer:**  There’s a similar asymmetry between precise computational operations having a vastly easier time emulating noisy or imprecise computational operations, compared to the reverse – there is no doubt a way to use neurons to compute, say, exact 16-bit integer addition, which is at least *more*efficient than a human trying to add up 16986+11398 in their heads, but you’d still need more synapses to do that than transistors, because the synapses are noisier and the transistors can just do it precisely.  This is harder to visualize and get a grasp on than the parallel-serial difference, but that doesn’t make it unimportant. Which brings me to the second line of very obvious-seeming reasoning that converges upon the same conclusion – that it is in principle possible to build an AGI much more computationally efficient than a human brain – namely that biology is simply *not that efficient,*and *especially* when it comes to huge complicated things that it has started doing relatively recently. 
ATP synthase may be close to 100% thermodynamically efficient, but ATP synthase is literally over 1.5 billion years old and a core bottleneck on all biological metabolism.  Brains have to pump thousands of ions in and out of each stretch of axon and dendrite, in order to restore their ability to fire another fast neural spike.  The result is that the brain’s computation is something like half a million times less efficient than the thermodynamic limit for its temperature – so around two millionths as efficient as ATP synthase.  And neurons are a hell of a lot older than the biological software for general intelligence! The software for a human brain is not going to be 100% efficient compared to the theoretical maximum, nor 10% efficient, nor 1% efficient, even *before* taking into account the whole thing with parallelism vs. serialism, precision vs. imprecision, or similarly clear low-level differences. **Humbali:**  Ah!  But allow me to offer a consideration here that, I would wager, you’ve never thought of before yourself – namely – *what if you’re wrong?*  Ah, not so confident now, are you? **Eliezer:**  One observes, over one’s cognitive life as a human, which sorts of what-ifs are useful to contemplate, and where it is wiser to spend one’s limited resources planning against the alternative that one might be wrong; and I have oft observed that lots of people don’t… quite seem to understand how to use ‘what if’ all that well?  They’ll be like, “[Well, what if UFOs are aliens, and the aliens are partially hiding from us but not perfectly hiding from us, because they’ll seem higher-status if they make themselves observable but never directly interact with us?](https://www.overcomingbias.com/2021/06/ufos-what-the-hell.html)” I can refute individual what-ifs like that with specific counterarguments, but I’m not sure how to convey the central generator behind how I know that I ought to refute them.  I am not sure how I can get people to reject these ideas for themselves, instead of them passively waiting for me to come around with a specific counterargument.  My having to counterargue things specifically now seems like a road that never seems to end, and I am not as young as I once was, nor am I encouraged by how much progress I seem to be making.  I refute one wacky idea with a specific counterargument, and somebody else comes along and presents a new wacky idea on almost exactly the same theme. I know it’s probably not going to work, if I try to say things like this, but I’ll try to say them anyways.  When you are going around saying ‘what-if’, there is a very great difference between your map of reality, and the territory of reality, which is extremely narrow and stable.  Drop your phone, gravity pulls the phone downward, it falls.  What if there are aliens and they make the phone rise into the air instead, maybe because they’ll be especially amused at violating the rule after you just tried to use it as an example of where you could be confident?  Imagine the aliens watching you, imagine their amusement, contemplate how fragile human thinking is and how little you can ever be assured of anything and ought not to be too confident.  Then drop the phone and watch it fall.  You’ve now learned something about how reality itself isn’t made of what-ifs and reminding oneself to be humble; reality runs on rails stronger than your mind does. Contemplating this doesn’t mean you *know* the rails, of course, which is why it’s so much harder to predict the Future than the past.  
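(The “half a million times less efficient than the thermodynamic limit” claim above can be roughly reconstructed from the Landauer bound; the answer depends heavily on what one counts as an “operation,” so the operations-per-second figures below are assumptions for illustration only:)

```python
import math

k_B = 1.38e-23                     # Boltzmann constant, J/K
T = 310.0                          # rough body temperature, K
landauer = k_B * T * math.log(2)   # ~3e-21 J minimum per bit erased

brain_watts = 20.0
for ops_per_sec in (1e14, 1e15, 1e16):   # assumed "useful ops"; this is the crux
    joules_per_op = brain_watts / ops_per_sec
    print(f"{ops_per_sec:.0e} ops/s -> {joules_per_op / landauer:.1e} x Landauer")
```

(Counting on the order of 10^16 synaptic events per second gives a ratio in the high hundreds of thousands, roughly where the “half a million” figure lands; shifting that assumed count by an order of magnitude shifts the ratio by the same factor.)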
But if you see that your thoughts are still wildly flailing around what-ifs, it means that they’ve failed to gel, in some sense, they are not yet bound to reality, because reality has no binding receptors for what-iffery. The correct thing to do is not to act on your what-ifs that you can’t figure out how to refute, but to go on looking for a model which makes narrower predictions than that.  If that search fails, forge a model which puts some more numerical distribution on your highly entropic uncertainty, instead of diverting into specific what-ifs.  And in the latter case, understand that this probability distribution reflects your ignorance and subjective state of mind, rather than your knowledge of an objective frequency; so that somebody else is allowed to be less ignorant without you shouting “Too confident!” at them.  Reality runs on rails as strong as math; sometimes other people will achieve, before you do, the feat of having their own thoughts run through more concentrated rivers of probability, in some domain. Now, when we are trying to concentrate our thoughts into deeper, narrower rivers that run closer to reality’s rails, there is of course the legendary hazard of concentrating our thoughts into the *wrong* narrow channels that *exclude* reality.  And the great legendary sign of this condition, of course, is the counterexample from Reality that falsifies our model!  But you should not in general criticize somebody for trying to concentrate their probability into narrower rivers than yours, for this is the appearance of the great general project of trying to get to grips with Reality, that runs on true rails that are narrower still. If you have concentrated your probability into *different* narrow channels than somebody else’s, then, of course, you have a more interesting dispute; and you should engage in that legendary activity of trying to find some accessible experimental test on which your nonoverlapping models make different predictions. **Humbali:**  I do not understand the import of all this vaguely mystical talk. **Eliezer:**  I’m trying to explain why, when I say that I’m very confident it’s possible to build a human-equivalent mind using less computing power than biology has managed to use effectively, and you say, “How can you be so *confident,* what if you are *wrong,*” it is not unreasonable for me to reply, “Well, kid, this doesn’t seem like one of those places where it’s particularly important to worry about far-flung ways I could be wrong.”  Anyone who aspires to learn, learns over a lifetime which sorts of guesses are more likely to go oh-no-wrong in real life, and which sorts of guesses are likely to just work.  Less-learned minds will have minds full of what-ifs they can’t refute in more places than more-learned minds; and even if you cannot see how to refute all your what-ifs yourself, it is possible that a more-learned mind knows why they are improbable.  For one must distinguish possibility from probability. It is *imaginable* or *conceivable* that human brains have such refined algorithms that they are operating at the absolute limits of computational efficiency, or within 10% of it.  
But if you’ve spent enough time noticing *where*Reality usually exercises its sovereign right to yell “Gotcha!” at you, learning *which* of your assumptions are the kind to blow up in your face and invalidate your final conclusion, you can guess that “Ah, but what if the brain is nearly 100% computationally efficient?” is the sort of what-if that is not much worth contemplating because it is not actually going to be true in real life.  Reality is going to confound you in some other way than that. I mean, maybe you haven’t read enough neuroscience and evolutionary biology that you can see from your own knowledge that the proposition sounds massively implausible and ridiculous.  But it should hardly seem unlikely that somebody else, more learned in biology, might be justified in having more confidence than you.  Phones don’t fall up.  Reality really is very stable and orderly in a lot of ways, even in places where you yourself are ignorant of that order. But if “What if aliens are making themselves visible in flying saucers because they want high status and they’ll have higher status if they’re occasionally observable but never deign to talk with us?” sounds to you like it’s totally plausible, and you don’t see how someone can be *so confident* that it’s not true – because oh *no* what if you’re *wrong* and you haven’t *seen* the aliens so how can you *know* what they’re not thinking – then I’m not sure how to lead you into the place where you can dismiss that thought with confidence.  It may require a kind of life experience that I don’t know how to give people, at all, let alone by having them passively read paragraphs of text that I write; a learned, perceptual sense of which what-ifs have any force behind them.  I mean, I can refute that specific scenario, I *can* put that learned sense into words; but I’m not sure that does me any good unless you learn how to refute it yourself. **Humbali:**  Can we leave aside all that meta stuff and get back to the object level? **Eliezer:**  This indeed is often wise. **Humbali:**  Then here’s one way that the minimum computational requirements for general intelligence could be *higher* than Moravec’s argument for the human brain.  Since, after, all, we only have one existence proof that general intelligence is possible at all, namely the human brain.  Perhaps there’s no way to get general intelligence in a computer except by simulating the brain neurotransmitter-by-neurotransmitter.  In that case you’d need a lot *more* computing operations per second than you’d get by calculating the number of potential spikes flowing around the brain!  What if it’s true?  How can you *know?* (**Modern person:**  This seems like an obvious straw argument?  I mean, would anybody, even at an earlier historical point, actually make an argument like – **Moravec and Eliezer:**  YES THEY WOULD.) **Eliezer:**  I can imagine that if we were trying specifically to *upload a human* that there’d be no easy and simple and obvious way to run the resulting simulation and get a good answer, without simulating neurotransmitter flows in extra detail. To imagine that every one of these simulated flows is *being usefully used in general intelligence and there is no way to simplify the mind design to use fewer computations…*  I suppose I could try to refute that specifically, but it seems to me that this is a road which has no end unless I can convey the generator of my refutations.  
Your what-iffery is flung far enough that, if I cannot leave even that much rejection as an exercise for the reader to do on their own without my holding their hand, the reader has little enough hope of following the rest; let them depart now, in indignation shared with you, and save themselves further outrage. I mean, it will obviously be *less* obvious to the reader because they will know *less* than I do about this exact domain, it will justly take *more* work for the reader to specifically refute you than it takes me to refute you.  But I think the reader needs to be able to do that at all, in this example, to follow the more difficult arguments later. **Imaginary Moravec:**  I don’t think it changes my conclusions by an order of magnitude, but some people would worry that, for example, changes of protein expression inside a neuron in order to implement changes of long-term potentiation, are also important to intelligence, and could be a big deal in the brain’s real, effectively-used computational costs.  I’m curious if you’d dismiss that as well, the same way you dismiss the probability that you’d have to simulate every neurotransmitter molecule? **Eliezer:**  Oh, of course not.  Long-term potentiation suddenly turning out to be a big deal you overlooked, compared to the depolarization impulses spiking around, is *very* much the sort of thing where Reality sometimes jumps out and yells “Gotcha!” at you. **Humbali:**  *How can you tell the difference?* **Eliezer:**  Experience with Reality yelling “Gotcha!” at myself and historical others. **Humbali:**  They seem like equally plausible speculations to me! **Eliezer:**  Really?  “What if long-term potentiation is a big deal and computationally important” sounds just as plausible to you as “What if the brain is already close to the wall of making the most efficient possible use of computation to implement general intelligence, and every neurotransmitter molecule matters”? **Humbali:**  Yes!  They’re both what-ifs we can’t know are false and shouldn’t be overconfident about denying! **Eliezer:**  My tiny feeble mortal mind is far away from reality and only bound to it by the loosest of correlating interactions, but I’m not *that* unbound from reality. **Moravec:**  I would guess that in real life, long-term potentiation is sufficiently slow and local that what goes on inside the cell body of a neuron over minutes or hours is not as big of a computational deal as thousands of times that many spikes flashing around the brain in milliseconds or seconds.  That’s why I didn’t make a big deal of it in my own estimate. **Eliezer:**  Sure.  But it *is*much more the sort of thing where you wake up to a reality-authored science headline saying “Gotcha!  There were tiny DNA-activation interactions going on in there at high speed, and they were actually pretty expensive and important!”  I’m not saying this exact thing is very probable, just that it wouldn’t be out-of-character for reality to say *something*like that to me, the way it would be really genuinely bizarre if Reality was, like, “Gotcha!  The brain is as computationally efficient of a generally intelligent engine as any algorithm can be!” **Moravec:**  I think we’re in agreement about that part, or we would’ve been, if we’d actually had this conversation in 1988.  I mean, I *am* a competent research roboticist and it is difficult to become one if you are completely unglued from reality. 
**Eliezer:**  Then what’s with the 2010 prediction for strong AI, and the massive non-sequitur leap from “the human brain is somewhere around 10 trillion ops/sec” to “if we build a 10 trillion ops/sec supercomputer, we’ll get strong AI”? **Moravec:**  Because while it’s the kind of Fermi estimate that can be off by an order of magnitude in practice, it doesn’t really seem like it should be, I don’t know, off by three orders of magnitude?  And even three orders of magnitude is just 10 years of Moore’s Law.  2020 for strong AI is also a bold and important prediction. **Eliezer:**  And the year 2000 for strong AI even more so. **Moravec:**  Heh!  That’s not usually the direction in which people argue with me. **Eliezer:**  There’s an important distinction between the direction in which people usually argue with you, and the direction from which Reality is allowed to yell “Gotcha!”  I wish my future self had kept this more in mind, when arguing with Robin Hanson about how well AI architectures were liable to generalize and scale without a ton of domain-specific algorithmic tinkering for every field of knowledge.  I mean, in principle what I was arguing for was various lower bounds on performance, but I sure could have emphasized more loudly that those were *lower* bounds – well, I *did* emphasize the lower-bound part, but – from the way I felt when AlphaGo and Alpha Zero and GPT-2 and GPT-3 showed up, I think I must’ve sorta forgot that myself. **Moravec:**  Anyways, if we say that I might be up to three orders of magnitude off and phrase it as 2000-2020, do you agree with my prediction then? **Eliezer:**  No, I think you’re just… arguing about the wrong facts, in a way that seems to be unglued from most tracks Reality might follow so far as I currently know?  On my view, creating AGI is strongly dependent on how much knowledge you have about how to do it, in a way which almost *entirely*obviates the relevance of arguments from human biology? Like, human biology tells us a single not-very-useful data point about how much computing power evolutionary biology needs in order to build a general intelligence, using very alien methods to our own.  Then, very separately, there’s the constantly changing level of how much cognitive science, neuroscience, and computer science our own civilization knows.  We don’t know how much computing power is required for AGI for *any* level on that constantly changing graph, and biology doesn’t tell us.  All we know is that the hardware requirements for AGI must be dropping by the year, because the knowledge of how to create AI is something that only increases over time. At some point the moving lines for “decreasing hardware required” and “increasing hardware available” will cross over, which lets us predict that AGI gets built at *some* point.  But we don’t know how to graph two key functions needed to predict that date.  You would seem to be committing the classic fallacy of searching for your keys under the streetlight where the visibility is better.  You know how to estimate how many floating-point operations per second the retina could effectively be using, but *this is not the number you need to predict the outcome you want to predict.*  You need a graph of human knowledge of computer science over time, and then a graph of how much computer science requires how much hardware to build AI, and neither of these graphs are available. 
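(To make that concrete: a minimal sketch in which the only extrapolable curve is hardware availability, while the “hardware required given current algorithmic knowledge” curve is replaced by three equally arbitrary stand-ins. Every specific number here is invented for illustration:)

```python
def available_ops(year, base_year=1988, base_ops=1e9, doubling_years=1.5):
    # The one curve that looks extrapolable (placeholder parameters).
    return base_ops * 2 ** ((year - base_year) / doubling_years)

def crossover(required_ops):
    # First year in which available compute meets the assumed requirement.
    return next(y for y in range(1988, 2300) if available_ops(y) >= required_ops(y))

# Three arbitrary stand-ins for the unknowable curve "ops needed for AGI,
# given the algorithms known in that year":
candidates = {
    "fixed at a brain-ish 1e13":            lambda y: 1e13,
    "1e19 in 1988, halving every 2 years":  lambda y: 1e19 * 0.5 ** ((y - 1988) / 2),
    "1e25 in 1988, halving every 4 years":  lambda y: 1e25 * 0.5 ** ((y - 1988) / 4),
}
for name, req in candidates.items():
    print(f"{name}: crossover ~{crossover(req)}")
```

(Three equally defensible-looking guesses about the curve nobody can measure put the crossover anywhere from the 2000s to the 2040s; that is the sense in which the biological anchor is doing almost none of the forecasting work.)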
It *doesn’t matter* how many chapters your book spends considering the continuation of Moore’s Law or computation in the retina, and I’m sorry if it seems rude of me in some sense to just dismiss the relevance of all the hard work you put into arguing it.  But you’re arguing the *wrong facts* to get to the conclusion, so all your hard work is for naught. **Humbali:**  Now it seems to me that I must chide you for being too dismissive of Moravec’s argument.  Fine, yes, Moravec has not established with *logical certainty* that strong AI must arrive at the point where top supercomputers match the human brain’s 10 trillion operations per second.  But has he not established a *reference class,* the sort of *base rate* that good and virtuous superforecasters, unlike yourself, go looking for when they want to *anchor*their estimate about some future outcome?  Has he not, indeed, established the sort of argument which says that if top supercomputers can do only *ten million* operations per second, we’re not very likely to get AGI earlier than that, and if top supercomputers can do *ten quintillion* operations per second\*, we’re unlikely not to already have AGI? (\*) In 2021 terms, [10 TPU v4 pods](https://cloud.google.com/blog/products/ai-machine-learning/google-wins-mlperf-benchmarks-with-tpu-v4). **Eliezer:**  With ranges that wide, it’d be more likely and less amusing to hit somewhere inside it by coincidence.  But I still think this whole line of thoughts is just off-base, and that you, Humbali, have not truly grasped the concept of a virtuous superforecaster or how they go looking for reference classes and base rates. **Humbali:**  I frankly think you’re just being unvirtuous.  Maybe you have some special model of AGI which claims that it’ll arrive in a different year or be arrived at by some very different pathway.  But is not Moravec’s estimate a sort of base rate which, to the extent you are properly and virtuously uncertain of your own models, you ought to *regress* in your own probability distributions over AI timelines?  As you become more uncertain about the exact amounts of knowledge required and what knowledge we’ll have when, shouldn’t you have an uncertain distribution about AGI arrival times that centers around Moravec’s base-rate prediction of 2010? For you to reject this anchor seems to reveal a grave lack of humility, since you must be very certain of whatever alternate estimation methods you are using in order to throw away this base-rate entirely. **Eliezer:**  Like I said, I think you’ve just failed to grasp the true way of a virtuous superforecaster.  Thinking a lot about Moravec’s so-called ‘base rate’ is just making you, in some sense, stupider; you need to cast your thoughts loose from there and try to navigate a wilder and less tamed space of possibilities, until they begin to gel and coalesce into narrower streams of probability.  Which, for AGI, they probably *won’t do* until we’re quite close to AGI, and start to guess correctly how AGI will get built; for it is easier to predict an eventual global pandemic than to say it will start in November of 2019.  Even in October of 2019 this cannot be done. **Humbali:**  Then all this uncertainty must somehow be quantified, if you are to be a virtuous Bayesian; and again, for lack of anything better, the resulting distribution should center on Moravec’s base-rate estimate of 2010. 
**Eliezer:**  No, that calculation is just basically not relevant here; and thinking about it is making you stupider, as your mind flails in the trackless wilderness grasping onto unanchored air.  Things must be ‘sufficiently similar’ to each other, in some sense, for us to get a base rate on one thing by looking at another thing.  Humans making an AGI is just too dissimilar to evolutionary biology making a human brain for us to anchor ‘how much computing power at the time it happens’ from one to the other.  It’s not the droid we’re looking for; and your attempt to build an inescapable epistemological trap about virtuously calling that a ‘base rate’ is not the Way. **Imaginary Moravec:**  If I can step back in here, I don’t think my calculation is zero evidence?  What we know from evolutionary biology is that a blind alien god with zero foresight accidentally mutated a chimp brain into a general intelligence.  I don’t want to knock biology’s work too much, there’s some impressive stuff in the retina, and the retina is just the part of the brain which is in some sense easiest to understand.  But surely there’s a very reasonable argument that 10 trillion ops/sec is about the amount of computation that evolutionary biology needed; and since evolution is stupid, when we ourselves have that much computation, it shouldn’t be *that* hard to figure out how to configure it. **Eliezer:**  If that was true, the same theory predicts that our current supercomputers should be doing a better job of matching the agility and vision of spiders.  When at some point there’s enough hardware that we figure out how to put it together into AGI, we could be doing it with less hardware than a human; we could be doing it with more; and we can’t even say that these two possibilities are *around equally probable* such that our probability distribution should have its median around 2010.  Your number is so bad and obtained by such bad means that we should just throw it out of our thinking and start over. **Humbali:**  This last line of reasoning seems to me to be particularly ludicrous, like you’re just throwing away the only base rate we have in favor of a confident assertion of our somehow being *more uncertain* than that. **Eliezer:**  Yeah, well, sorry to put it bluntly, Humbali, but you have not yet figured out how to turn your own computing power into intelligence.  – 1999 – --------- **Luke Muehlhauser reading a previous draft of this** (only sounding much more serious than this, because Luke Muehlhauser)**:**  You know, there was this certain teenaged futurist who made some of his own predictions about AI timelines – **Eliezer:**  I’d really rather not argue from that as a case in point.  I dislike people who screw up something themselves, and then argue like nobody else could possibly be more competent than they were.  I dislike even more people who change their mind about something when they turn 22, and then, for the rest of their lives, go around acting like they are now Very Mature Serious Adults who believe the thing that a Very Mature Serious Adult believes, so if you disagree with them about that thing they started believing at age 22, you must just need to wait to grow out of your extended childhood. **Luke Muehlhauser**(still being paraphrased)**:**  It seems like it ought to be acknowledged somehow. **Eliezer:**  That’s fair, yeah, I can see how someone might think it was relevant.  
I just dislike how it potentially creates the appearance of trying to slyly sneak in an Argument From Reckless Youth that I regard as not only invalid but also incredibly distasteful.  You don’t get to screw up yourself and then use that as an argument about how nobody else can do better. **Humbali:**  Uh, what’s the actual drama being subtweeted here? **Eliezer:**  A certain teenaged futurist, who, for example, said in 1999, “The most realistic estimate for a seed AI transcendence is 2020; nanowar, before 2015.” **Humbali:**  This young man must surely be possessed of some very deep character defect, which I worry will prove to be of the sort that people almost never truly outgrow except in the rarest cases.  Why, he’s not even putting a probability distribution over his mad soothsaying – how blatantly absurd can a person get? **Eliezer:**  Dear child ignorant of history, your complaint is far too anachronistic.  This is 1999 we’re talking about here; almost nobody is putting probability distributions on things, that element of your later subculture has not yet been introduced.  Eliezer-2002 hasn’t been sent a copy of “Judgment Under Uncertainty” by Emil Gilliam.  Eliezer-2006 hasn’t put his draft online for “Cognitive biases potentially affecting judgment of global risks”.  The Sequences won’t start until another year after that.  How would the forerunners of effective altruism *in 1999* know about putting probability distributions on forecasts?  I haven’t told them to do that yet!  We can give historical personages credit when they seem to somehow end up doing better than their surroundings would suggest; it is unreasonable to hold them to modern standards, or expect them to have finished refining those modern standards by the age of nineteen. Though there’s also a more subtle lesson you could learn, about how this young man turned out to still have a promising future ahead of him; which he retained at least in part by having a deliberate contempt for pretended dignity, allowing him to be plainly and simply wrong in a way that he noticed, without his having twisted himself up to avoid a prospect of embarrassment.  Instead of, for example, his evading such plain falsification by having dignifiedly wide Very Serious probability distributions centered on the same medians produced by the same basically bad thought processes. But that was too much of a digression, when I tried to write it up; maybe later I’ll post something separately. – 2004 or thereabouts – ----------------------- **Ray Kurzweil in 2001:**  I have [calculated](https://www.kurzweilai.net/the-law-of-accelerating-returns) that matching the intelligence of a human brain requires 2 \* 10^16 ops/sec\* and this will become available in a $1000 computer in 2023.  26 years after that, in 2049, a $1000 computer will have ten billion times more computing power than a human brain; and in 2059, that computer will cost one cent. (\*) Two TPU v4 pods. **Actual real-life Eliezer in Q&A, when Kurzweil says the same thing in a 2004(?) talk:**  It seems weird to me to forecast the arrival of “human-equivalent” AI, and then expect Moore’s Law to just continue on the same track past that point for thirty years.  Once we’ve got, in your terms, human-equivalent AIs, even if we don’t go beyond that in terms of intelligence, Moore’s Law will start speeding them up.  
Once AIs are thinking thousands of times faster than we are, wouldn’t that tend to break down the graph of Moore’s Law with respect to the objective wall-clock time of the Earth going around the Sun?  Because AIs would be able to spend thousands of *subjective*years working on new computing technology? **Actual Ray Kurzweil:**  The fact that AIs can do faster research is exactly what will enable Moore’s Law to continue on track. **Actual Eliezer (out loud):**  Thank you for answering my question. **Actual Eliezer (internally):**  Moore’s Law is a phenomenon produced by human cognition and the fact that human civilization runs off human cognition.  You can’t expect the surface phenomenon to continue unchanged after the deep causal phenomenon underlying it starts changing.  What kind of bizarre worship of graphs would lead somebody to think that the graphs were the primary phenomenon and would continue steady and unchanged when the forces underlying them changed massively?  I was hoping he’d be less nutty in person than in the book, but oh well. – 2006 or thereabouts – ----------------------- **Somebody on the Internet:**  I have calculated the number of computer operations used by evolution to evolve the human brain – searching through organisms with increasing brain size  – by adding up all the computations that were done by any brains before modern humans appeared.  It comes out to 10^43 computer operations.\*  AGI isn’t coming any time soon! (\*)  I forget the exact figure.  It was 10^40-something. **Eliezer, sighing:**  Another day, another biology-inspired timelines forecast.  This trick didn’t work when Moravec tried it, it’s not going to work while Ray Kurzweil is trying it, and it’s not going to work when you try it either.  It also didn’t work when a certain teenager tried it, but please entirely ignore that part; you’re at least allowed to do better than him. **Imaginary Somebody:**  Moravec’s prediction failed because he assumed that you could just magically take something with around as much hardware as the human brain and, poof, it would start being around that intelligent – **Eliezer:**  Yes, that is one way of viewing an invalidity in that argument.  Though you do Moravec a disservice if you imagine that he could only argue “It will magically emerge”, and could not give the more plausible-sounding argument “Human engineers are not that incompetent compared to biology, and will probably figure it out without more than one or two orders of magnitude of extra overhead.” **Somebody:**  But *I* am cleverer, for I have calculated the number of computing operations that was used to *create and design* biological intelligence, not just the number of computing operations required to *run it once created!* **Eliezer:**  And yet, because your reasoning contains the word “biological”, it is just as invalid and unhelpful as Moravec’s original prediction. **Somebody:**  I don’t see why you dismiss my biological argument about timelines on the basis of Moravec having been wrong.  He made one basic mistake – neglecting to take into effect the cost to generate intelligence, not just to run it.  I have corrected this mistake, and now my own effort to do biologically inspired timeline forecasting should work fine, and must be evaluated on its own merits, *de novo*. **Eliezer:**  It is true indeed that sometimes a line of inference is doing just one thing wrong, and works fine after being corrected.  
And because this is true, it is often indeed wise to reevaluate new arguments on their own merits, if that is how they present themselves.  One may not take the past failure of a different argument or three, and try to hang it onto the new argument like an inescapable iron ball chained to its leg.  It might be the cause for defeasible skepticism, but not invincible skepticism. That said, on my view, you are making a nearly identical mistake as Moravec, and so his failure remains relevant to the question of whether you are engaging in a kind of thought that binds well to Reality. **Somebody:**  And that mistake is just mentioning the word “biology”? **Eliezer:**  The problem is that *the resource gets consumed differently, so base-rate arguments from resource consumption end up utterly unhelpful in real life.*  The human brain consumes around 20 watts of power.  Can we thereby conclude that an AGI should consume around 20 watts of power, and that, when technology advances to the point of being able to supply around 20 watts of power to computers, we’ll get AGI? **Somebody:**  That’s absurd, of course.  So, what, you compare my argument to an absurd argument, and from this dismiss it? **Eliezer:**  I’m saying that Moravec’s “argument from comparable resource consumption” must be in general [invalid](https://www.lesswrong.com/posts/WQFioaudEH8R7fyhm/local-validity-as-a-key-to-sanity-and-civilization), because it [Proves Too Much](https://www.lesswrong.com/posts/G5eMM3Wp3hbCuKKPE/proving-too-much).  If it’s in general valid to reason about comparable resource consumption, then it should be equally valid to reason from energy consumed as from computation consumed, and pick energy consumption instead to call the basis of your median estimate. You say that AIs consume energy in a very different way from brains?  Well, they’ll also consume computations in a very different way from brains!  The only difference between these two cases is that you *know* something about how humans eat food and break it down in their stomachs and convert it into ATP that gets consumed by neurons to pump ions back out of dendrites and axons, while computer chips consume electricity whose flow gets interrupted by transistors to transmit information.  Since you *know anything whatsoever* about how AGIs and humans consume energy, you can *see* that the consumption is so vastly different as to obviate all comparisons entirely. You are *ignorant* of how the brain consumes computation, you are *ignorant* of how the first AGIs built would consume computation, but “an unknown key does not open an unknown lock” and these two ignorant distributions should not assert much internal correlation between them. Even without knowing the specifics of how brains and future AGIs consume computing operations, you ought to be able to reason abstractly about a directional update that you *would* make, if you knew *any* specifics instead of none.  If you did know how both kinds of entity consumed computations, if you knew about specific machinery for human brains, and specific machinery for AGIs, you’d then be able to see the enormous vast specific differences between them, and go, “Wow, what a futile resource-consumption comparison to try to use for forecasting.” (Though I say this without much hope; I have not had very much luck in telling people about predictable directional updates they would make, if they knew something instead of nothing about a subject.  
I think it’s probably too abstract for most people to feel in their gut, or something like that, so their brain ignores it and moves on in the end.  I have had life experience with learning more about a thing, updating, and then going to myself, “Wow, I should’ve been able to predict in retrospect that learning almost *any* specific fact would move my opinions in that same direction.”  But I worry this is not a common experience, for it involves a real experience of discovery, and preferably more than one to get the generalization.) **Somebody:**  All of that seems irrelevant to my novel and different argument.  I am not foolishly estimating the resources consumed by a single brain; I’m estimating the resources consumed by evolutionary biology to *invent* brains! **Eliezer:**  And the humans wracking their own brains and inventing new AI program architectures and deploying those AI program architectures to themselves learn, will consume computations so *utterly differently* from evolution that there is no point comparing those consumptions of resources.  That is the flaw that you share exactly with Moravec, and that is why I say the same of both of you, “This is a kind of thinking that fails to bind upon reality, it doesn’t work in real life.”  I don’t care how much painstaking work you put into your estimate of 10^43 computations performed by biology.  It’s just not a relevant fact. **Humbali:**  But surely this estimate of 10^43 cumulative operations can at least be used to establish a base rate for anchoring our – **Eliezer:**  Oh, for god’s sake, shut up.  At least Somebody is only wrong on the object level, and isn’t trying to build an inescapable epistemological trap by which his ideas must still hang in the air like an eternal stench even after they’ve been counterargued.  Isn’t ‘but muh base rates’ what your viewpoint would’ve also said about Moravec’s 2010 estimate, back when that number still looked plausible? **Humbali:**  Of course it is evident to me now that my youthful enthusiasm was mistaken; obviously I tried to estimate the wrong figure.  As Somebody argues, we should have been estimating the biological computations used to *design* human intelligence, not the computations used to *run* it. I see, now, that I was using the wrong figure as my base rate, leading my base rate to be wildly wrong, and even irrelevant; but now that I’ve seen this, the clear error in my previous reasoning, I have a *new*base rate.  This doesn’t seem obviously to me likely to contain the same kind of wildly invalidating enormous error as before.  What, is Reality just going to yell “Gotcha!” at me again?  And even the prospect of some new unknown error, which is just as likely to be in either possible direction, implies only that we should widen our credible intervals while keeping them centered on a median of 10^43 operations – **Eliezer:**  Please stop.  This trick just never works, at all, deal with it and get over it.  Every second of attention that you pay to the 10^43 number is making you stupider.  You might as well reason that 20 watts is a base rate for how much energy the first generally intelligent computing machine should consume. – 2020 – -------- **OpenPhil:**  We have commissioned a Very Serious report on a biologically inspired estimate of how much computation will be required to achieve Artificial General Intelligence, for purposes of forecasting an AGI timeline.  
([Summary of report.](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines?commentId=7d4q79ntst6ryaxWD))  ([Full draft of report.)](https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP)  Our leadership takes this report Very Seriously. **Eliezer:**  Oh, hi there, new kids.  Your grandpa is feeling kind of tired now and can’t debate this again with as much energy as when he was younger. **Imaginary OpenPhil:**  You’re not *that* much older than us. **Eliezer:**  Not by biological wall-clock time, I suppose, but – **OpenPhil:**  You think thousands of times faster than us? **Eliezer:**  I wasn’t going to say it if you weren’t. **OpenPhil:**  We object to your assertion on the grounds that it is false. **Eliezer:**  I was actually going to say, you might be underestimating how long I’ve been walking this endless battlefield because I started *really quite young*. I mean, sure, I didn’t read *Mind Children* when it came out in 1988.  I only read it four years later, when I was twelve.  And sure, I didn’t immediately afterwards start writing online about Moore’s Law and strong AI; I did not immediately contribute my own salvos and sallies to the war; I was not yet a noticed voice in the debate.  I only got started on that at age sixteen.  I’d like to be able to say that in 1999 I was just a random teenager being reckless, but in fact I was already being invited to dignified online colloquia about the “Singularity” and mentioned in printed books; when I was being wrong back then I was already doing so in the capacity of a minor public intellectual on the topic. This is, as I understand normie ways, relatively young, and is probably worth an extra decade tacked onto my biological age; you should imagine me as being 52 instead of 42 as I write this, with a correspondingly greater number of visible gray hairs. A few years later – though still before your time – there was the Accelerating Change Foundation, and Ray Kurzweil spending literally millions of dollars to push Moore’s Law graphs of technological progress as *the* central story about the future.  I mean, I’m sure that a few million dollars sounds like peanuts to OpenPhil, but if your own annual budget was a hundred thousand dollars or so, that’s a hell of a megaphone to compete with. If you are currently able to conceptualize the Future as being about something *other* than nicely measurable metrics of progress in various tech industries, being projected out to where they will inevitably deliver us nice things – that’s at least partially because of a battle fought years earlier, in which I was a primary fighter, creating a conceptual atmosphere you now take for granted.  A mental world where threshold levels of AI ability are considered potentially interesting and transformative – rather than milestones of new technological luxuries to be checked off on an otherwise invariant graph of Moore’s Laws as they deliver flying cars, space travel, lifespan-extension escape velocity, and other such goodies on an equal level of interestingness.  I have earned at least a *little* right to call myself your grandpa. And that kind of experience has a sort of compounded interest, where, once you’ve lived something yourself and participated in it, you can learn more from reading other histories about it.  The histories become more real to you once you’ve fought your own battles.  
The fact that I’ve lived through timeline errors in person gives me a sense of how it actually feels to be around at the time, watching people sincerely argue Very Serious erroneous forecasts.  That experience lets me really and actually [update on the history](https://www.lesswrong.com/posts/TLKPj4GDXetZuPDH5/making-history-available) of the earlier mistaken timelines from before I was around; instead of the histories just seeming like a kind of fictional novel to read about, disconnected from reality and not happening to real people. And now, indeed, I’m feeling a bit old and tired for reading yet another report like yours in full attentive detail.  Does it by any chance say that AGI is due in about 30 years from now? **OpenPhil:**  Our report has very wide credible intervals around both sides of its median, as we analyze the problem from a number of different angles and show how they lead to different estimates – **Eliezer:**  Unfortunately, the thing about figuring out five different ways to guess the effective IQ of the smartest people on Earth, and having three different ways to estimate the minimum IQ to destroy lesser systems such that you could extrapolate a minimum IQ to destroy the whole Earth, and putting wide credible intervals around all those numbers, and combining and mixing the probability distributions to get a new probability distribution, is that, at the end of all that, you are still left with a load of nonsense.  Doing a fundamentally wrong thing in several different ways will not save you, though I suppose if you spread your bets widely enough, one of them may be right by coincidence. So does the report by any chance say – with however many caveats and however elaborate the probabilistic methods and alternative analyses – that AGI is probably due in about 30 years from now? **OpenPhil:**  Yes, in fact, our 2020 report’s median estimate is 2050; though, again, with very wide credible intervals around both sides.  Is that number significant? **Eliezer:**  It’s a law generalized by Charles Platt, that any AI forecast will put strong AI thirty years out from when the forecast is made.  Vernor Vinge referenced it in the body of his famous 1993 NASA speech, whose abstract begins, “Within thirty years, we will have the technological means to create superhuman intelligence.  Shortly after, the human era will be ended.” After I was old enough to be more skeptical of timelines myself, I used to wonder how Vinge had pulled out the “within thirty years” part.  This may have gone over my head at the time, but rereading again today, I conjecture Vinge may have chosen the headline figure of thirty years as a deliberately self-deprecating reference to Charles Platt’s generalization about such forecasts always being thirty years from the time they’re made, which Vinge explicitly cites later in the speech. Or to put it another way:  I conjecture that to the audience of the time, already familiar with some previously-made forecasts about strong AI, the impact of the abstract is meant to be, “Never mind predicting strong AI in thirty years, you should be predicting *superintelligence* in thirty years, which matters a lot more.”  But the minds of authors are scarcely more knowable than the Future, if they have not explicitly told us what they were thinking; so you’d have to ask Professor Vinge, and hope he remembers what he was thinking back then. **OpenPhil:**  Superintelligence before 2023, huh?  I suppose Vinge still has two years left to go before that’s falsified. 
**Eliezer:**  Also in the body of the speech, Vinge says, “I’ll be surprised if this event occurs before 2005 or after 2030,” which sounds like a more serious and sensible way of phrasing an estimate.  I think that should supersede the probably Platt-inspired headline figure for what we think of as Vinge’s 1993 prediction.  The jury’s still out on whether Vinge will have made a good call. Oh, and sorry if grandpa is boring you with all this history from the times before you were around.  I mean, I didn’t actually attend Vinge’s famous NASA speech when it happened, what with being thirteen years old at the time, but I sure did read it later.  Once it was digitized and put online, it was all over the Internet.  Well, all over certain parts of the Internet, anyways.  Which nerdy parts constituted a much larger fraction of the whole, back when the World Wide Web was just starting to take off among early adopters. But, yeah, the new kids showing up with some graphs of Moore’s Law and calculations about biology and an earnest estimate of strong AI being thirty years out from the time of the report is, uh, well, it’s… historically precedented. **OpenPhil:**  That part about Charles Platt’s generalization is interesting, but just because we unwittingly chose literally exactly the median that Platt predicted people would always choose in consistent error, that doesn’t justify dismissing our work, right?  We could have used a completely valid method of estimation which would have pointed to 2050 no matter which year it was tried in, and, by sheer coincidence, have first written that up in 2020.  In fact, we try to show in the report that the same methodology, evaluated in earlier years, would also have pointed to around 2050 – **Eliezer:**  Look, people keep trying this.  It’s never worked.  It’s never going to work.  Two years before the end of the world, there’ll be another published biologically inspired estimate showing that AGI is 30 years away and it will be exactly as informative then as it is now.  I’d love to know the timelines too, but you’re not *going* to get the answer you want until right before the end of the world, and maybe not even then unless you’re paying very close attention.  *Timing this stuff is just plain hard.* **OpenPhil:**  But our report is different, and our methodology for biologically inspired estimates is wiser and less naive than that of those who came before. **Eliezer:**  That’s what the last guy said, but go on. **OpenPhil:**  First, we carefully estimate a range of possible figures for the equivalent of neural-network parameters needed to emulate a human brain.  Then, we estimate how many examples would be required to train a neural net with that many parameters.  Then, we estimate the total computational cost of that many training runs.  Moore’s Law then gives us 2050 as our median time estimate, given what we think are the *most* likely underlying assumptions, though we do analyze it several different ways. **Eliezer:**  This is almost exactly what the last guy tried, except you’re using network parameters instead of computing ops, and deep learning training runs instead of biological evolution. **OpenPhil:**  Yes, so we’ve corrected his mistake of estimating the wrong biological quantity and now we’re good, right? **Eliezer:**  That’s what the last guy thought *he’d* done about *Moravec’s* mistaken estimation target.  
And neither he nor Moravec would have made much headway on their underlying mistakes, by doing a probabilistic analysis of that same wrong question from multiple angles. **OpenPhil:**  Look, sometimes more than one person makes a mistake, over historical time.  It doesn’t mean nobody can ever get it right.  You of all people should agree. **Eliezer:**  I do so agree, but that doesn’t mean I agree you’ve *fixed* the mistake.  I think the methodology itself is bad, not just its choice of which biological parameter to estimate.  Look, do you understand *why* the evolution-inspired estimate of 10^43 ops was completely ludicrous; and the claim that it was equally likely to be mistaken in either direction, even more ludicrous? **OpenPhil:**  Because AGI isn’t like biology, and in particular, will be trained using gradient descent instead of evolutionary search, which is cheaper.  We do note inside our report that this is a key assumption, and that, if it fails, the estimate might be correspondingly wrong – **Eliezer:**  But then you claim that mistakes are equally likely in both directions and so your unstable estimate is a good median.  Can you see why the previous evolutionary estimate of 10^43 cumulative ops was not, in fact, *equally likely to be wrong in either direction?*  That it was, predictably, a directional *overestimate?* **OpenPhil:**  Well, search by evolutionary biology is more costly than training by gradient descent, so in hindsight, it was an overestimate.  Are you claiming this was predictable in foresight instead of hindsight? **Eliezer:**  I’m claiming that, at the time, I snorted and tossed Somebody’s figure out the window while thinking it was ridiculously huge and absurd, yes. **OpenPhil:**  Because you’d already foreseen in 2006 that gradient descent would be the method of choice for training future AIs, rather than genetic algorithms? **Eliezer:**  Ha!  No.  Because it was an insanely costly hypothetical approach whose main point of appeal, to the sort of person who believed in it, was that it didn’t require having any idea whatsoever of what you were doing or how to design a mind. **OpenPhil:**  Suppose one were to reply:  “Somebody” *didn’t* know better-than-evolutionary methods for designing a mind, just as we currently don’t know better methods than gradient descent for designing a mind; and hence Somebody’s estimate was the best estimate at the time, just as ours is the best estimate now? **Eliezer:**  Unless you were one of a small handful of leading neural-net researchers who knew a few years ahead of the world where scientific progress was heading – who knew a Thielian ‘secret’ before finding evidence strong enough to convince the less foresightful – you couldn’t have called the jump specifically to *gradient descent* rather than any other technique.  “I don’t know any more computationally efficient way to produce a mind than *re-evolving* the cognitive history of all life on Earth” transitioning over time to “I don’t know any more computationally efficient way to produce a mind than *gradient descent* over entire brain-sized models” is not predictable in the specific part about “gradient descent” – not unless you know a Thielian secret. But knowledge is a ratchet that usually only turns one way, so it’s predictable that the current story changes to *somewhere* over future time, in a net expected direction.  Let’s consider the technique currently known as mixture-of-experts (MoE), for training smaller nets in pieces and muxing them together.  
It’s not my mainline prediction that MoE actually goes anywhere – if I thought MoE was actually promising, I wouldn’t call attention to it, of course!  I don’t want to *make* timelines shorter, that is not a service to Earth, not a good sacrifice in the cause of winning an Internet argument. But if I’m wrong and MoE is not a dead end, that technique serves as an easily-visualizable case in point.  If that’s a fruitful avenue, the technique currently known as “mixture-of-experts” will mature further over time, and future deep learning engineers will be able to further perfect the art of training *slices of brains* using gradient descent and fewer examples, instead of training *entire brains* using gradient descent and lots of examples. Or, more likely, it’s not MoE that forms the next little trend.  But there is going to be *something,* especially if we’re sitting around waiting until 2050.  Three decades is enough time for some *big* paradigm shifts in an intensively researched field.  Maybe we’d end up using neural net tech very similar to today’s tech if the world ends in 2025, but in that case, of course, your prediction must have failed somewhere else. The three components of AGI arrival times are available hardware, which increases over time in an easily graphed way; available knowledge, which increases over time in a way that’s much harder to graph; and hardware required at a given level of specific knowledge, a huge multidimensional unknown background parameter.  The fact that you have no idea how to graph the increase of knowledge – or measure it in any way that is less completely silly than “number of science papers published” or whatever such gameable metric – doesn’t change the point that this *is* a predictable fact about the future; there *will* be more knowledge later, the more time that passes, and that will *directionally* change the expense of the currently least expensive way of doing things. **OpenPhil:**  We did already consider that and try to take it into account: our model already includes a parameter for how algorithmic progress reduces hardware requirements.  It’s not easy to graph as exactly as Moore’s Law, as you say, but our best-guess estimate is that compute costs halve every 2-3 years. **Eliezer:**  Oh, nice.  I was wondering what sort of tunable underdetermined parameters enabled your model to nail the psychologically overdetermined final figure of ’30 years’ so exactly. **OpenPhil:**  Eliezer. **Eliezer:**  Think of this in an economic sense: people don’t buy where goods are most expensive and delivered latest, they buy where goods are cheapest and delivered earliest.  Deep learning researchers are not like an inanimate chunk of ice tumbling through intergalactic space in its unchanging direction of previous motion; they are economic agents who look around for ways to destroy the world faster and more cheaply than the way that you imagine as the default.  They are more eager than you are to think of more creative paths to get to the next milestone faster. **OpenPhil:**  Isn’t this desire for cheaper methods exactly what our model already accounts for, by modeling algorithmic progress? **Eliezer:**  The makers of AGI aren’t going to be doing 10,000,000,000,000 rounds of gradient descent, on entire brain-sized 300,000,000,000,000-parameter models, *algorithmically faster than today.*  They’re going to get to AGI via some route that *you don’t know how to take,* at least if it happens in 2040.  
If it happens in 2025, it may be via a route that some modern researchers do know how to take, but in this case, of course, your model was also wrong. They’re not going to be taking your default-imagined approach *algorithmically faster,* they’re going to be taking an *algorithmically different approach* that eats computing power in a different way than you imagine it being consumed. **OpenPhil:**  Shouldn’t that just be folded into our estimate of how the computation required to accomplish a fixed task decreases by half every 2-3 years due to better algorithms? **Eliezer:**  Backtesting this viewpoint on the previous history of computer science, it seems to me to assert that it should be possible to: * Train a pre-Transformer RNN/CNN-based model, not using any other techniques invented after 2017, to GPT-2 levels of performance, using only around 2x as much compute as GPT-2; * Play pro-level Go using 8-16 times as much computing power as AlphaGo, but only 2006 levels of technology. For reference, recall that in 2006, Hinton and Salakhutdinov were just starting to publish that, by training multiple layers of Restricted Boltzmann machines and then unrolling them into a “deep” neural network, you could get an initialization for the network weights that would avoid the problem of vanishing and exploding gradients and activations.  At least so long as you didn’t try to stack too many layers, like a dozen layers or something ridiculous like that.  This being the point that kicked off the entire deep-learning revolution. Your model apparently suggests that we have gotten around 50 times more efficient at turning computation into intelligence since that time; so, we should be able to replicate any modern feat of deep learning performed in 2021, using techniques from before deep learning and around fifty times as much computing power. **OpenPhil:**  No, that’s totally not what our viewpoint says when you backfit it to past reality.  Our model does a great job of retrodicting past reality. **Eliezer:**  How so? **OpenPhil:**  <Eliezer cannot predict what they will say here.> **Eliezer:**  I’m not convinced by this argument. **OpenPhil:**  We didn’t think you would be; you’re sort of predictable that way. **Eliezer:**  Well, yes, if I’d predicted I’d update from hearing your argument, I would’ve updated already.  I may not be a real Bayesian but I’m not *that* incoherent. But I can guess in advance at the outline of my reply, and my guess is this: “Look, when people come to me with models claiming the future is predictable enough for timing, I find that their viewpoints seem to me like they would have made garbage predictions if I actually had to operate them in the past *without benefit of hindsight*.  Sure, with benefit of hindsight, you can look over a thousand possible trends and invent rules of prediction and event timing that nobody *in the past* actually spotlighted *then*, and claim that things happened on trend.  I was around at the time and I do not recall people actually predicting the shape of AI in the year 2020 in advance.  I don’t think they were just being stupid either. “In a conceivable future where people are still alive and reasoning as modern humans do in 2040, somebody will no doubt look back and claim that everything happened on trend since 2020; but *which*trend the hindsighter will pick out is not predictable to us in advance. 
“It may be, of course, that I simply don’t understand how to operate your viewpoint, nor how to apply it to the past or present or future; and that yours is a sort of viewpoint which indeed permits saying only one thing, and not another; and that this viewpoint would have predicted the past wonderfully, even without any benefit of hindsight.  But there is also that less charitable viewpoint which suspects that somebody’s theory of ‘A coinflip always comes up heads on occasions X’ contains some informal parameters which can be argued about which occasions exactly ‘X’ describes, and that the operation of these informal parameters is a bit influenced by one’s knowledge of whether a past coinflip actually came up heads or not. “As somebody who doesn’t start from the assumption that your viewpoint is a good fit to the past, I still don’t see how a good fit to the past could’ve been extracted from it without benefit of hindsight.” **OpenPhil:**  That’s a pretty general counterargument, and like any pretty general counterargument it’s a blade you should try turning against yourself.  Why doesn’t your own viewpoint horribly mispredict the past, and say that all estimates of AGI arrival times are predictably net underestimates?  If we imagine trying to operate your own viewpoint in 1988, we imagine going to Moravec and saying, “Your estimate of how much computing power it takes to match a human brain is predictably an overestimate, because engineers will find a better way to do it than biology, so we should expect AGI sooner than 2010.” **Eliezer:**  I *did* tell Imaginary Moravec that his estimate of the minimum computation required for human-equivalent general intelligence was predictably an overestimate; that was right there in the dialogue before I even got around to writing this part.  And I also, albeit with benefit of hindsight, told Moravec that both of these estimates were useless for timing the future, because they skipped over the questions of how much knowledge you’d need to make an AGI with a given amount of computing power, how fast knowledge was progressing, and the actual timing determined by the rising hardware line touching the falling hardware-required line. **OpenPhil:**  We don’t see how to operate your viewpoint to say *in advance* to Moravec, before his prediction has been falsified, “Your estimate is plainly a garbage estimate” instead of “Your estimate is obviously a directional underestimate”, especially since you seem to be saying the latter to *us, now.* **Eliezer:**  That’s not a critique I give zero weight.  And, I mean, as a kid, I was in fact talking like, “To heck with that hardware estimate, let’s at least try to get it done before then.  People are dying for lack of superintelligence; let’s aim for 2005.”  I had a T-shirt spraypainted “Singularity 2005” at a science fiction convention, it’s rather crude but I think it’s still in my closet somewhere. But now I am older and wiser and have fixed all my past mistakes, so the critique of those past mistakes no longer applies to my new arguments. **OpenPhil:**  Uh huh. **Eliezer:**  I mean, I did try to fix all the mistakes that I knew about, and didn’t just, like, leave those mistakes in forever?  I realize that this claim to be able to “learn from experience” is not standard human behavior in situations like this, but if you’ve got to be weird, that’s a good place to spend your weirdness points.  
At least by my own lights, I am now making a different argument than I made when I was nineteen years old, and that different argument should be considered differently. And, yes, I also think my nineteen-year-old self was not completely foolish at least about AI timelines; in the sense that, for all he knew, maybe you *could* build AGI by 2005 if you tried really hard over the next 6 years.  Not so much because Moravec’s estimate should’ve been seen as a predictable overestimate of how much computing power would actually be needed, given knowledge that would become available in the next 6 years; but because Moravec’s estimate should’ve been seen as *almost entirely irrelevant,* making the correct answer be “I don’t know.” **OpenPhil:**  It seems to us that Moravec’s estimate, and the guess of your nineteen-year-old past self, are *both* predictably vast underestimates.  Estimating the computation consumed by one brain, and calling that your AGI target date, is obviously predictably a vast underestimate because it neglects the computation required for *training* a brainlike system.  It may be a bit uncharitable, but we suggest that Moravec and your nineteen-year-old self may both have been motivatedly credulous, to not notice a gap so very obvious. **Eliezer:**  I could imagine it seeming that way if you’d grown up never learning about any AI techniques except deep learning, which had, in your wordless mental world, always been the way things were, and would always be that way forever. I mean, it could be that deep learning *will* still be the bleeding-edge method of Artificial Intelligence right up until the end of the world.  But if so, it’ll be because Vinge was right and the world ended before 2030, *not* because the deep learning paradigm was as good as any AI paradigm can ever get.  That is simply not a kind of thing that I expect Reality to say “Gotcha” to me about, any more than I expect to be told that the human brain, whose neurons and synapses are 500,000 times further away from the thermodynamic efficiency wall than ATP synthase, is the most efficient possible consumer of computations. The specific perspective-taking operation needed here – when it comes to what was and wasn’t obvious in 1988 or 1999 – is that the notion of spending thousands and millions and billions of times as much computation on a “training” phase, as on an “inference” phase, is something that only came to be seen as Always Necessary after the deep learning revolution took over AI in the late Noughties.  Back when Moravec was writing, you programmed a game-tree-search algorithm for chess, and then you ran that code, and it played chess.  Maybe you needed to add an opening book, or do a lot of trial runs to tweak the exact values the position evaluation function assigned to knights vs. bishops, but most AIs weren’t neural nets and didn’t get trained on enormous TPU pods. Moravec had no way of knowing that the paradigm in AI would, twenty years later, massively shift to a new paradigm in which stuff got trained on enormous TPU pods.  He lived in a world where you could only train neural networks a few layers deep, like, three layers, and the gradients vanished or exploded if you tried to train networks any deeper. To be clear, in 1999, I did think of AGIs as needing to do a lot of learning; but I expected them to be learning while thinking, not to learn in a separate gradient descent phase. **OpenPhil:**  How could anybody possibly miss anything so obvious?  
There are so many basic technical ideas and even *philosophical ideas about how you do AI* which make it supremely obvious that the best and only way to turn computation into intelligence is to have deep nets, lots of parameters, and enormous separate training phases on TPU pods. **Eliezer:**  Yes, well, see, those philosophical ideas were not as prominent in 1988, which is why the direction of the future paradigm shift was not *predictable in advance without benefit of hindsight,* let alone timeable to 2006. You’re also probably overestimating how much those philosophical ideas would pinpoint the modern paradigm of gradient descent even if you had accepted them wholeheartedly, in 1988.  Or let’s consider, say, October 2006, when the Netflix Prize was being run – a watershed occasion where lots of programmers around the world tried their hand at minimizing a loss function, based on a huge-for-the-times ‘training set’ that had been publicly released, scored on a holdout ‘test set’.  You could say it was the first moment in the limelight for the sort of problem setup that everybody now takes for granted with ML research: a widely shared dataset, a heldout test set, a loss function to be minimized, prestige for advancing the ‘state of the art’.  And it was a million dollars, which, back in 2006, was big money for a machine learning prize, garnering lots of interest from competent competitors. Before deep learning, “statistical learning” was indeed a banner often carried by the early advocates of the view that Richard Sutton now calls the Bitter Lesson, along the lines of “complicated programming of human ideas doesn’t work, you have to just learn from massive amounts of data”. But before deep learning – which was barely getting started in 2006 – “statistical learning” methods that took in massive amounts of data, did not use those massive amounts of data to train neural networks by stochastic gradient descent across millions of examples!  In 2007, [the winning submission to the Netflix Prize](https://www.netflixprize.com/assets/ProgressPrize2007_KorBell.pdf) was an ensemble predictor that incorporated k-Nearest-Neighbor, a factorization method that repeatedly globally minimized squared error, two-layer Restricted Boltzmann Machines, and a regression model akin to Principal Components Analysis.  Which is all 100% statistical learning driven by relatively-big-for-the-time “big data”, and 0% GOFAI.  But these methods didn’t involve enormous massive training phases in the modern sense. Back then, if you were doing stochastic gradient descent at all, you were doing it on a much smaller neural network.  Not so much because you couldn’t afford more compute for a larger neural network, but because wider neural networks didn’t help you much and deeper neural networks simply didn’t work. Bleeding-edge statistical learning techniques as late as 2007, to make actual use of big data, had to find other ways to make use of huge amounts of data than gradient descent and backpropagation.  Though, I mean, not huge amounts of data by modern standards.  The winning submission to the Netflix Prize used an ensemble of 107 models – that’s not a misprint for 10^7, I actually mean 107 – which models were drawn from half a dozen different model classes, then proliferated with slightly different parameters, averaged together to reduce statistical noise. 
A modern kid, perhaps, looks at this and thinks:  “If you can afford the compute to train 107 models, why not just train one larger model?”  But back then, you see, there just *wasn’t* a standard way to dump massively more compute into something, and get better results back out.  The fact that they had 107 differently parameterized models from a half-dozen families averaged together to reduce noise, was about as well as anyone could do in 2007, at putting more effort in and getting better results back out. **OpenPhil:**  How quaint and archaic!  But that was 13 years ago, before time actually got started and history actually started happening in real life.  *Now* we’ve got the paradigm which will actually be used to create AGI, in all probability; so estimation methods centered on that paradigm should be valid. **Eliezer:**  The current paradigm is definitely not the end of the line in principle.  I guarantee you that the way superintelligences build cognitive engines is not by training enormous neural networks using gradient descent.  Gua-ran-tee it. The fact that you think you now see a path to AGI, is because today – unlike in 2006 – you have a paradigm that is seemingly willing to entertain having more and more food stuffed down its throat without obvious limit (yet).  This is really a quite recent paradigm shift, though, and it is probably not the most efficient possible way to consume more and more food. You could rather strongly guess, early on, that support vector machines were never going to give you AGI, *because* you couldn’t dump more and more compute into training or running SVMs and get arbitrarily better answers; whatever gave you AGI would have to be something else that could eat more compute productively. Similarly, since the path through genetic algorithms and recapitulating the whole evolutionary history would have taken a *lot* of compute, it’s no wonder that other, more efficient methods of eating compute were developed before then; it was obvious in advance that they must exist, for all that some what-iffed otherwise. To be clear, it is certain the world will end by more inefficient methods than those that superintelligences would use; since, if superintelligences are making their own AI systems, then the world has already ended. And it is possible, even, that the world will end by a method as inefficient as gradient descent.  But if so, that will be because the world ended too soon for any more efficient paradigm to be developed.  Which, on my model, means the world probably ended before say 2040(???).  But of course, compared to how much I think I know about what must be more efficiently doable in principle, I think I know far less about the speed of accumulation of real knowledge (not to be confused with proliferation of publications), or how various random-to-me social phenomena could influence the speed of knowledge.  So I think I have far less ability to say a confident thing about the *timing* of the next paradigm shift in AI, compared to the *existence and eventuality* of such paradigms in the space of possibilities. **OpenPhil:**  But if you expect the next paradigm shift to happen in around 2040, shouldn’t you confidently predict that AGI has to arrive *after* 2040, because, without that paradigm shift, we’d have to produce AGI using deep learning paradigms, and in that case our own calculation would apply saying that 2040 is relatively early? 
**Eliezer:**  No, because I’d consider, say, improved mixture-of-experts techniques that actually work, to be very much *within*the deep learning paradigm; and even a relatively small paradigm shift like that would obviate your calculations, if it produced a more drastic speedup than halving the computational cost over two years. More importantly, I simply don’t believe in your attempt to calculate a figure of 10,000,000,000,000,000 operations per second for a brain-equivalent deepnet based on biological analogies, or your figure of 10,000,000,000,000 training updates for it.  I simply don’t believe in it at all.  I don’t think it’s a valid anchor.  I don’t think it should be used as the median point of a wide uncertain distribution.  The first-developed AGI will consume computation in a different fashion, much as it eats energy in a different fashion; and “how much computation an AGI needs to eat compared to a human brain” and “how many watts an AGI needs to eat compared to a human brain” are equally always decreasing with the technology and science of the day. **OpenPhil:**  Doesn’t our calculation at least provide a soft *upper bound* on how much computation is required to produce human-level intelligence?  If a calculation is able to produce an upper bound on a variable, how can it be uninformative about that variable? **Eliezer:**  You assume that the architecture you’re describing can, in fact, work at all to produce human intelligence.  This itself strikes me as not only tentative but probably false.  I mostly suspect that if you take the exact GPT architecture, [scale it up](https://www.reddit.com/r/ProgrammerHumor/comments/8c1i45/stack_more_layers/) to what you calculate as human-sized, and start training it using current gradient descent techniques… what mostly happens is that it saturates and asymptotes its loss function at not very far beyond the GPT-3 level – say, it behaves like GPT-4 would, but not much better. This is what should have been told to Moravec:  “Sorry, even if your biology is correct, the assumption that future people can put in X amount of compute and get out Y result is not something you really know.”  And that point did in fact just completely trash his ability to predict and time the future. The same must be said to you.  Your model contains supposedly known parameters, “how much computation an AGI must eat per second, and how many parameters must be in the trainable model for that, and how many examples are needed to train those parameters”.  Relative to whatever method is actually first used to produce AGI, I expect your estimates to be wildly inapplicable, as wrong as Moravec was about thinking in terms of just using one supercomputer powerful enough to be a brain.  Your parameter estimates may not be about properties that the first successful AGI design even *has.*  Why, what if it contains a significant component that *isn’t a neural network?*  I realize this may be scarcely conceivable to somebody from the present generation, but the world was not always as it was now, and it will change if it does not end. **OpenPhil:**  I don’t understand how some of your reasoning could be internally consistent even on its own terms.  
If, according to you, our 2050 estimate doesn’t provide a soft upper bound on AGI arrival times – or rather, if our 2050-centered probability distribution isn’t a soft upper bound on reasonable AGI arrival probability distributions – then I don’t see how you can claim that the 2050-centered distribution is predictably a directional overestimate. You can *either* say that our forecasted pathway to AGI or something very much like it would *probably work in principle without requiring very much more computation* than our uncertain model components take into account, meaning that the probability distribution provides a soft upper bound on reasonably-estimable arrival times, *but that paradigm shifts will predictably provide an even faster way to do it before then.*  That is, you could say that our estimate is both a soft upper bound and also a directional overestimate.  Or, you could say that our ignorance of how to create AI will consume *more* than one order-of-magnitude of increased computation cost above biology – **Eliezer:**  Indeed, much as your whole proposal would supposedly cost ten trillion times the equivalent computation of the single human brain that earlier biologically-inspired estimates anchored on. **OpenPhil:**  – in which case our 2050-centered distribution is not a good soft upper bound, but *also* not predictably a directional overestimate.  Don’t you have to pick one or the other as a critique, there? **Eliezer:**  Mmm… there’s some justice to that, now that I’ve come to write out this part of the dialogue.  Okay, let me revise my earlier stated opinion:  I think that your biological estimate is a trick that never works and, *on its own terms,* would tell us very little about AGI arrival times at all.  *Separately,* I think from my own model that your timeline distributions happen to be too long. **OpenPhil:**  *Eliezer.* **Eliezer:**  I mean, in fact, part of my actual sense of indignation at this whole affair, is the way that Platt’s law of strong AI forecasts – which was *in the 1980s* generalizing “thirty years” as the time that ends up sounding “reasonable” to would-be forecasters – is *still* exactly in effect for what ends up sounding “reasonable” to would-be futurists, *in fricking 2020* while the air is filling up with AI smoke in [the silence of nonexistent fire alarms](https://intelligence.org/2017/10/13/fire-alarm/). But to put this in terms that maybe possibly you’d find persuasive: The last paradigm shifts were from “write a chess program that searches a search tree and run it, and that’s how AI eats computing power” to “use millions of data samples, but *not* in a way that requires a huge separate training phase” to “train a huge network for zillions of gradient descent updates and then run it”.  This new paradigm costs a lot more compute, but (small) large amounts of compute are now available so people are using them; and this new paradigm saves on programmer labor, and more importantly the need for programmer knowledge. I say with surety that this is not the last *possible* paradigm shift.  And furthermore, the [Stack More Layers](https://www.reddit.com/r/ProgrammerHumor/comments/8c1i45/stack_more_layers/) paradigm has already reduced need for knowledge by what seems like a pretty large bite out of all the possible knowledge that could be thrown away. 
So, you might then argue, the world-ending AGI seems more likely to incorporate more knowledge and less brute force, which moves the correct sort of timeline estimate *further* away from the direction of “cost to recapitulate all evolutionary history as pure blind search without even the guidance of gradient descent” and *more* toward the direction of “computational cost of one brain, if you could just make a single brain”. That is, you can think of there as being *two* biological estimates to anchor on, not just one.  You can imagine there being a balance that shifts over time from “the computational cost for evolutionary biology to invent brains” to “the computational cost to run one biological brain”. In 1960, maybe, they knew so little about how brains worked that, if you gave them a hypercomputer, the cheapest way they could quickly get AGI out of the hypercomputer using just their current knowledge, would be to run a massive evolutionary tournament over computer programs until they found smart ones, using 10^43 operations. Today, you know about gradient descent, which finds programs more efficiently than genetic hill-climbing does; so the balance of how much hypercomputation you’d need to use to get general intelligence using just your own personal knowledge, has shifted ten orders of magnitude *away*from the computational cost of evolutionary history and *towards* the lower bound of the computation used by one brain.  In the future, this balance will predictably swing even further towards Moravec’s biological anchor, further away from Somebody on the Internet’s biological anchor. I admit, from my perspective this is nothing but a clever argument that tries to persuade people who are making errors that can’t all be corrected by me, so that they can make mostly the same errors but get a slightly better answer.  In my own mind I tend to contemplate the Textbook from the Future, which would tell us how to build AI on a home computer from 1995, as my anchor of ‘where can progress go’, rather than looking to the *brain* of all computing devices for inspiration. But, if you insist on the error of anchoring on biology, you could perhaps do better by seeing a spectrum between two bad anchors.  This lets you notice a changing reality, at all, which is why I regard it as a helpful thing to say to you and not a pure persuasive superweapon of unsound argument.  Instead of just fixating on one bad anchor, the hybrid of biological anchoring with whatever knowledge you currently have about optimization, you can notice how reality seems to be *shifting between* two biological bad anchors over time, and so have an eye on the changing reality at all.  Your new estimate in terms of gradient descent is stepping away from evolutionary computation and toward the individual-brain estimate by ten orders of magnitude, using the fact that you now know a *little* more about optimization than natural selection knew; and now that you can see the change in reality over time, in terms of the two anchors, you can wonder if there are more shifts ahead. Realistically, though, I would *not*recommend eyeballing how much more knowledge you’d think you’d need to get even larger shifts, as some function of time, before that line crosses the hardware line.  Some researchers may already know Thielian secrets you do not, that take those researchers further toward the individual-brain computational cost (if you insist on seeing it that way).  
That’s the direction that economics rewards innovators for moving in, and you don’t know everything the innovators know in their labs. When big inventions finally hit the world as newspaper headlines, the people two years before that happens are often declaring it to be fifty years away; and others, of course, are declaring it to be two years away, fifty years before headlines.  Timing things is quite hard even when you think you are being clever; and cleverly having two biological anchors and eyeballing Reality’s movement between them, is not the sort of cleverness that gives you good timing information in real life. In real life, Reality goes off and does something else instead, and the Future does not look in that much detail like the futurists predicted.  In real life, we come back again to the same wiser-but-sadder conclusion given at the start, that in fact the Future is quite hard to foresee – especially when you are not on literally the world’s leading edge of technical knowledge about it, but really even then.  If you don’t think you know any Thielian secrets about timing, you should just figure that you need a general policy which doesn’t get more than two years of warning, or not even that much if you aren’t closely non-dismissively analyzing warning signs. **OpenPhil:**  We do consider in our report the many ways that our estimates could be wrong, and show multiple ways of producing biologically inspired estimates that give different results.  Does that give us any credit for good epistemology, on your view? **Eliezer:**  I *wish* I could say that it probably beats showing a single estimate, in terms of its impact on the reader.  But in fact, writing a huge careful Very Serious Report like that and snowing the reader under with Alternative Calculations is probably going to cause them to give *more* authority to the whole thing.  It’s all very well to note the Ways I Could Be Wrong and to confess one’s Uncertainty, but you did not actually reach the conclusion, “And that’s enough uncertainty and potential error that we should throw out this whole deal and start over,” and that’s the conclusion you needed to reach. **OpenPhil:**  It’s not clear to us what better way you think exists of arriving at an estimate, compared to the methodology we used – in which we do consider many possible uncertainties and several ways of generating probability distributions, and try to combine them together into a final estimate.  A Bayesian needs a probability distribution from somewhere, right? **Eliezer:**  If somebody had calculated that it currently required an IQ of 200 to destroy the world, that the smartest current humans had an IQ of around 190, and that the world would therefore start to be destroyable in fifteen years according to Moore’s Law of Mad Science – then, even assuming Moore’s Law of Mad Science to actually hold, the part where they throw in an estimated current IQ of 200 as necessary is complete garbage.  It is not the sort of mistake that can be repaired, either.  No, not even by considering many ways you could be wrong about the IQ required, or considering many alternative different ways of estimating present-day people’s IQs. The correct thing to do with the entire model is chuck it out the window so it doesn’t exert an undue influence on your actual thinking, where any influence of that model is an undue one.  And then you just *should not expect good advance timing info until the end is in sight,* from whatever thought process you adopt instead. 
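A toy version of that calculation makes the point concrete.  Everything below is hypothetical: the function, the eighteen-months-per-IQ-point rate, and all the IQ figures are just the illustrative numbers from the analogy above, not anyone's actual model.

```python
# Toy sensitivity check for the hypothetical "IQ needed to destroy the world" forecast.
# All numbers are illustrative; the 18-months-per-point rate is simply what makes
# (200 - 190) points come out to the "fifteen years" quoted above.

def years_until_destroyable(required_iq, smartest_current_iq=190, months_per_point=18):
    """Years until the assumed 'required IQ' line meets the 'smartest human' line."""
    return max(0, required_iq - smartest_current_iq) * months_per_point / 12

for required_iq in (185, 195, 200, 210, 220):
    print(f"required IQ {required_iq}: destroyable in {years_until_destroyable(required_iq):.1f} years")
```

Swap the unknowable 200 for an equally unknowable 185 and the forecast flips from "fifteen years" to "already possible"; swap in 220 and it becomes forty-five years.  The output is mostly the garbage anchor echoed back, which is why no amount of alternative calculation around that anchor repairs it.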
**OpenPhil:**  What if, uh, somebody knows a Thielian secret, or has… narrowed the rivers of their knowledge to closer to reality’s tracks?  We’re not sure exactly what’s supposed to be allowed, on your worldview; but wasn’t there something at the beginning about how, when you’re unsure, you should be careful about criticizing people who are more unsure than you? **Eliezer:**  *Hopefully* those people are also able to tell you bold predictions about the nearer-term future, or at least say *anything* about what the future looks like before the whole world ends.  I mean, you don’t want to go around proclaiming that, because you don’t know something, nobody else can know it either.  But timing is, in real life, really hard as a prediction task, so, like… I’d expect them to be able to predict a bunch of stuff before the final hours of their prophecy? **OpenPhil:**  We’re… not sure we see that?  We may have made an estimate, but we didn’t make a narrow estimate.  We gave a relatively wide probability distribution as such things go, so it doesn’t seem like a great feat of timing that requires us to also be able to predict the near-term future in detail too? Doesn’t *your* implicit probability distribution have a median?  Why don’t you also need to be able to predict all kinds of near-term stuff if you have a probability distribution with a median in it? **Eliezer:**  I literally have not tried to force my brain to give me a median year on this – not that this is a defense, because I still have some implicit probability distribution, or, to the extent I don’t act like I do, I must be acting incoherently in self-defeating ways.  But still: I feel like you should probably have nearer-term bold predictions if your model is supposedly so solid, so concentrated as a flow of uncertainty, that it’s coming up to you and whispering numbers like “2050” even as the median of a broad distribution.  I mean, if you have a model that can actually, like, calculate stuff like that, and is actually bound to the world as a truth. If you are an aspiring Bayesian, perhaps, you may try to reckon your uncertainty into the form of a probability distribution, even when you face “structural uncertainty” as we sometimes call it.  Or if you know the laws of [coherence](https://www.lesswrong.com/posts/RQpNHSiWaXTvDxt6R/coherent-decisions-imply-consistent-utilities), you will acknowledge that your planning and your actions are implicitly showing signs of weighing some paths through time more than others, and hence display probability-estimating behavior whether you like to acknowledge that or not. But if you are a wise aspiring Bayesian, you will admit that whatever probabilities you are using, they are, in a sense, intuitive, and you just don’t expect them to be all that good.  Because the timing problem you are facing is a really hard one, and humans are not going to be great at it – not until the end is near, and maybe not even then. That – not “you didn’t consider enough alternative calculations of your target figures” – is what should’ve been replied to Moravec in 1988, if you could go back and tell him where his reasoning had gone wrong, and how he might have reasoned differently based on what he actually knew at the time.  That reply I now give to you, unchanged. **Humbali:**  And I’m back!  Sorry, I had to take a lunch break.  
Let me quickly review some of this recent content; though, while I’m doing that, I’ll go ahead and give you what I’m pretty sure will be my reaction to it: Ah, but here is a point that you seem to have not considered at all, namely: *what if you’re wrong?* **Eliezer:**  That, Humbali, is a thing that should be said mainly to children, of whatever biological wall-clock age, who’ve never considered at all the possibility that they might be wrong, and who will genuinely benefit from asking themselves that.  It is not something that should often be said between grownups of whatever age, as I define what it means to be a grownup.  You will mark that I did not at any point say those words to Imaginary Moravec or Imaginary OpenPhil; it is not a good thing for grownups to say to each other, or to think to themselves in Tones of Great Significance (as opposed to as a routine check). It is very easy to worry that one might be wrong.  Being able to see the *direction* in which one is *probably* wrong is rather a more difficult affair.  And even after we see a probable directional error and update our views, the objection, “But what if you’re wrong?” will sound just as forceful as before.  For this reason do I say that such a thing should not be said between grownups – **Humbali:**  Okay, done reading now!  Hm…  So it seems to me that the possibility that you are wrong, considered in full generality and without adding any other assumptions, should produce a directional shift from your viewpoint towards OpenPhil’s viewpoint. **Eliezer (sighing):**  And how did you end up being under the impression that this could possibly be a sort of thing that was true? **Humbali:**  Well, I get the impression that you have timelines shorter than OpenPhil’s timelines.  Is this devastating accusation true? **Eliezer:**  I consider naming particular years to be a cognitively harmful sort of activity; I have refrained from trying to translate my brain’s native intuitions about this into probabilities, for fear that my verbalized probabilities will be stupider than my intuitions if I try to put weight on them.  What feelings I do have, I worry may be unwise to voice; AGI timelines, in my own experience, are not great for one’s mental health, and I worry that other people seem to have weaker immune systems than even my own.  But I suppose I cannot but acknowledge that my outward behavior seems to reveal a distribution whose median seems to fall well before 2050. **Humbali:**  Okay, so you’re more confident about your AGI beliefs, and OpenPhil is less confident.  Therefore, to the extent that you might be wrong, the world is going to look more like OpenPhil’s forecasts of how the future will probably look, like world GDP doubling over four years before the first time it doubles over one year, and so on. **Eliezer:**  You’re going to have to explain some of the intervening steps in that line of ‘reasoning’, if it may be termed as such. **Humbali:**  I feel surprised that I should have to explain this to somebody who supposedly knows probability theory.  If you put higher probabilities on AGI arriving in the years before 2050, then, on average, you’re concentrating more probability into each year that AGI might possibly arrive, than OpenPhil does.  Your probability distribution has lower entropy.  We can literally just calculate out that part, if you don’t believe me.  So to the extent that you’re wrong, it should shift your probability distributions in the direction of maximum entropy. 
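Humbali's "we can literally just calculate out that part" is true as far as the entropy arithmetic goes, and a reader who wants to check it can do so with a minimal sketch like the one below.  The distributions are entirely hypothetical stand-ins, one concentrated and early, one wide and centered near mid-century, one uniform; none of them is anyone's actual forecast.

```python
import numpy as np

# Hypothetical AGI-arrival distributions over a 100-year horizon, purely for illustration.
years = np.arange(2022, 2122)

def entropy_bits(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def median_year(p):
    return int(years[np.searchsorted(np.cumsum(p), 0.5)])

# A "confident short timelines" stand-in: probability decays geometrically year by year.
short = 0.93 ** np.arange(len(years))
short /= short.sum()

# A wider stand-in, roughly centered near mid-century.
wide = np.exp(-0.5 * ((years - 2050) / 25.0) ** 2)
wide /= wide.sum()

# The maximum-entropy distribution on this horizon: uniform over every year.
uniform = np.full(len(years), 1.0 / len(years))

for name, p in [("short", short), ("wide", wide), ("uniform", uniform)]:
    print(f"{name:8s} median {median_year(p)}  entropy {entropy_bits(p):.2f} bits")
```

On these toy numbers the concentrated distribution does have the earlier median and the lower entropy, and pushing all the way to maximum entropy pushes the median out toward the middle of the horizon.  That much of the math is as Humbali says; whether it licenses the move he wants to make with it is the question taken up next.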
**Eliezer:**  It’s things like this that make me worry about whether that extreme cryptivist view would be correct, in which normal modern-day Earth intellectuals are literally not smart enough – in a sense that includes the Cognitive Reflection Test and other things we don’t know how to measure yet, not just raw IQ – to be taught more advanced ideas from my own home planet, like Bayes’s Rule and the concept of the entropy of a probability distribution.  Maybe it does them net harm by giving them more advanced tools they can use to shoot themselves in the foot, since it causes an explosion in the total possible complexity of the argument paths they can consider and be fooled by, which may now contain words like ‘maximum entropy’. **Humbali:**  If you’re done being vaguely condescending, perhaps you could condescend specifically to refute my argument, which seems to me to be airtight; my math is not wrong and it means what I claim it means. **Eliezer:**  The audience is herewith invited to first try refuting Humbali on their own; grandpa is, in actuality and not just as a literary premise, getting older, and was never that physically healthy in the first place.  If the next generation does not learn how to do this work without grandpa hovering over their shoulders and prompting them, grandpa cannot do all the work himself.  There is an infinite supply of slightly different wrong arguments for me to be forced to refute, and that road does not seem, in practice, to have an end. **Humbali:**  Or perhaps it’s you that needs refuting. **Eliezer, smiling:**  That does seem like the sort of thing I’d do, doesn’t it?  Pick out a case where the other party in the dialogue had made a valid point, and then ask my readers to disprove it, in case they weren’t paying proper attention?  For indeed in a case like this, one first backs up and asks oneself “Is Humbali right or not?” and not “How can I prove Humbali wrong?” But now the reader should stop and contemplate that, if they are going to contemplate that at all: Is Humbali right that generic uncertainty about maybe being wrong, without other extra premises, should increase the entropy of one’s probability distribution over AGI, thereby moving its median further out in time? **Humbali:**  Are you done? **Eliezer:**  Hopefully so.  I can’t see how else I’d prompt the reader to stop and think and come up with their own answer first. **Humbali:**  Then what is the supposed flaw in my argument, if there is one? **Eliezer:**  As usual, when people are seeing only their preferred possible use of an argumentative superweapon like ‘What if you’re wrong?’, the flaw can be exposed by showing that the argument Proves Too Much.  If you forecasted AGI with a probability distribution with a median arrival time of 50,000 years from now\*, would that be *very* unconfident? (\*) Based perhaps on an ignorance prior for how long it takes for a sapient species to build AGI after it emerges, where we’ve observed so far that it must take at least 50,000 years, and our updated estimate says that it probably takes around as much more time than that. **Humbali:**  Of course; the math says so.  Though I think that would be a little *too* unconfident – we do have *some* knowledge about how AGI might be created.  So my answer is that, yes, this probability distribution is higher-entropy, but that it reflects too little confidence even for me. 
I think you’re crazy overconfident, yourself, and in a way that I find personally distasteful to boot, but that doesn’t mean I advocate zero confidence.  I try to be less arrogant than you, but my best estimate of what my own eyes will see over the next minute is not a maximum-entropy distribution over visual snow.  AGI happening sometime in the next century, with a median arrival time of maybe 30 years out, strikes me as being about *as* confident as somebody should reasonably be. **Eliezer:**  Oh, really now.  I think if somebody sauntered up to you and said they put 99% probability on AGI not occurring within the next 1,000 years – which is the sort of thing a median distance of 50,000 years tends to imply – I think you would, in fact, accuse them of brash overconfidence about staking 99% probability on that. **Humbali:**  Hmmm.  I want to deny that – I have a strong suspicion that you’re leading me down a garden path here – but I do have to admit that if somebody walked up to me and declared only a 1% probability that AGI arrives in the next millennium, I would say they were being overconfident and not just too uncertain. Now that you put it that way, I think I’d say that somebody with a wide probability distribution over AGI arrival spread over the next century, with a median in 30 years, is in realistic terms about as uncertain as anybody could possibly be?  If you spread it out more than that, you’d be declaring that AGI probably *wouldn’t* happen in the next 30 years, which seems overconfident; and if you spread it out less than that, you’d be declaring that AGI probably *would* happen within the next 30 years, which also seems overconfident. **Eliezer:**  Uh huh.  And to the extent that I am myself uncertain about my own brashly arrogant and overconfident views, I should have a view that looks more like your view instead? **Humbali:**  Well, yes!  To the extent that you are, yourself, less than totally certain of your own model, you should revert to this most ignorant possible viewpoint as a base rate. **Eliezer:**  And if my own viewpoint should happen to regard your probability distribution putting its median on 2050 as just one more guesstimate among many others, with this particular guess based on wrong reasoning that I have justly rejected? **Humbali:**  Then you’d be overconfident, obviously.  See, you don’t get it, what I’m presenting is not just one candidate way of thinking about the problem, it’s the *base rate* that other people should fall back on to the extent they are not completely confident in *their own* ways of thinking about the problem, which impose *extra* assumptions over and above the assumptions that seem natural and obvious to me.  I just can’t understand the incredible arrogance you use as to be so utterly certain in your own exact estimate that you don’t revert it even a little bit towards mine. I don’t suppose you’re going to claim to me that you first constructed an even more confident first-order estimate, and then reverted it towards the natural base rate in order to arrive at a more humble second-order estimate? **Eliezer:**  Ha!  No.  Not that base rate, anyways.  I try to shift my AGI timelines a little further out because I’ve observed that actual Time seems to run slower than my attempts to eyeball it.  
I did not shift my timelines out towards 2050 in particular, nor did reading OpenPhil’s report on AI timelines influence my first-order or second-order estimate at all, in the slightest; no more than I updated the slightest bit back when I read the estimate of 10^43 ops or 10^46 ops or whatever it was to recapitulate evolutionary history. **Humbali:**  Then I can’t imagine how you could possibly be so perfectly confident that you’re right and everyone else is wrong.  Shouldn’t you at least revert your viewpoints some toward what other people think? **Eliezer:**  Like, what the person on the street thinks, if we poll them about their expected AGI arrival times?  Though of course I’d have to poll everybody on Earth, not just the special case of developed countries, if I thought that a respect for somebody’s personhood implied deference to their opinions. **Humbali:**  Good heavens, no!  I mean you should revert towards the opinion, either of myself, or of the set of people I hang out with and who are able to exert a sort of unspoken peer pressure on me; that is the natural reference class to which less confident opinions ought to revert, and any other reference class is special pleading. And before you jump on me about being arrogant myself, let me say that I definitely regressed my own estimate in the direction of the estimates of the sort of people I hang out with and instinctively regard as fellow tribesmembers of slightly higher status, or “credible” as I like to call them.  Although it happens that those people’s opinions were about evenly distributed to both sides of my own – maybe not statistically exactly for the population, I wasn’t keeping exact track, but in their availability to my memory, definitely, other people had opinions on both sides of my own – so it didn’t move my median much.  But so it sometimes goes! But these other people’s credible opinions *definitely* hang emphatically to one side of *your* opinions, so your opinions should regress at least a *little* in that direction!  Your self-confessed failure to do this *at all* reveals a ridiculous arrogance. **Eliezer:**  Well, I mean, in fact, from my perspective, even my complete-idiot sixteen-year-old self managed to notice that AGI was going to be a big deal, many years before various others had been hit over the head with a large-enough amount of evidence that even they started to notice.  I was walking almost alone back then.  And I still largely see myself as walking alone now, as accords with the Law of Continued Failure:  If I was going to be living in a world of sensible people in this future, I should have been living in a sensible world already in my past. Since the early days more people have caught up to earlier milestones along my way, enough to start publicly arguing with me about the further steps, but I don’t consider them to have caught up; they are moving slower than I am still moving now, as I see it.  My actual work these days seems to consist mainly of trying to persuade allegedly smart people to not fling themselves directly into lava pits.  If at some point I start regarding you as my epistemic peer, I’ll let you know.  For now, while I endeavor to be swayable by arguments, your existence alone is not an argument unto me. If you choose to define that with your word “arrogance”, I shall shrug and not bother to dispute it.  Such appellations are beneath My concern. 
**Humbali:**  Fine, you admit you’re arrogant – though I don’t understand how that’s not just admitting you’re irrational and wrong – **Eliezer:**  They’re different words that, in fact, mean different things, in their semantics and not just their surfaces.  I do not usually advise people to contemplate the mere meanings of words, but perhaps you would be well-served to do so in this case. **Humbali:**  – but if you’re not *infinitely* arrogant, you should be quantitatively updating at least a *little* towards other people’s positions! **Eliezer:**  You do realize that OpenPhil itself hasn’t always existed?  That they are not the only “other people” that there are?  An ancient elder like myself, who has seen many seasons turn, might think of many other possible targets toward which he should arguably regress his estimates, if he was going to start deferring to others’ opinions this late in his lifespan. **Humbali:**  *You* haven’t existed through infinite time either! **Eliezer:**  A glance at the history books should confirm that I was not around, yes, and events went accordingly poorly. **Humbali:**  So then… why aren’t you regressing your opinions at least a little in the direction of OpenPhil’s?  I just don’t understand this apparently infinite self-confidence. **Eliezer:**  The fact that I have credible intervals around my own unspoken median – that I confess I might be wrong in either direction, around my intuitive sense of how long events might take – doesn’t count for my being less than infinitely self-confident, on your view? **Humbali:**  No.  You’re expressing absolute certainty in your underlying epistemology and your entire probability distribution, by not reverting it even a little in the direction of the reasonable people’s probability distribution, which is the one that’s the obvious base rate and doesn’t contain all the special other stuff somebody would have to tack on to get *your* probability estimate. **Eliezer:**  Right then.  Well, that’s a wrap, and maybe at some future point I’ll talk about the increasingly lost skill of perspective-taking. **OpenPhil:**  Excuse us, we have a final question.  You’re not claiming that we argue like Humbali, are you? **Eliezer:**  Good heavens, no!  That’s why “Humbali” is presented as a separate dialogue character and the “OpenPhil” dialogue character says nothing of the sort.  Though I did meet one EA recently who seemed puzzled and even offended about how I wasn’t regressing my opinions towards OpenPhil’s opinions to whatever extent I wasn’t totally confident, which brought this to mind as a meta-level point that needed making. **OpenPhil:**  “One EA you met recently” is not something that you should hold against OpenPhil.  We haven’t organizationally endorsed arguments like Humbali’s, any more than you’ve ever argued that “we have to take AGI risk seriously even if there’s only a tiny chance of it” or similar crazy things that other people hallucinate you arguing. **Eliezer:**  I fully agree.  That Humbali sees himself as defending OpenPhil is not to be taken as associating his opinions with those of OpenPhil; just like how people who helpfully try to defend MIRI by saying “Well, but even if there’s a tiny chance…” are not thereby making their epistemic sins into mine. The whole thing with Humbali is a separate long battle that I’ve been fighting.  
OpenPhil seems to have been keeping its communication about AI timelines mostly to the object level, so far as I can tell; and that is a more proper and dignified stance than I’ve assumed here. The post [Biology-Inspired AGI Timelines: The Trick That Never Works](https://intelligence.org/2021/12/03/biology-inspired-agi-timelines-the-trick-that-never-works/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
10551e1c-42d0-4e39-9c8f-da9002fc3cfc
trentmkelly/LessWrong-43k
LessWrong
What are some Civilizational Sanity Interventions? Lately, I've been thinking about the class of things that I'm calling "Civilizational Sanity Interventions." With that term I'm meaning to refer to technologies, institutions, projects, or norms that, if implemented, would improve the quality of high level decision making about important issues. Which things if they existed in the world, would make our society, collectively, saner? Some examples (with which I expect most people around here to be familiar): Prediction markets Prediction markets are a clever way to aggregate all the available information to make accurate predictions. Robin Hanson posits that the reason why there isn't wider adoption of prediction markets is because they are a threat to the authority of existing executives. If we lived in a world where the use of prediction markets were commonplace standard practice, eventually, decision makers would face flack for acting against the predictions of the market, and pundits would have a lot less leeway to make inaccurate, politically-motivated predictions. Hanson, in a recent interview, > I’d say if you look at the example of cost accounting, you can imagine a world where nobody does cost accounting. You say of your organization, “Let’s do cost accounting here.” > That’s a problem because you’d be heard as saying, “Somebody around here is stealing and we need to find out who.” So that might be discouraged. > In a world where everybody else does cost accounting, you say, “Let’s not do cost accounting here.” That will be heard as saying, “Could we steal and just not talk about it?” which will also seem negative. > Similarly, with prediction markets, you could imagine a world like ours where nobody does them, and then your proposing to do it will send a bad signal. You’re basically saying, “People are bullshitting around here. We need to find out who and get to the truth.” > But in a world where everybody was doing it, it would be similarly hard not to do it. If every project with a deadline had
cb5a1490-9273-4799-bbc2-826f0a988072
trentmkelly/LessWrong-43k
LessWrong
Monopoly: A Manifesto and Fact Post Epistemic Status: exploratory. I am REALLY not an economist, I don’t even play one on TV. [Edit: After some discussion in the comments I've updated that that GDP-to-gold, or GDP-to-oil, are bad proxy measures for economic growth. Further thoughts on this in Oops on Commodity Prices] You can call it by a lot of names.  You can call it crony capitalism, the mixed economy,  or corporatism. Cost disease is an aspect of the problem, as are rent-seeking, regulatory capture, and oligopoly. If Scrooge McDuck’s downtown Duckburg apartment rises in price, and Scrooge’s net worth rises equally, but nothing else changes, the distribution of purchasing power is now more unequal — fewer people can afford that apartment.  But nobody is richer in terms of actual material wealth, not even Scrooge.  Scrooge is only “richer” on paper.  The total material wealth of Duckburg hasn’t gone up at all. I’m concerned that something very like this is happening to developed countries in real life.  When many goods become more expensive without materially improving, the result is increased wealth inequality without increased material abundance. The original robber barons  (Raubritter) were medieval German landowners who charged illegal private tolls to anyone who crossed their stretch of the Rhine.  Essentially, they profited by restricting access to goods, holding trade hostage, rather than producing anything.  The claim is that people in developed countries today are getting sucked dry by this kind of artificial access-restriction behavior.  A clear-cut example is closed-access academic journals, which many scientists have begun to boycott; the value in a journal is produced by the scholars who author, edit, and referee papers, while the online journal’s only contribution is its ability to restrict access to those papers. Scott Alexander said it right: > LOOK, REALLY OUR MAIN PROBLEM IS THAT ALL THE MOST IMPORTANT THINGS COST TEN TIMES AS MUCH AS THEY USED TO FOR NO REASON, PLUS THEY SE
c49d37c2-12c2-4a63-86b6-82a7505fbb5c
trentmkelly/LessWrong-43k
LessWrong
Open thread 7th september - 13th september If it's worth saying, but not worth its own post (even in Discussion), then it goes here. ---------------------------------------- Notes for future OT posters: 1. Please add the 'open_thread' tag. 2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.) 3. Open Threads should be posted in Discussion, and not Main. 4. Open Threads should start on Monday, and end on Sunday.
e8a8c861-fa3d-4ec4-b2c1-3fccf619d251
trentmkelly/LessWrong-43k
LessWrong
Possible miracles Epistemic status: Speculative and exploratory. Contributions: Akash wrote the initial list; Thomas reviewed the list and provided additional points. Unless specified otherwise, writing in the first person is by Akash and so are the opinions. Thanks to Joshua Clymer, Tom Shlomi, and Eli Lifland for comments. Thanks to many others for relevant conversations. If we need a miracle, where might it come from? What would it look like?  Many of the arguments presented in List of Lethalities are compelling to me. Some of my colleagues and I spend many hours thinking about how hard the alignment problem is, analyzing various threat models, and getting familiar with all the ways we could fail. I thought it would be a useful exercise to intentionally try to think in the opposite way. How might we win?  I have found “miracles” to be a helpful frame, but it’s misleading in some ways. For example, I think the “miracles” frame implies an extremely low chance of success (e.g., <1%) and fosters a “wait and hope” mentality (as opposed to a “proactively make things happen” mentality). I was considering titling this something else (e.g., “Reasons for Hope” or “Possible Victory Conditions”), but these frames also didn’t feel right. With the phrase “miracles,” I’m trying to convey that (a) I don’t feel particularly confident in these ideas, (b) I am aware of many counterarguments to these claims [and indeed some are already presented in List of Lethalities], and (c) I don’t think “hope” or “victory” sets the right tone-- the right tone is “ah gosh, things seem really hard, but if we win, maybe we’ll win because of something like this.”  I have found it helpful to backchain from these miracles to come up with new project ideas. If you think it’s plausible that we need a miracle, I encourage you to form your own list of possible miracles & think carefully about what kinds of projects might make each one more likely to occur (or make it more likely that we notice one in time). With th
53c4095d-74a6-4e7c-bb83-8841fc9bea68
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"Rationalists complain that most people are too willing to make excuses for their positions, and too unwilling to abandon those positions for ones that better fit the evidence. And most people really are pretty bad at this. But certain stroke victims called anosognosiacs are much, much worse.Anosognosia is the condition of not being aware of your own disabilities. To be clear, we're not talking minor disabilities here, the sort that only show up during a comprehensive clinical exam. We're talking paralysis or even blindness1. Things that should be pretty hard to miss.Take the example of the woman discussed in Lishman's Organic Psychiatry. After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn't actually her arm, it was her daughter's. Why was her daughter's arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter's hand? The patient said her daughter had borrowed it. Where was the patient's arm? The patient "turned her head and searched in a bemused way over her left shoulder".Why won't these patients admit they're paralyzed, and what are the implications for neurotypical humans? Dr. Vilayanur Ramachandran, leading neuroscientist and current holder of the world land-speed record for hypothesis generation, has a theory. One immediately plausible hypothesis: the patient is unable to cope psychologically with the possibility of being paralyzed, so he responds with denial. Plausible, but according to Dr. Ramachandran, wrong. He notes that patients with left-side strokes almost never suffer anosognosia, even though the left side controls the right half of the body in about the same way the right side controls the left half. There must be something special about the right hemisphere.Another plausible hypothesis: the part of the brain responsible for thinking about the affected area was damaged in the stroke. Therefore, the patient has lost access to the area, so to speak. Dr. Ramachandran doesn't like this idea either. The lack of right-sided anosognosia in left-hemisphere stroke victims argues against it as well. But how can we disconfirm it?Dr. Ramachandran performed an experiment2 where he "paralyzed" an anosognosiac's good right arm. He placed it in a clever system of mirrors that caused a research assistant's arm to look as if it was attached to the patient's shoulder. Ramachandran told the patient to move his own right arm, and the false arm didn't move. What happened? The patient claimed he could see the arm moving - a classic anosognosiac response. This suggests that the anosognosia is not specifically a deficit of the brain's left-arm monitoring system, but rather some sort of failure of rationality. Says Dr. Ramachandran: The reason anosognosia is so puzzling is that we have come to regard the 'intellect' as primarily propositional in character and one ordinarily expects propositional logic to be internally consistent. To listen to a patient deny ownership of her arm and yet, in the same breath, admit that it is attached to her shoulder is one of the most perplexing phenomena that one can encounter as a neurologist. So what's Dr. Ramachandran's solution? He posits two different reasoning modules located in the two different hemispheres. 
The left brain tries to fit the data to the theory to preserve a coherent internal narrative and prevent a person from jumping back and forth between conclusions upon each new data point. It is primarily an apologist, there to explain why any experience is exactly what its own theory would have predicted. The right brain is the seat of the second virtue. When it's had enough of the left-brain's confabulating, it initiates a Kuhnian paradigm shift to a completely new narrative. Ramachandran describes it as "a left-wing revolutionary".Normally these two systems work in balance. But if a stroke takes the revolutionary offline, the brain loses its ability to change its mind about anything significant. If your left arm was working before your stroke, the little voice that ought to tell you it might be time to reject the "left arm works fine" theory goes silent. The only one left is the poor apologist, who must tirelessly invent stranger and stranger excuses for why all the facts really fit the "left arm works fine" theory perfectly well.It gets weirder. For some reason, squirting cold water into the left ear canal wakes up the revolutionary. Maybe the intense sensory input from an unexpected source makes the right hemisphere unusually aroused. Maybe distoring the balance sense causes the eyes to move rapidly, activating a latent system for inter-hemisphere co-ordination usually restricted to REM sleep3. In any case, a patient who has been denying paralysis for weeks or months will, upon having cold water placed in the ear, admit to paralysis, admit to having been paralyzed the past few weeks or months, and express bewilderment at having ever denied such an obvious fact. And then the effect wears off, and the patient not only denies the paralysis but denies ever having admitted to it.This divorce between the apologist and the revolutionary might also explain some of the odd behavior of split-brain patients. Consider the following experiment: a split-brain patient was shown two images, one in each visual field. The left hemisphere received the image of a chicken claw, and the right hemisphere received the image of a snowed-in house. The patient was asked verbally to describe what he saw, activating the left (more verbal) hemisphere. The patient said he saw a chicken claw, as expected. Then the patient was asked to point with his left hand (controlled by the right hemisphere) to a picture related to the scene. Among the pictures available were a shovel and a chicken. He pointed to the shovel. So far, no crazier than what we've come to expect from neuroscience.Now the doctor verbally asked the patient to describe why he just pointed to the shovel. The patient verbally (left hemisphere!) answered that he saw a chicken claw, and of course shovels are necessary to clean out chicken sheds, so he pointed to the shovel to indicate chickens. The apologist in the left-brain is helpless to do anything besides explain why the data fits its own theory, and its own theory is that whatever happened had something to do with chickens, dammit!The logical follow-up experiment would be to ask the right hemisphere to explain the left hemisphere's actions. Unfortunately, the right hemisphere is either non-linguistic or as close as to make no difference. Whatever its thoughts, it's keeping them to itself....you know, my mouth is still agape at that whole cold-water-in-the-ear trick. I have this fantasy of gathering all the leading creationists together and squirting ice cold water in each of their left ears. 
All of a sudden, one and all, they admit their mistakes, and express bafflement at ever having believed such nonsense. And then ten minutes later the effect wears off, and they're all back to talking about irreducible complexity or whatever. I don't mind. I've already run off to upload the video to YouTube.This is surely so great an exaggeration of Dr. Ramachandran's theory as to be a parody of it. And in any case I don't know how much to believe all this about different reasoning modules, or how closely the intuitive understanding of it I take from his paper matches the way a neuroscientist would think of it. Are the apologist and the revolutionary active in normal thought? Do anosognosiacs demonstrate the same pathological inability to change their mind on issues other than their disabilities? What of the argument that confabulation is a rather common failure mode of the brain, shared by some conditions that have little to do with right-hemisphere failure? Why does the effect of the cold water wear off so quickly? I've yet to see any really satisfying answers to any of these questions. But whether Ramachandran is right or wrong, I give him enormous credit for doing serious research into the neural correlates of human rationality. I can think of few other fields that offer so many potential benefits. Footnotes 1: See Anton-Babinski syndrome 2: See Ramachandran's "The Evolutionary Biology of Self-Deception", the link from "posits two different reasoning modules" in this article. 3: For Ramachandran's thoughts on REM, again see "The Evolutionary Biology of Self Deception"" The excerpt provided comes from a user post on LessWrong. Can you break this down into the top 5 most important points for me. 1. Anosognosia is a condition where stroke victims are not aware of their own severe disabilities, such as paralysis or blindness. They are often in denial about their condition, making up excuses to explain why they can't perform certain tasks. 2. Neuroscientist Dr. Vilayanur Ramachandran suggests that this denial is not a psychological coping mechanism, but rather a failure of rationality. He theorizes that there are two reasoning modules in the brain, one in each hemisphere. The left brain is an apologist, trying to fit data to the theory to maintain a coherent narrative, while the right brain is a revolutionary, initiating paradigm shifts when it has had enough of the left brain's confabulations. 3. When a stroke damages the right hemisphere, the revolutionary is taken offline, and the brain loses its ability to change its mind about anything significant. The left brain continues to make excuses, fitting facts into its pre-existing narrative. 4. Intriguingly, squirting cold water into the left ear canal can temporarily wake up the revolutionary. During this period, the patient will admit to their paralysis and express confusion about their previous denial. However, once the effect wears off, the patient reverts to denial and even denies having admitted to the paralysis. 5. This theory could also explain the behavior of split-brain patients, who have had the connections between their two brain hemispheres severed. When asked to explain their actions, these patients often come up with explanations that fit the narrative of the left brain, even when the actions were controlled by the right brain. Can you elaborate on the mechanism by which cold water in the left ear canal temporarily "wakes up" the right hemisphere, allowing anosognosiacs to recognize their paralysis? 
The exact mechanism by which cold water in the left ear canal affects the right hemisphere of the brain in anosognosiacs is not fully understood. However, this phenomenon, known as "caloric stimulation," is believed to work by creating an imbalance in the vestibular system, which is responsible for our sense of balance and spatial orientation. When cold water is introduced into the left ear canal, it cools the surrounding inner ear structures, including the vestibular apparatus. This cooling effect causes the fluid within the semicircular canals (part of the vestibular system) to move. This movement is interpreted by the brain as a signal that the head is moving, even though it's not. This sudden and unexpected signal is thought to cause a heightened level of activity or arousal in the right hemisphere of the brain, which in the case of anosognosiacs, is often underactive or damaged due to stroke. This increased activity allows the right hemisphere to temporarily regain its function as the "revolutionary," acknowledging the reality of the paralysis that the left hemisphere, the "apologist," has been denying. It's important to note that this is a temporary effect and once the stimulation stops, the anosognosiac patient typically reverts back to their state of denial. It's also worth mentioning that caloric stimulation is a standard test used in neurology to assess the function of the vestibular system and the neural pathways that connect it to the brain. Given the temporary effect of caloric stimulation in anosognosiacs, how might one develop a more lasting solution to counteract the denial of paralysis, considering the distinct roles of the left and right hemispheres in rationality and perception? Developing a lasting solution to counteract the denial of paralysis in anosognosiacs is a complex challenge, as it involves addressing the underlying neurological damage caused by stroke or other brain injuries. The roles of the left and right hemispheres in rationality and perception, as described by Dr. Ramachandran, suggest that a successful approach would need to stimulate or restore the right hemisphere's function as the "revolutionary" that can acknowledge the reality of the paralysis. One potential approach could be through neurorehabilitation, which involves a range of therapies designed to help retrain or rewire the brain following injury. This could include physical, occupational, and speech therapies, as well as cognitive and psychological therapies. Neurorehabilitation often involves repetitive and intensive exercises that aim to stimulate neuroplasticity, the brain's ability to form new neural connections. In the context of anosognosia, cognitive therapies could potentially be used to help patients confront and acknowledge their paralysis. This might involve exercises designed to stimulate the right hemisphere, or techniques aimed at improving the patient's insight into their condition. Another approach could be through the use of medications or brain stimulation techniques. Certain drugs can influence brain activity and might be used to enhance the function of the right hemisphere. Similarly, techniques such as transcranial magnetic stimulation or deep brain stimulation, which use magnetic or electrical fields to stimulate brain activity, could potentially be used to target the right hemisphere. However, these approaches are largely speculative and would need to be tested in rigorous clinical trials. 
Anosognosia is a complex condition that is not fully understood, and its treatment presents significant challenges. Further research is needed to better understand the condition and develop effective treatments.
149c1393-67aa-4fcf-9054-779015bae281
trentmkelly/LessWrong-43k
LessWrong
Meetup : Durham LW Meetup: Zendo Discussion article for the meetup : Durham LW Meetup: Zendo WHEN: 03 January 2013 07:00:00PM (-0500) WHERE: Francesca's cafe, 706 9th Street, Durham, NC 27705 We'll be meeting to play Zendo and chat about general rationality topics. Come join us and play or learn to play if you haven't before! Discussion article for the meetup : Durham LW Meetup: Zendo
9c433b1d-b1b4-4055-b814-a74db0e0c478
trentmkelly/LessWrong-43k
LessWrong
Shifting Headspaces - Transitional Beast-Mode I was sitting in a tiny rental lodge, feeling resistance. It was about dinner time — I knew I should go make some food. I just wanted to sleep, sink into a bed and stay passive. It felt similar to when I’m recently awake, lying in bed, and procrastinating getting up. On the one hand, making food would shift me into a new state of being, getting going and maybe feeling happier. For part of me, this promise didn’t feel real — not in the way the bed did. I realized I was stuck in a tie between Pragmatic-Analysis and Akrasia.1 I shifted out of this impasse by going into Beast-Mode. Practically, I acted out the first hedonistic impulse to appear — grabbing a date and eating it. Shifting my headspace into Beast-Mode helped ease the short-term resistance — the Beast-Mode shift made the possibility of future state-shifting more real. If I could go into temporary Beast-Mode, then surely I can enter a happy salad-making headspace. My Akrasia headspace is quite stupid, lacking the theory of mind required to understand that my experience and headspace can shift to enjoy many things that temporarily feel “too much” — such as early-morning cold showers. Other headspaces of mine — including Beast-Mode & Pragmatic-Analysis — are much more mature, and able to account for the preferences & goals I hold in other headspaces. When I’m in these mature states, I can control my reactions and mindset to a large degree — making mental moves to shift how I relate to things. When I’m in Akrasia, I feel resistance that makes everyday things hard to do — washing dishes becomes a slog through a nasty marsh. When I’m in Beast-Mode or Pragmatic-Analysis, washing dishes can be great fun — accompanied by singing, taking my time to make things sparkle, and enjoying the repetition. Unfortunately, I easily forget that I can shift mindsets around. I “meld” with the negative thought patterns, forgetting other ways of being. When I’m anxious, I resist taking steps to improve the situation, fearful tha
ffa2480d-27e5-4046-bd56-7b0ab54261a6
trentmkelly/LessWrong-43k
LessWrong
Consciousness, Free Will and Scientific Objectivity in Perspective-Based Reasoning I have been avoiding this subject since it is too metaphysical for my taste. My interest was in specific problems such as anthropic paradoxes. However, my solution to them does have clear dispositions on these topics. So I will lay it out here. Fundamental Perspectives I argued that our rationality is not able to think about things as they are, by themselves. Instead, we would inevitably take a certain perspective or viewpoint when reasoning. Each of us has a natural perspective due to first-person subjective experiences. “I am this particular person, living in this particular time ” is inherently clear to each. It is the primary fact that has none, nor needs any, logical explanation. Though we cannot think without a perspective, we are capable of putting ourselves into others’ shoes. In another word, we can imagine thinking from different perspectives. All viewpoints are parallel to each other, none is inherently logically superior. But people often regard a god’s eye view as something else, something that transcends the limit of perspectives. It’s treated as absolute thinking, a fundamental conception that objective reasoning ought to be conducted from. Why we have this intuition would be discussed later. I think anthropic paradoxes are caused by it: trying to conduct the argument from a god’s eye view yet unavoidably also use “I” or “now” from the first-person perspective.   Consciousness and Free Will The above has some metaphysical commitments attached. Consciousness, though a rather mysterious concept, is undeniably a first-person experience. E.g. from my perspective I only know that I am conscious, whether others are conscious like I do or just some mindless NPCs can never be verified.  It is also instantiated by subjective experience so it is irreducible just like perspectives.  Perspective-based reasoning presupposes free will. For thinking from someone’s perspective to be meaningful at all it is a necessary presumption. It should be noted like cons
7cced9aa-eac7-4a04-90cb-a37f3f06e439
trentmkelly/LessWrong-43k
LessWrong
On Fables and Nuanced Charts Written by Spencer Greenberg & Amber Dawn Ace for Asimov Press. In 1994, the U.S. Congress passed the largest crime bill in U.S. history, called the Violent Crime Control and Law Enforcement Act. The bill allocated billions of dollars to build more prisons and hire 100,000 new police officers, among other things. In the years following the bill’s passage, violent crime rates in the U.S. dropped drastically, from around 750 offenses per 100,000 people in 1990 to under 400 in 2018. A chart showing U.S. crime rates over time. The data and annotation are real, but the implied story is not. Credit: Authors. But can we infer, as this chart seems to ask us to, that the bill caused the drop in crime? As it turns out, this chart wasn’t put together by sociologists or political scientists who’ve studied violent crime. Rather, we—a mathematician and a writer—devised it to make a point: Although charts seem to reflect reality, they often convey narratives that are misleading or entirely false. Upon seeing that violent crime dipped after 1990, we looked up major events that happened right around that time—selecting one, the 1994 Crime Bill, and slapping it on the graph. There are other events we could have stuck on the graph just as easily that would likely have invited you to construct a completely different causal story. In other words, the bill and the data in the graph are real, but the story is manufactured. Perhaps the 1994 Crime Bill really did cause the drop in violent crime, or perhaps the causality goes the other way: the spike in violent crime motivated politicians to pass the act in the first place. (Note that the act was passed slightly after the violent crime rate peaked!)  Charts are a concise way not only to show data but also to tell a story. Such stories, however, reflect the interpretations of a chart’s creators and are often accepted by the viewer without skepticism. As Noah Smith and many others have argued, charts contain hidden assumptions that can d
378de303-fe6e-4128-b0e7-bca306776a41
trentmkelly/LessWrong-43k
LessWrong
Better and Worse Ways of Stating SIA This post is motivated by Joe Carlsmith's post in which he argued that some ways of understanding SIA undersell its appeal. He proposed a better way of stating SIA. I disagree with some of the assessments. Before I start, it should be pointed out that all different ways of stating SIA are computationally equivalent. So it is unlikely that one interpretation is "correct" while the other is "wrong". But it may help us in evaluating SIA's validity.    The Two Ways: First is the ordinary SIA statement. If you check on Wikipedia or the Lesswrong Wiki you can find SIA as: > All other things equal, an observer should reason as if they are randomly selected from the set of all possible observers. This is actually slightly different from Nick Bostrom's original statement of SIA in his book "Anthropic Principle". But it is logically the same. And since then it has been the most widely used formulation, including by Bostrom himself in some interviews (if my memory is correct). Furthermore, the criticism presented by Carlsmith applies to this formulation. So I will use this statement.  Carlsmith presented SIA as:  > You're more likely to exist in worlds with more people in your epistemic situation. Or update the prior of objective worlds in proportion to n (the number of people in your epistemic situation). I intend to compare the two and argue why the ordinary statement in my opinion is still better.    The Reference Class Problem Carlsmith argued the ordinary statement uses the notion of reference-class. Yet its definition is arbitrary. E.g. who are the "observers"? How do you define them? Even though the choice of reference-class will not affect the answer in many anthropic problems, because its effect cancels out, this "inflate-and-claw-back" method is undesirable. The appeal of SIA is not "you can use whatever reference class you like" but that you don't have to think in terms of any made-up reference class at all. You only need to consider yourself a member of "people in your epistemic sit
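Carlsmith's formulation quoted above reduces to a one-line reweighting rule. Here is a minimal sketch of that rule with made-up worlds and observer counts (the numbers are purely illustrative and not taken from the post):

```python
# A minimal sketch of the "update the prior in proportion to n" reading of SIA
# quoted from Carlsmith above. The worlds and counts are made-up examples.

def sia_update(priors, counts):
    """Reweight each world's prior by the number of people in your epistemic
    situation it contains, then renormalize."""
    weighted = {w: priors[w] * counts[w] for w in priors}
    total = sum(weighted.values())
    return {w: v / total for w, v in weighted.items()}

priors = {"small world": 0.5, "big world": 0.5}   # equal prior credence
counts = {"small world": 1,   "big world": 100}   # people in your epistemic situation

posterior = sia_update(priors, counts)
print(posterior)  # {'small world': ~0.0099, 'big world': ~0.9901}
```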
517cfe64-4586-4b14-b43a-c1ba7298ee89
StampyAI/alignment-research-dataset/arxiv
Arxiv
Neural Machine Translation by Jointly Learning to Align and Translate \thesection Introduction ------------------------- Neural machine translation is a newly emerging approach to machine translation, recently proposed by \citetKalchbrenner2013, \citetSutskever2014 and \citetCho2014a. Unlike the traditional phrase-based translation system \citep[see, e.g.,][]Koehn2003 which consists of many small sub-components that are tuned separately, neural machine translation attempts to build and train a single, large neural network that reads a sentence and outputs a correct translation. Most of the proposed neural machine translation models belong to a family of encoder–decoders \citepSutskever2014,Cho2014, with an encoder and a decoder for each language, or involve a language-specific encoder applied to each sentence whose outputs are then compared \citepHermann2014. An encoder neural network reads and encodes a source sentence into a fixed-length vector. A decoder then outputs a translation from the encoded vector. The whole encoder–decoder system, which consists of the encoder and the decoder for a language pair, is jointly trained to maximize the probability of a correct translation given a source sentence. A potential issue with this encoder–decoder approach is that a neural network needs to be able to compress all the necessary information of a source sentence into a fixed-length vector. This may make it difficult for the neural network to cope with long sentences, especially those that are longer than the sentences in the training corpus. \citetCho2014a showed that indeed the performance of a basic encoder–decoder deteriorates rapidly as the length of an input sentence increases. In order to address this issue, we introduce an extension to the encoder–decoder model which learns to align and translate jointly. Each time the proposed model generates a word in a translation, it (soft-)searches for a set of positions in a source sentence where the most relevant information is concentrated. The model then predicts a target word based on the context vectors associated with these source positions and all the previous generated target words. The most important distinguishing feature of this approach from the basic encoder–decoder is that it does not attempt to encode a whole input sentence into a single fixed-length vector. Instead, it encodes the input sentence into a sequence of vectors and chooses a subset of these vectors adaptively while decoding the translation. This frees a neural translation model from having to squash all the information of a source sentence, regardless of its length, into a fixed-length vector. We show this allows a model to cope better with long sentences. In this paper, we show that the proposed approach of jointly learning to align and translate achieves significantly improved translation performance over the basic encoder–decoder approach. The improvement is more apparent with longer sentences, but can be observed with sentences of any length. On the task of English-to-French translation, the proposed approach achieves, with a single model, a translation performance comparable, or close, to the conventional phrase-based system. Furthermore, qualitative analysis reveals that the proposed model finds a linguistically plausible (soft-)alignment between a source sentence and the corresponding target sentence. 
Background: Neural Machine Translation --------------------------------------------------- From a probabilistic perspective, translation is equivalent to finding a target sentence $\mathbf{y}$ that maximizes the conditional probability of $\mathbf{y}$ given a source sentence $\mathbf{x}$, i.e., $\arg\max_{\mathbf{y}} p(\mathbf{y} \mid \mathbf{x})$. In neural machine translation, we fit a parameterized model to maximize the conditional probability of sentence pairs using a parallel training corpus. Once the conditional distribution is learned by a translation model, given a source sentence a corresponding translation can be generated by searching for the sentence that maximizes the conditional probability. Recently, a number of papers have proposed the use of neural networks to directly learn this conditional distribution (see, e.g., Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014; Forcada and Ñeco, 1997). This neural machine translation approach typically consists of two components, the first of which encodes a source sentence $\mathbf{x}$ and the second decodes to a target sentence $\mathbf{y}$. For instance, two recurrent neural networks (RNN) were used by Cho et al. (2014) and Sutskever et al. (2014) to encode a variable-length source sentence into a fixed-length vector and to decode the vector into a variable-length target sentence. Despite being a quite new approach, neural machine translation has already shown promising results. Sutskever et al. (2014) reported that the neural machine translation based on RNNs with long short-term memory (LSTM) units achieves close to the state-of-the-art performance of the conventional phrase-based machine translation system on an English-to-French translation task. [Footnote: We mean by the state-of-the-art performance, the performance of the conventional phrase-based system without using any neural network-based component.] Adding neural components to existing translation systems, for instance, to score the phrase pairs in the phrase table (Cho et al., 2014) or to re-rank candidate translations (Sutskever et al., 2014), has allowed to surpass the previous state-of-the-art performance level. ### RNN Encoder–Decoder Here, we describe briefly the underlying framework, called RNN Encoder–Decoder, proposed by Cho et al. (2014) and Sutskever et al. (2014), upon which we build a novel architecture that learns to align and translate simultaneously. In the Encoder–Decoder framework, an encoder reads the input sentence, a sequence of vectors $\mathbf{x} = (x_1, \cdots, x_{T_x})$, into a vector $c$. [Footnote: Although most of the previous works (see, e.g., Cho et al., 2014; Sutskever et al., 2014; Kalchbrenner and Blunsom, 2013) used to encode a variable-length input sentence into a fixed-length vector, it is not necessary, and even it may be beneficial to have a variable-length vector, as we will show later.] The most common approach is to use an RNN such that $h_t = f(x_t, h_{t-1})$ and $c = q(\{h_1, \cdots, h_{T_x}\})$, where $h_t \in \mathbb{R}^n$ is a hidden state at time $t$, and $c$ is a vector generated from the sequence of the hidden states. $f$ and $q$ are some nonlinear functions. Sutskever et al. (2014) used an LSTM as $f$ and $q(\{h_1, \cdots, h_T\}) = h_T$, for instance. The decoder is often trained to predict the next word $y_{t'}$ given the context vector $c$ and all the previously predicted words $\{y_1, \cdots, y_{t'-1}\}$. In other words, the decoder defines a probability over the translation $\mathbf{y}$ by decomposing the joint probability into the ordered conditionals: $p(\mathbf{y}) = \prod_{t=1}^{T} p(y_t \mid \{y_1, \cdots, y_{t-1}\}, c)$, where $\mathbf{y} = (y_1, \cdots, y_{T_y})$. 
With an RNN, each conditional probability is modeled as $p(y_t \mid \{y_1, \cdots, y_{t-1}\}, c) = g(y_{t-1}, s_t, c)$, where $g$ is a nonlinear, potentially multi-layered, function that outputs the probability of $y_t$, and $s_t$ is the hidden state of the RNN. It should be noted that other architectures such as a hybrid of an RNN and a de-convolutional neural network can be used (Kalchbrenner and Blunsom, 2013). Learning to Align and Translate -------------------------------------------- In this section, we propose a novel architecture for neural machine translation. The new architecture consists of a bidirectional RNN as an encoder (see the encoder subsection below) and a decoder that emulates searching through a source sentence during decoding a translation (described next). ### Decoder: General Description [Figure: The graphical illustration of the proposed model trying to generate the $t$-th target word $y_t$ given a source sentence $(x_1, x_2, \ldots, x_T)$.] In a new model architecture, we define each conditional probability (cf. the decoder equation above) as: $p(y_i \mid y_1, \ldots, y_{i-1}, \mathbf{x}) = g(y_{i-1}, s_i, c_i)$, where $s_i$ is an RNN hidden state for time $i$, computed by $s_i = f(s_{i-1}, y_{i-1}, c_i)$. It should be noted that unlike the existing encoder–decoder approach, here the probability is conditioned on a distinct context vector $c_i$ for each target word $y_i$. The context vector $c_i$ depends on a sequence of annotations $(h_1, \cdots, h_{T_x})$ to which an encoder maps the input sentence. Each annotation $h_i$ contains information about the whole input sequence with a strong focus on the parts surrounding the $i$-th word of the input sequence. We explain in detail how the annotations are computed in the next section. The context vector $c_i$ is, then, computed as a weighted sum of these annotations $h_i$: $c_i = \sum_{j=1}^{T_x} \alpha_{ij} h_j$. The weight $\alpha_{ij}$ of each annotation $h_j$ is computed by $\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{T_x} \exp(e_{ik})}$, where $e_{ij} = a(s_{i-1}, h_j)$ is an alignment model which scores how well the inputs around position $j$ and the output at position $i$ match. The score is based on the RNN hidden state $s_{i-1}$ (just before emitting $y_i$) and the $j$-th annotation $h_j$ of the input sentence. We parametrize the alignment model $a$ as a feedforward neural network which is jointly trained with all the other components of the proposed system. Note that unlike in traditional machine translation, the alignment is not considered to be a latent variable. Instead, the alignment model directly computes a soft alignment, which allows the gradient of the cost function to be backpropagated through. This gradient can be used to train the alignment model as well as the whole translation model jointly. We can understand the approach of taking a weighted sum of all the annotations as computing an expected annotation, where the expectation is over possible alignments. Let $\alpha_{ij}$ be a probability that the target word $y_i$ is aligned to, or translated from, a source word $x_j$. Then, the $i$-th context vector $c_i$ is the expected annotation over all the annotations with probabilities $\alpha_{ij}$. The probability $\alpha_{ij}$, or its associated energy $e_{ij}$, reflects the importance of the annotation $h_j$ with respect to the previous hidden state $s_{i-1}$ in deciding the next state $s_i$ and generating $y_i$. Intuitively, this implements a mechanism of attention in the decoder. 
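To make the decoder-side computation above concrete, here is a minimal NumPy sketch of a single attention step: it scores each annotation with a small one-hidden-layer feedforward alignment model, turns the scores into weights $\alpha_{ij}$ with a softmax, and forms the context vector $c_i$ as the weighted sum of the annotations. The sizes, random weights, and the particular tanh parametrization of $a$ are illustrative assumptions, not the paper's actual trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only: T_x source positions, n annotation dims, m decoder state dims.
T_x, n, m, attn_dim = 7, 6, 5, 8

h = rng.normal(size=(T_x, n))       # annotations h_1..h_Tx from the encoder
s_prev = rng.normal(size=m)         # previous decoder state s_{i-1}

# Alignment model a(s_{i-1}, h_j): a single-hidden-layer feedforward net (assumed form).
W_a = rng.normal(size=(attn_dim, m))
U_a = rng.normal(size=(attn_dim, n))
v_a = rng.normal(size=attn_dim)

e = np.array([v_a @ np.tanh(W_a @ s_prev + U_a @ h_j) for h_j in h])  # scores e_ij

alpha = np.exp(e - e.max())          # softmax over source positions
alpha /= alpha.sum()                 # weights alpha_ij, summing to 1

c = alpha @ h                        # context vector c_i = sum_j alpha_ij h_j

print("weights:", np.round(alpha, 3), "sum:", alpha.sum())
print("context vector shape:", c.shape)   # (n,)
```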
The decoder decides parts of the source sentence to pay attention to. By letting the decoder have an attention mechanism, we relieve the encoder from the burden of having to encode all information in the source sentence into a fixed-length vector. With this new approach the information can be spread throughout the sequence of annotations, which can be selectively retrieved by the decoder accordingly. ### Encoder: Bidirectional RNN for Annotating Sequences The usual RNN, described by $h_t = f(x_t, h_{t-1})$ above, reads an input sequence $\mathbf{x}$ in order starting from the first symbol $x_1$ to the last one $x_{T_x}$. However, in the proposed scheme, we would like the annotation of each word to summarize not only the preceding words, but also the following words. Hence, we propose to use a bidirectional RNN (BiRNN; Schuster and Paliwal, 1997), which has been successfully used recently in speech recognition (see, e.g., Graves et al., 2013). A BiRNN consists of forward and backward RNNs. The forward RNN $\overrightarrow{f}$ reads the input sequence as it is ordered (from $x_1$ to $x_{T_x}$) and calculates a sequence of forward hidden states $(\overrightarrow{h}_1, \cdots, \overrightarrow{h}_{T_x})$. The backward RNN $\overleftarrow{f}$ reads the sequence in the reverse order (from $x_{T_x}$ to $x_1$), resulting in a sequence of backward hidden states $(\overleftarrow{h}_1, \cdots, \overleftarrow{h}_{T_x})$. We obtain an annotation for each word $x_j$ by concatenating the forward hidden state $\overrightarrow{h}_j$ and the backward one $\overleftarrow{h}_j$, i.e., $h_j = \left[\overrightarrow{h}_j^\top; \overleftarrow{h}_j^\top\right]^\top$. In this way, the annotation $h_j$ contains the summaries of both the preceding words and the following words. Due to the tendency of RNNs to better represent recent inputs, the annotation $h_j$ will be focused on the words around $x_j$. This sequence of annotations is used by the decoder and the alignment model later to compute the context vector (the weighted-sum and softmax equations above). See the figure above for the graphical illustration of the proposed model. Experiment Settings -------------------------------- We evaluate the proposed approach on the task of English-to-French translation. We use the bilingual, parallel corpora provided by ACL WMT ’14. [Footnote: <http://www.statmt.org/wmt14/translation-task.html>] As a comparison, we also report the performance of an RNN Encoder–Decoder which was proposed recently by Cho et al. (2014). We use the same training procedures and the same dataset for both models. [Footnote: Implementations are available at <https://github.com/lisa-groundhog/GroundHog>.] [Figure: The BLEU scores of the generated translations on the test set with respect to the lengths of the sentences. The results are on the full test set which includes sentences having unknown words to the models.] ### Dataset WMT ’14 contains the following English-French parallel corpora: Europarl (61M words), news commentary (5.5M), UN (421M) and two crawled corpora of 90M and 272.5M words respectively, totaling 850M words. Following the procedure described in Cho et al. (2014), we reduce the size of the combined corpus to have 348M words using the data selection method by Axelrod et al. (2011). [Footnote: Available online at <http://www-lium.univ-lemans.fr/~schwenk/cslm_joint_paper/>.] We do not use any monolingual data other than the mentioned parallel corpora, although it may be possible to use a much larger monolingual corpus to pretrain an encoder. 
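As a companion to the decoder sketch above, here is a similarly hedged NumPy illustration of the bidirectional encoder just described: a forward and a backward pass over (here randomly generated) word vectors, with the annotation $h_j$ formed by concatenating the forward and backward hidden states at position $j$. Plain tanh recurrences stand in for the gated hidden units the paper actually uses, and all sizes are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes only: T_x source words, d-dim word vectors, n hidden units per direction.
T_x, d, n = 5, 4, 3
x = rng.normal(size=(T_x, d))        # embedded source words x_1..x_Tx

# Simple tanh RNN cells stand in for the paper's gated units, one per direction.
W_f, U_f = rng.normal(size=(n, d)), rng.normal(size=(n, n))
W_b, U_b = rng.normal(size=(n, d)), rng.normal(size=(n, n))

def run_rnn(inputs, W, U):
    hs, h = [], np.zeros(n)
    for x_t in inputs:
        h = np.tanh(W @ x_t + U @ h)
        hs.append(h)
    return hs

fwd = run_rnn(x, W_f, U_f)               # forward states, reading x_1 .. x_Tx
bwd = run_rnn(x[::-1], W_b, U_b)[::-1]   # backward states, re-aligned to positions 1..Tx

# Annotation h_j = [forward_j ; backward_j], one per source word.
annotations = np.stack([np.concatenate([f, b]) for f, b in zip(fwd, bwd)])
print(annotations.shape)   # (T_x, 2n) -> (5, 6)
```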
We concatenate news-test-2012 and news-test-2013 to make a development (validation) set, and evaluate the models on the test set (news-test-2014) from WMT ’14, which consists of 3003 sentences not present in the training data. After a usual tokenization [Footnote: We used the tokenization script from the open-source machine translation package, Moses.], we use a shortlist of 30,000 most frequent words in each language to train our models. Any word not included in the shortlist is mapped to a special token ([UNK]). We do not apply any other special preprocessing, such as lowercasing or stemming, to the data. [Figure: Four sample alignments found by RNNsearch-50. The x-axis and y-axis of each plot correspond to the words in the source sentence (English) and the generated translation (French), respectively. Each pixel shows the weight $\alpha_{ij}$ of the annotation of the $j$-th source word for the $i$-th target word, in grayscale (0: black, 1: white). (a) an arbitrary sentence. (b–d) three randomly selected samples among the sentences without any unknown words and of length between 10 and 20 words from the test set.] ### Models We train two types of models. The first one is an RNN Encoder–Decoder (RNNencdec; Cho et al., 2014), and the other is the proposed model, to which we refer as RNNsearch. We train each model twice: first with the sentences of length up to 30 words (RNNencdec-30, RNNsearch-30) and then with the sentences of length up to 50 words (RNNencdec-50, RNNsearch-50). The encoder and decoder of the RNNencdec have 1000 hidden units each. [Footnote: In this paper, by a ‘hidden unit’, we always mean the gated hidden unit described in the appendix.] The encoder of the RNNsearch consists of forward and backward recurrent neural networks (RNN) each having 1000 hidden units. Its decoder has 1000 hidden units. In both cases, we use a multilayer network with a single maxout (Goodfellow et al., 2013) hidden layer to compute the conditional probability of each target word (Pascanu et al., 2014). We use a minibatch stochastic gradient descent (SGD) algorithm together with Adadelta (Zeiler, 2012) to train each model. Each SGD update direction is computed using a minibatch of 80 sentences. We trained each model for approximately 5 days. Once a model is trained, we use a beam search to find a translation that approximately maximizes the conditional probability (see, e.g., Graves, 2012; Boulanger-Lewandowski et al., 2013). Sutskever et al. (2014) used this approach to generate translations from their neural machine translation model. For more details on the architectures of the models and the training procedure used in the experiments, see the appendices. Results -------------------- ### Quantitative Results

| Model | All | No UNK∘ |
| --- | --- | --- |
| RNNencdec-30 | 13.93 | 24.19 |
| RNNsearch-30 | 21.50 | 31.44 |
| RNNencdec-50 | 17.82 | 26.71 |
| RNNsearch-50 | 26.75 | 34.16 |
| RNNsearch-50⋆ | 28.45 | 36.15 |
| Moses | 33.30 | 35.63 |

Table: BLEU scores of the trained models computed on the test set. The second and third columns show respectively the scores on all the sentences and, on the sentences without any unknown word in themselves and in the reference translations. Note that RNNsearch-50⋆ was trained much longer until the performance on the development set stopped improving. 
(∘) We disallowed the models from generating [UNK] tokens when only the sentences having no unknown words were evaluated (last column).

In the table above, we list the translation performances measured in BLEU score. It is clear from the table that in all the cases, the proposed RNNsearch outperforms the conventional RNNencdec. More importantly, the performance of the RNNsearch is as high as that of the conventional phrase-based translation system (Moses) when only the sentences consisting of known words are considered. This is a significant achievement, considering that Moses uses a separate monolingual corpus (418M words) in addition to the parallel corpora we used to train the RNNsearch and RNNencdec.

One of the motivations behind the proposed approach was the use of a fixed-length context vector in the basic encoder–decoder approach. We conjectured that this limitation may cause the basic encoder–decoder approach to underperform with long sentences. In the figure plotting BLEU against sentence length, we see that the performance of RNNencdec dramatically drops as the length of the sentences increases. On the other hand, both RNNsearch-30 and RNNsearch-50 are more robust to the length of the sentences. RNNsearch-50, especially, shows no performance deterioration even with sentences of length 50 or more. This superiority of the proposed model over the basic encoder–decoder is further confirmed by the fact that the RNNsearch-30 even outperforms RNNencdec-50 (see the table above).

### Qualitative Analysis

#### Alignment

The proposed approach provides an intuitive way to inspect the (soft-)alignment between the words in a generated translation and those in a source sentence. This is done by visualizing the annotation weights $\alpha_{ij}$ from the annotation-weight equation, as in the alignment plots above. Each row of a matrix in each plot indicates the weights associated with the annotations. From this we see which positions in the source sentence were considered more important when generating the target word.

We can see from these alignments that the alignment of words between English and French is largely monotonic: we see strong weights along the diagonal of each matrix. However, we also observe a number of non-trivial, non-monotonic alignments. Adjectives and nouns are typically ordered differently between French and English, and we see an example in panel (a). From this panel, we see that the model correctly translates the phrase [European Economic Area] into [zone économique européenne]. The RNNsearch was able to correctly align [zone] with [Area], jumping over the two words ([European] and [Economic]), and then looked back one word at a time to complete the whole phrase [zone économique européenne].

The strength of the soft-alignment, as opposed to a hard-alignment, is evident, for instance, from panel (d). Consider the source phrase [the man], which was translated into [l’ homme]. Any hard alignment will map [the] to [l’] and [man] to [homme]. This is not helpful for translation, as one must consider the word following [the] to determine whether it should be translated into [le], [la], [les] or [l’]. Our soft-alignment solves this issue naturally by letting the model look at both [the] and [man], and in this example, we see that the model was able to correctly translate [the] into [l’]. We observe similar behaviors in all the cases presented in the alignment plots.
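To make the quantities in these plots concrete, the following is a minimal, illustrative sketch (not the authors' GroundHog/Theano implementation) of how the annotation weights and the context vector can be computed from the encoder annotations. The single-hidden-layer feedforward scorer and the specific dimensions are assumptions chosen purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAlignment(nn.Module):
    """Score each source annotation h_j against the previous decoder state s_{i-1},
    normalise the scores into weights alpha_ij with a softmax, and return the
    context vector c_i = sum_j alpha_ij * h_j."""

    def __init__(self, dec_dim=1000, enc_dim=2000, align_dim=1000):
        super().__init__()
        self.W_s = nn.Linear(dec_dim, align_dim, bias=False)   # projects s_{i-1}
        self.U_h = nn.Linear(enc_dim, align_dim, bias=False)   # projects each h_j
        self.v = nn.Linear(align_dim, 1, bias=False)           # scalar score e_ij

    def forward(self, s_prev, annotations):
        # s_prev: (batch, dec_dim); annotations: (batch, T_x, enc_dim),
        # where each annotation concatenates forward and backward encoder states.
        scores = self.v(torch.tanh(
            self.W_s(s_prev).unsqueeze(1) + self.U_h(annotations)))  # (batch, T_x, 1)
        alpha = F.softmax(scores.squeeze(-1), dim=-1)                  # (batch, T_x)
        context = torch.bmm(alpha.unsqueeze(1), annotations).squeeze(1)
        return context, alpha  # alpha is one row of the matrices visualised above
```

Plotting the successive `alpha` vectors as rows of a matrix reproduces the kind of grayscale alignment visualisation described in this section.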
An additional benefit of the soft alignment is that it naturally deals with source and target phrases of different lengths, without requiring a counter-intuitive way of mapping some words to or from nowhere ([NULL]) (see, e.g., Chapters 4 and 5 of Koehn, 2010).

#### Long Sentences

As is clearly visible from the figure plotting BLEU against sentence length, the proposed model (RNNsearch) is much better than the conventional model (RNNencdec) at translating long sentences. This is likely due to the fact that the RNNsearch does not require encoding a long sentence into a fixed-length vector perfectly, but only accurately encoding the parts of the input sentence that surround a particular word.

As an example, consider this source sentence from the test set:

> An admitting privilege is the right of a doctor to admit a patient to a hospital or a medical centre to carry out a diagnosis or a procedure, based on his status as a health care worker at a hospital.

The RNNencdec-50 translated this sentence into:

> Un privilège d’admission est le droit d’un médecin de reconnaître un patient à l’hôpital ou un centre médical d’un diagnostic ou de prendre un diagnostic en fonction de son état de santé.

The RNNencdec-50 correctly translated the source sentence until [a medical center]. However, from there on (starting at [d’un diagnostic]), it deviated from the original meaning of the source sentence. For instance, it replaced [based on his status as a health care worker at a hospital] in the source sentence with [en fonction de son état de santé] (“based on his state of health”).

On the other hand, the RNNsearch-50 generated the following correct translation, preserving the whole meaning of the input sentence without omitting any details:

> Un privilège d’admission est le droit d’un médecin d’admettre un patient à un hôpital ou un centre médical pour effectuer un diagnostic ou une procédure, selon son statut de travailleur des soins de santé à l’hôpital.

Let us consider another sentence from the test set:

> This kind of experience is part of Disney’s efforts to ”extend the lifetime of its series and build new relationships with audiences via digital platforms that are becoming ever more important,” he added.

The translation by the RNNencdec-50 is

> Ce type d’expérience fait partie des initiatives du Disney pour ”prolonger la durée de vie de ses nouvelles et de développer des liens avec les lecteurs numériques qui deviennent plus complexes.

As with the previous example, the RNNencdec began deviating from the actual meaning of the source sentence after generating approximately 30 words (beginning at [lecteurs numériques]). After that point, the quality of the translation deteriorates, with basic mistakes such as the lack of a closing quotation mark.

Again, the RNNsearch-50 was able to translate this long sentence correctly:

> Ce genre d’expérience fait partie des efforts de Disney pour ”prolonger la durée de vie de ses séries et créer de nouvelles relations avec des publics via des plateformes numériques de plus en plus importantes”, a-t-il ajouté.

In conjunction with the quantitative results presented already, these qualitative observations confirm our hypothesis that the RNNsearch architecture enables far more reliable translation of long sentences than the standard RNNencdec model.
In the appendix on long-sentence translations, we provide a few more sample translations of long source sentences generated by the RNNencdec-50, RNNsearch-50 and Google Translate, along with the reference translations.

Related Work
-------------

### Learning to Align

A similar approach of aligning an output symbol with an input symbol was proposed recently by Graves (2013) in the context of handwriting synthesis. Handwriting synthesis is a task where the model is asked to generate handwriting for a given sequence of characters. In his work, he used a mixture of Gaussian kernels to compute the weights of the annotations, where the location, width and mixture coefficient of each kernel were predicted from an alignment model. More specifically, his alignment was restricted to predict the location such that the location increases monotonically.

The main difference from our approach is that, in Graves (2013), the modes of the weights of the annotations only move in one direction. In the context of machine translation, this is a severe limitation, as (long-distance) reordering is often needed to generate a grammatically correct translation (for instance, English-to-German).

Our approach, on the other hand, requires computing the annotation weight of every word in the source sentence for each word in the translation. This drawback is not severe for the task of translation, in which most input and output sentences are only 15–40 words. However, this may limit the applicability of the proposed scheme to other tasks.

### Neural Networks for Machine Translation

Since Bengio et al. (2003) introduced a neural probabilistic language model which uses a neural network to model the conditional probability of a word given a fixed number of the preceding words, neural networks have been widely used in machine translation. However, the role of neural networks has been largely limited to simply providing a single feature to an existing statistical machine translation system or to re-ranking a list of candidate translations provided by an existing system.

For instance, Schwenk (2012) proposed using a feedforward neural network to compute the score of a pair of source and target phrases and to use the score as an additional feature in a phrase-based statistical machine translation system. More recently, Kalchbrenner and Blunsom (2013) and Devlin et al. (2014) reported the successful use of neural networks as a sub-component of an existing translation system. Traditionally, a neural network trained as a target-side language model has been used to rescore or rerank a list of candidate translations (see, e.g., Schwenk et al., 2006).

Although the above approaches were shown to improve the translation performance over the state-of-the-art machine translation systems, we are more interested in the more ambitious objective of designing a completely new translation system based on neural networks. The neural machine translation approach we consider in this paper is therefore a radical departure from these earlier works. Rather than using a neural network as a part of an existing system, our model works on its own and generates a translation from a source sentence directly.

Conclusion
-----------

The conventional approach to neural machine translation, called an encoder–decoder approach, encodes a whole input sentence into a fixed-length vector from which a translation will be decoded.
We conjectured that the use of a fixed-length context vector is problematic for translating long sentences, based on a recent empirical study reported by Cho et al. (2014a) and Pouget-Abadie et al. (2014).

In this paper, we proposed a novel architecture that addresses this issue. We extended the basic encoder–decoder by letting a model (soft-)search for a set of input words, or their annotations computed by an encoder, when generating each target word. This frees the model from having to encode a whole source sentence into a fixed-length vector, and also lets the model focus only on information relevant to the generation of the next target word. This has a major positive impact on the ability of the neural machine translation system to yield good results on longer sentences. Unlike the traditional machine translation systems, all of the pieces of the translation system, including the alignment mechanism, are jointly trained towards a better log-probability of producing correct translations.

We tested the proposed model, called RNNsearch, on the task of English-to-French translation. The experiment revealed that the proposed RNNsearch outperforms the conventional encoder–decoder model (RNNencdec) significantly, regardless of the sentence length, and that it is much more robust to the length of a source sentence. From the qualitative analysis in which we investigated the (soft-)alignment generated by the RNNsearch, we were able to conclude that the model can correctly align each target word with the relevant words, or their annotations, in the source sentence as it generates a correct translation.

Perhaps more importantly, the proposed approach achieved a translation performance comparable to that of the existing phrase-based statistical machine translation system. This is a striking result, considering that the proposed architecture, and the whole family of neural machine translation approaches, has only been proposed as recently as this year. We believe the architecture proposed here is a promising step toward better machine translation and a better understanding of natural languages in general.

One of the challenges left for the future is to better handle unknown or rare words. This will be required for the model to be more widely used and to match the performance of current state-of-the-art machine translation systems in all contexts.

Acknowledgments
---------------

The authors would like to thank the developers of Theano (Bergstra et al., 2010; Bastien et al., 2012). We acknowledge the support of the following agencies for research funding and computing support: NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. Bahdanau thanks Planet Intelligent Systems GmbH for its support. We also thank Felix Hill, Bart van Merriënboer, Jean Pouget-Abadie, Coline Devin and Tae-Ho Kim.
88bb7c96-8e2a-47da-9bba-fdbea9630ec3
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Before smart AI, there will be many mediocre or specialized AIs **Summary:** * In the current paradigm, training is much more expensive than inference. So whenever we finish end-to-end training a language model, we can run a lot of them in parallel. + If a language model was trained with Chinchilla scaling laws on the FLOP-equivalent of a large fraction of the world’s current GPU and TPUs: I estimate that the training budget could produce at least ~20 million tokens per second. + Larger models trained on more data would support *more* tokens per second. * Language models can also run faster than humans. Current models generate 10-100 tokens per second. It’s unclear whether future models will be slower or faster. * This suggests that, before AI changes the world via being broadly superior to human experts, it will change the world via providing *a lot* of either mediocre (by the standard of human experts) or specialized thinking. * This might make the early alignment problem easier. But the full alignment problem will come soon thereafter, in calendar-time, so this mainly matters if we can use the weaker AI to buy time or make progress on alignment. More expensive AI → you can run more AIs with your training budget ------------------------------------------------------------------ (...assuming that we’re making them more expensive by increasing parameter-count and training data.) We’re currently in a paradigm where: * Training isn’t very sample-efficient. * When increasing capabilities, training costs increase faster (~squared) than inference costs. * Training is massively parallelizable.[[1]](#fn-nvcjdBmk3HRfbZFJd-1) While this paradigm holds, it implies that the most capable models will be trained using massively parallelized training schemes, equivalent to running a large number of models in parallel. The larger the model, the more data it needs, and so more copies of them will have to be run in parallel during training, in order to finish within a reasonable time-frame.[[2]](#fn-nvcjdBmk3HRfbZFJd-2) This means that, once you have trained a highly capable model, you are guaranteed to have the resources to run a huge number of them in parallel. And the bigger and more expensive the model was — the more of them can run in parallel on your training cluster. Here’s a rough calculation of how many language models you can run in parallel using just your training cluster: * Let’s say you use p parameters. * Running the model for one token takes kp FLOP, for some k. * Chinchilla scaling laws say training data is proportional to parameters, implying that the model is trained for mp tokens. + For Chinchilla, m=20 tokens / parameter. * Total training costs are 3kmp^2. + The 3 is there because backpropagation is [~2x as expensive as forward propagation](https://epochai.org/blog/backward-forward-FLOP-ratio). * You spend N seconds training your model. * During training, you use (3kmp^2/N) FLOP/s, and at inference you can run one model for kp FLOP/s. **So using just your training compute, you can run** (3kmp^2/N)/(kp) = **3mp/N tokens per second**, just by reallocating your training compute to inference. If you take a [horizon-length framework](https://docs.google.com/document/d/1PaYOh_9BAYEm3RfpeX0G-cvs5JxGns98IsVK061jqRQ/edit#heading=h.56fpc2pq7ziw) seriously, you might expect that we’ll need more training data to handle longer-horizon tasks. Let’s introduce a parameter H that describes how many token-equivalents correspond to one data-point. * Total training costs are now 3kmHp^2. 
* So with the compute you used to train your models, you can process 3mpH/N token-equivalents per second. Some example numbers (bolded ones are changed from the top one): * For p=1e14, N=1y, H=1, m=20, the above equation says you can process 200 million token-equivalents per second, with just your training budget. * For **p=1e15**, N=1y, H=1, m=20, it’s ~2 billion token-equivalents/second. * For p=1e14, **N=3 months**, **H=1 hour**, m=20, it’s ~1 trillion token-equivalents/second.. In addition, there are various tricks for lowering inference costs. For example, reducing precision (which is less important during training than inference) and knowledge distillation; see [here](https://lilianweng.github.io/posts/2023-01-10-inference-optimization/) for more discussion. These would further increase the number of models you can run in parallel. A rough lower bound for number of AIs the world could run --------------------------------------------------------- The bigger the training run, the more AIs you can run with your training cluster. Conversely, if human-level AI comes earlier, with smaller training runs, you’ll be able to run fewer of them with your training cluster. On the other hand, if a training run is very small, then it’s only using a small fraction of the world’s compute. This means that there’s a lot of room to run many models in parallel just by acquiring more compute. (It would certainly be economically efficient for a large fraction of the world’s compute to run AI systems, if we did have human-level AI — whether that happens via the developers+investors buying more compute, the developers selling their software, a government seizing the software, or some other way.) Today, there’s about 4e21 FLOP/s out there in the form of GPUs and TPUs ([source](https://wiki.aiimpacts.org/doku.php?id=ai_timelines:hardware_and_ai_timelines:computing_capacity_of_all_gpus_and_tpus)). Let’s assume that the world would want to run ~human-level AI systems on at least 25% of that (1e21 FLOP/s), given the option. If so, we can get a rough lower bound on how many ~human-level AIs could be run shortly after training by looking at the number of AIs you could run after training an AI on 1e21 FLOP/s, run for a year: * Let’s say… + k = 4 FLOP/parameter/token. - [This](https://epochai.org/blog/estimating-training-compute) suggests 2 FLOP/parameter. - I increase to 4 to account for GPUs only having 50% utilization. + m = 20 datapoint/parameter. (Based on Chinchilla.) + H = 1 token-equivalent/datapoint. + N = 3e7 seconds. (A year.) * This means that… + 3kmHp^2 = 1e21*N ⇔ p = sqrt(1e21*N/(3kmH)) ~= 1.1e13 * And the number of models you can run in parallel is: + 1e21/kp = 1e21/(4\*1.6e13) ~= 23 million token-equivalents per second. Some caveats in the footnote.[[3]](#fn-nvcjdBmk3HRfbZFJd-3) Serial vs parallel ------------------ It’s not clear that you can parallelize tasks well enough to make efficient use of 23 million parallel models. To what degree is it possible to run these AIs *fast*, so that we get them in series after each other? I don’t understand this super-well. Some relevant information: * Jacob Steinhardt [suggests](https://bounded-regret.ghost.io/how-fast-can-we-perform-a-forward-pass/) 1400 tokens per second for the Chinchilla model (assuming at least 40% GPU utilization), and that increased depth would make this linearly slower, but that width wouldn’t change it at all. 
+ Has an erratum saying “I believe that the overall asymptotics below are correct, but the final numbers could plausibly be off by up to an order of magnitude.“ * I think [Pope et al. (2022)](https://arxiv.org/pdf/2211.05102.pdf) is the public state of the art in inference speed, with minimum reported latency for PaLM 540B being 29ms ~= 34 tokens per second. + The speed is mainly bottlenecked by bandwidth. I’m unsure if the analysis says that latency would only increase with depth or also somewhat with width.[[4]](#fn-nvcjdBmk3HRfbZFJd-4) + Palm only has 1.5x as many layers as Chinchilla,[[5]](#fn-nvcjdBmk3HRfbZFJd-5) so this is much slower than Steinhardt’s analysis suggests. + Anecdotal reports about the GPT API are consistent with these slower speeds. The GPT-4 API typically delivers 20 tokens or less per second. (Though potentially up to 40 sometimes?) Though GPT-3.5 Turbo is much faster. * How much will depth increase in the future? + According to [Levine et al. (2021)](https://arxiv.org/pdf/2006.12467.pdf), transformers can be scaled a lot without getting much deeper, e.g. it would be fine to increase parameter-count by a factor of 100x while increasing depth by less than 2x. (I’ve done no due diligence on whether the paper is good, but its results are used by the Chinchilla authors.) + [Kaplan (2020)](https://arxiv.org/pdf/2001.08361.pdf) says “width/depth should remain fixed” which would imply that depth is proportional to the p^(1/3), because parameters are proportional to the depth\*width^2. - However, it continues: “But more importantly, we find that the precise architectural hyperparameters are unimportant compared to the overall scale of the language model”, which suggests that people could hold off on scaling depth if they were concerned about latency. + So depth will probably increase somewhere between “not at all” and as p(1/3). * Better hardware will probably lead to lower latency. E.g. the newest generation of NVIDIA hardware has increased bandwidth as well as some other potentially speed-increasing improvements. (E.g. supporting FP-8 computation.) * The [above-mentioned tricks](https://lilianweng.github.io/posts/2023-01-10-inference-optimization/) for reducing inference cost could also give you faster inference speeds. In addition to those, there’s also the option of running faster models to predict easy tokens, and then running larger models on multiple tokens at once. And if you’re willing, you can reduce hardware utilization to get further speed-ups in latency. (The Steinhardt post claims that reducing utilization by k gets you a k^2.) In short: We’re currently at 30-40 tokens per second, which will be reduced by bigger model sizes, increased by future hardware, and increased by better techniques. This is all for *generating* tokens. Reading content into the context window doesn’t add latency, since the entire context window can be processed in parallel. (Combining this with parallelism is interesting. An AI could split into 10 copies, investigate 10 different lines of thoughts, and then instantly merge and read all thoughts so-far — and then repeat.) I feel pretty unsure about how that adds up. But if well-optimized future models (running on future hardware) could operate at, say, ~50 tokens per second, then 23 million tokens per second would correspond to ~500,000 separate streams of 50 tokens/second. 
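The back-of-the-envelope estimate above can be reproduced in a few lines. All constants are the post's own assumptions (k, m, H, N, the 1e21 FLOP/s budget, and a ~50 tokens/s serial speed), and small differences from the quoted figures come from rounding.

```python
import math

compute_rate = 1e21  # FLOP/s assumed available for running AI (25% of ~4e21 FLOP/s)
k = 4                # inference FLOP per parameter per token (2, doubled for ~50% utilization)
m = 20               # training tokens per parameter (Chinchilla)
H = 1                # token-equivalents per training datapoint
N = 3e7              # training run length in seconds (~1 year)

total_training_flop = compute_rate * N                 # ~3e28 FLOP
# Solve 3*k*m*H*p^2 = total_training_flop for the parameter count p.
p = math.sqrt(total_training_flop / (3 * k * m * H))   # ~1.1e13 parameters
tokens_per_second = compute_rate / (k * p)             # ~2.3e7 token-equivalents/s

serial_speed = 50                                      # assumed tokens/s per stream
streams = tokens_per_second / serial_speed             # roughly half a million streams

print(f"parameters: {p:.2e}")
print(f"token-equivalents per second: {tokens_per_second:.2e}")
print(f"parallel streams at {serial_speed} tok/s: {streams:,.0f}")
```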
Implications ------------ The above numbers suggest that (as long as sample efficiency doesn’t significantly improve) the world will always have enough compute to produce at least 23 million token-equivalents per second from any model that the world can afford to train (end-to-end, chinchilla-style). Notably, these are many more token-equivalents per second than we currently have human-AI-researcher-seconds per second. (And the AIs would have the further advantage of having much faster serial speeds.) So once an AI system trained end-to-end can produce similarly much value per token as a human researcher can produce per second, AI research will be *more* than fully automated. This means that, when AI *first* contributes more to AI research than humans do, the average research progress produced by 1 token of output will be significantly less than an average human AI researcher produces in a second of thinking.[[6]](#fn-nvcjdBmk3HRfbZFJd-6) Instead, the collective’s intelligence will largely come from a combination of things like: * Individual systems “thinking” for a long time, churning through many more explicit thoughts than a skilled human would need to solve a problem.[[7]](#fn-nvcjdBmk3HRfbZFJd-7) * Splitting up things in more granular subtasks, delegating them to other AI systems. * Generating huge numbers of possible solutions, and evaluating them all before picking one. Assuming that much of this happens “behind the scenes”, a human interacting with this system might just perceive it as a single super-smart AI. Nevertheless, I think this means that AI will be more alignable at a fixed level of productivity. (Eventually, we’ll face the full alignment problem — but “more alignable at a fixed level of productivity” helps if we can use that productivity for something useful, such as giving us more time or [helping us with alignment research](https://www.lesswrong.com/posts/KwQYsF4XFtPqjgwvH/some-thoughts-on-automating-alignment-research-1).) Most obviously, the token-by-token output of a single AI system should be quite easy for humans to supervise and monitor for danger. It will rarely contain any implicit cognitive leaps that a human couldn’t have generated themselves. (C.f. [visible thoughts project](https://www.lesswrong.com/posts/zRn6cLtxyNodudzhw/visible-thoughts-project-and-bounty-announcement) and [translucent thoughts hypothesis](https://www.lesswrong.com/posts/r3xwHzMmMf25peeHE/the-translucent-thoughts-hypotheses-and-their-implications).) But what about *collectives* of AIs, or AIs thinking for a long period of time? If people get capability-boosts by fine-tuning such systems end-to-end, then the situation looks quite different. Perhaps it will prove beneficial to finetune such systems to communicate with each other using uninterpretable vector embeddings. Or even if they keep using English, they might start using [steganography](https://www.lesswrong.com/posts/yDcMDJeSck7SuBs24/steganography-in-chain-of-thought-reasoning). There are still a few reasons for why this situation seems safer (at a fixed level of AI capability) than it could have been: * Perhaps end-to-end SGD won’t have a big advantage over [process-based](https://www.lesswrong.com/posts/pYcFPMBtQveAjcSfH/supervise-process-not-outcomes) methods, where humans fine-tune networks individually and glue them together in a way where each network’s output remains interpretable. After all, you can’t afford to do a lot of end-to-end training on the large collectives, since they’re so expensive to run. 
+ Supervised learning is generally more sample-efficient than RL, which is a good sign. + The AI systems themselves might be able to help with designing such collectives in a maximally efficient way.[[8]](#fn-nvcjdBmk3HRfbZFJd-8) * Even if people do end-to-end training, the representations passed between models need not *immediately* become useless. Perhaps there are ways to fight steganography. Intuitively, it at least seems like interpreting the almost-English should be easier than mechanistic interpretability of the neural networks. (Though that isn’t a high bar.) * Even if you ignore the internals of the collectives, it seems like process-based feedback might work unusually well in this regime. This one requires a bit more explanation. + Above, I gestured at “[process-based](https://www.lesswrong.com/posts/pYcFPMBtQveAjcSfH/supervise-process-not-outcomes)” as distinct from end-to-end training. But a weaker definition of process-based feedback (as distinct from outcomes-based feedback) is: You only ever train your AI to recommend suggested actions, and when deciding what feedback to give, you never test its suggestions in the real world. Instead, you make a decision by thinking carefully, potentially informed by a long investigation, including AI advice. (On episodes when you’re *not* providing feedback, you can implement the suggested actions without such detailed oversight.)[[9]](#fn-nvcjdBmk3HRfbZFJd-9) - Importantly, this outer objective doesn’t incentivize the AI collective to optimize the world in any way (other than via incentivizing solutions that look good to humans, who have human preferences about the world). - Ideally, it would get you a myopic/[act-based](https://ai-alignment.com/act-based-agents-8ec926c79e9c) agent. But it doesn’t come with a solution to inner alignment, so it definitely doesn’t guarantee safety. + The downside of this strategy is that it isn’t very competitive — e.g. if you’re serious about it, you might have to evaluate AI pull requests without testing the code, which is a serious downside. + But it seems like it should be unusually likely to be competitive when fine-tuning collectives of subhuman intelligences: - If the AI collective makes a good suggestion, there would typically exist a human-understandable decomposition of why that suggestion was good. (Or else how did the subhuman AIs generate it?) - The AI collective only needs fine-tuning data, so it’s not catastrophic if the human feedback is expensive to generate. - Most of the collective’s capabilities are already baked into the individual components. The purpose of the fine-tuning is just to make sure that those capabilities are directed in a productive direction. Intuitively, I feel like human feedback shouldn’t be much worse at this than outcomes-based feedback. A few caveats ------------- A big caveat to this is that AI and humans will have different distributions of capabilities.[[10]](#fn-nvcjdBmk3HRfbZFJd-10) If there are some topics on which AI is much, much better than humans, then humans might not understand AI’s reasoning about that when looking at token-by-token output (even *before* end-to-end training). And outcomes-based feedback might be necessary to elicit AI’s full capabilities on that topic. Indeed, it seems plausible that the story of AI automation won’t be one where many low-capability AIs combine to be human-ish. Instead, it might be that AI automates one task at a time, and that use cases where AI isn’t at least as good as humans aren’t ever that important (c.f. 
Tom Davidson’s [takeoff speeds model](https://www.lesswrong.com/posts/Gc9FGtdXhK9sCSEYu/what-a-compute-centric-framework-says-about-ai-takeoff) and Richard Ngo’s [framework](https://www.lesswrong.com/posts/BoA3agdkAzL6HQtQP/clarifying-and-predicting-agi)). This would also have implications for the shape of early alignment, and whether early AI systems would help with later alignment — but the analysis might be quite different, and involve thinking in detail about what sort of tasks are likely to be automated in what order. I’d be interested in such analysis. … **Acknowledgements**: Thanks to Tom Davidson and Daniel Kokotajlo for comments. I work at Open Philanthropy but the views here are my own. Notes ----- --- 1. Non-parallelizable training wouldn’t exactly contradict the conclusions here, but it would change what arguments I’d use for them, and it would make the world into a weirder place. (E.g. extra compute wouldn’t help to make smarter models, beyond a point, and AI progress would instead be mostly driven by software, serial time (!) necessary to train models, and maybe inference-time compute, if that was more parallelizable.) [↩︎](#fnref-nvcjdBmk3HRfbZFJd-1) 2. According to [The longest training run](https://www.lesswrong.com/posts/RihYwmskuJT9Rkbjq/the-longest-training-run): “Training runs of large Machine Learning systems are likely to last less than 14-15 months. This is because longer runs will be outcompeted by runs that start later and therefore use better hardware and better algorithms. “. [↩︎](#fnref-nvcjdBmk3HRfbZFJd-2) 3. In practice, many of the world’s GPUs wouldn’t be able to efficiently run large models like this, e.g. because of a lack of memory. 25% of the world’s compute is probably an overestimate. On the other hand, specialized hardware is much more important for training than for inference. So if FLOP-supply keeps being dominated by non-specialized hardware, this pushes for *more* token-equivalents per second, because there would probably be many GPUs you could run your model on that you couldn’t train them on. [↩︎](#fnref-nvcjdBmk3HRfbZFJd-3) 4. See page 6 for formula T<sub>comm</sub> = (√ BLF / √nchips) × 4E / network bandwidth. B is batch size; L is sequence length; F is the width-dimension of the feed-forward networks. E is the embedding/activation size. That’s *per layer*, so latency straightforwardly increases with more layers. But if you simultaneously scale the embedding dimension and the width of the feed-forward networks by 2x, I think you increase overall computation by 2^2=4x. That justifies increasing chips by 4x. But that leads to an overall change in T by (√2/√4) \* 2 = √2? So maybe scaling width by 2x increases latency by √2? [↩︎](#fnref-nvcjdBmk3HRfbZFJd-4) 5. Chinchilla has 80 [(Hoffmann et al., 2022)](https://arxiv.org/pdf/2203.15556.pdf). PaLM has 118 [(Chowdhery et al., 2022)](https://arxiv.org/pdf/2204.02311.pdf). [↩︎](#fnref-nvcjdBmk3HRfbZFJd-5) 6. This relies on an assumption that you can make up for lack-of-intelligence by numbers or speed. Without that assumption, you could expect that AI research will be dominated by humans until AIs finally “get it”, after which they’ll take over with a huge margin. [↩︎](#fnref-nvcjdBmk3HRfbZFJd-6) 7. Typical reading is ~300 wpm = 5 words per second. Typical speaking might be ~half that. [↩︎](#fnref-nvcjdBmk3HRfbZFJd-7) 8. 
One framing of this is: The reason why [the bitter lesson](http://www.incompleteideas.net/IncIdeas/BitterLesson.html) applied so strongly in the last few decades is plausibly that compute increased very quickly compared to researcher labor. If AI systems start contributing to AI research, that will correspond to a massive increase in researcher labor, which might reverse the trend. [↩︎](#fnref-nvcjdBmk3HRfbZFJd-8) 9. C.f. [this comment](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very?commentId=GMhJLt9FKL2GtJEbs). [↩︎](#fnref-nvcjdBmk3HRfbZFJd-9) 10. Though as long as the best pre-training task is to predict human text, they’ll be more similar than you might otherwise have expected. [↩︎](#fnref-nvcjdBmk3HRfbZFJd-10)
1645cce4-d588-4fe6-a7f1-b2ffc4be4b19
trentmkelly/LessWrong-43k
LessWrong
Human Mimicry Mainly Works When We’re Already Close What if we just simulate a bunch of alignment researchers, and have them solve the problem for us? Of all the Dumb Alignment Ideas, this one is easily the best. Simple argument in favor: well, it’s not going to do any worse than the researchers would have done. In other words, it will probably do at least as well as we would have done without it, and possibly better, insofar as it can run faster than realtime. Another angle: human mimicry is a simple objective to train against, and is about as outer-aligned as the humans being mimicked. Which isn’t necessarily perfect, but it’s as aligned as our alignment researchers were going to be anyway (assuming inner alignment issues are handled, which we will indeed assume for the entirety of this post). Those are pretty good arguments. But man, there are some subtle devils in the details. Simulation vs Prediction The ideal version of human mimicry is mind uploads: directly simulate our researchers in a stable, research-friendly environment for a long time. The operationalization which people usually actually have in mind is to train an ML system to predict research outputs - e.g. I might prompt GPT for a johnswentworth post from the year 2050. Even setting aside inner alignment issues, these two are radically different. Generalization Problems In order for GPT to generate a realistic johnswentworth post from the year 2050, it has to generalize way out of distribution. … Well, ok, maybe I turn into one of those old researchers who just repeats the same things over and over again for decades, and then GPT doesn’t need to generalize way out of distribution. But in that case it isn’t very helpful to prompt for one of my posts from 2050 anyways, and we should prompt for something else instead (Thane Ruthenis has been writing great stuff lately, maybe try him?). The whole point of asking for future research write-ups is to see useful stuff we have not yet figured out; that means generalizing way out of the distribution o
8e27e955-0afd-45e5-8c50-1a0dd28bca09
StampyAI/alignment-research-dataset/arxiv
Arxiv
Formalization, Mechanization and Automation of Gödel's Proof of God's Existence
e7bb8717-4883-4cfd-a6f6-8bb7a68be88a
trentmkelly/LessWrong-43k
LessWrong
Peter's COVID Consolidated Brief for 2 April Happy April! ...All I can think of is that it’s hard to imagine that just three weeks ago the world felt so different than it does today. I am still following COVID-19 a lot, so here’s my second semi-regular installment of a public consolidated brief that tries to consolidate everything I read into one short, actionable list so other people don’t have to re-create my work. This way I can save time and fight research debt. Maybe read this instead of spending a ton of your own time obsessing? (Though do be wary that I am not an expert by any means and may be off in my selection and interpretation.) I have a lot more reporting that I wanted to put in here but didn’t, due to lack of time. I will get them in the next issue and I will try to send out updates as fast as I can while maintaining a certain level of quality. I do hope news will slow down at some point. ...I certainly will slow down at some point. Previously: * 29 Mar Brief * My research questions (27 Mar) See also: * LessWrong links database * EA Coronavirus Facebook Group Doing Your Part! How You Can Stay Safe and Help the Fight! The Wikipedia page “2019-2020 Coronavirus pandemic” is currently the second most viewed Wikipedia page of all pages this week, with hundreds of thousands of page views. If you like reading links, it could be really helpful to add a few minutes to your day to join me in this fight and keep this and the associated pages up to date. Also update the LessWrong links database and the Coronavirus Tech Handbook. And if you see parallel efforts, make sure they are aware of each other. ~ I don’t want to wade too much into the Great Masks Debate, which feels too complicated for me to adequately analyze and summarize at the speed at which I am writing this. I am not an expert, but since some are asking, I will nonetheless briefly summarize my provisional opinion: * There is some evidence that the public should wear masks - even DIY cloth ones. See “The evidence for everyone we
ca7f4985-db92-4003-a70d-ceac7e680398
trentmkelly/LessWrong-43k
LessWrong
On The Current Status Of AI Dating In the past months there has been a number of posts and stories regarding AI dating. People tried (with various degrees of success) to simulate a romantic partner using LLMs[1] and the topic has been explored with various degrees of controversy in many discussion boards.  Even here on LW(https://www.lesswrong.com/posts/9kQFure4hdDmRBNdH/how-it-feels-to-have-your-mind-hacked-by-an-ai) there has been some discussion on the topic, and in my opinion we can classify the phenomenon in many different ways, but it's undeniable that many people started feeling (or believing to feel) emotional attachment of sorts to AIs.  And I, by reading all of this, slowly started becoming interested in the topic. I was curious about the details of the interaction, about whether I also could "fall for it", and about what it means exactly to be "dating" an AI.  Just for context: I'm currently living abroad alone. This means that my social interactions are happening mostly through online means; and I also have been single for the past three years. So I decided to perform some tests. The main contenders here are Replika, Character.ai and ChatGPT. There are a couple of other ones, but they all seemed way less developed, so I decided to focus my efforts on those three. Replika Replika was one of the first chatbots introduced on the market. It was marketed as a "virtual friend for lonely people" and used its own proprietary LLM. I remember seeing it a couple of years ago and trying it, but I uninstalled it after a couple of days because it felt dumb at the time. Now, while still being advertised as a virtual friend, the app clearly caters for a different audience. Their promotional materials are almost all oriented at a male audience, and they offer a girlfriend experience (with sexting) as a paid plan. The main feature that separates Replika from other bots is that it learns from the conversations you have with it. It learns to mimic your speech pattern, remembers some key moments from
c241de2d-9029-45a9-8a5b-695412017783
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
The space of systems and the space of maps When we're trying to do AI alignment, we're often studying systems which don't yet exist. This is a pretty weird epistemic activity, and seems really hard to get right. This post offers one frame for thinking about what we're actually doing when we're thinking about AI alignment: using parts of the space of maps to reason about parts of the space of intelligent systems. In this post, we: * Introduce a simple model of the epistemic situation, and * Share some desiderata for maps useful for alignment. We hope that the content is mostly the second kind of obvious: obvious once you see things in this way, which you maybe already do. In our experience, this comes with a risk: reading too fast, you may miss most of the nuance and useful insight the deceptively simple model brings, or come away with a version of the model which is rounded off to something less useful (i.e. "yeah, there is this map and territory distinction").  As a meta recommendation, we suggest reading this post slowly, and ideally immediately trying to apply the model to some confusion or disagreement about AI alignment.     ### The space of systems and the space of maps Imagine the space of possible intelligent systems: ![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/d4W4inHhts5Y2szBf/ms4kxthr6qd3s9jjaqwx) Two things seem especially important about this space: * It’s very large; much larger than the space of current systems.[[1]](#fnmlx5ubehf4f) * We don’t get direct epistemic access to it. + This is obviously true of systems which don’t currently exist. + In a weaker sense, it also seems true of systems which do exist. Even when we get to directly interact with a system:[[2]](#fnp4uo1qhikpj) - Our thinking about these parts of the space is still filtered through our past experiences, priors, predictive models, cultural biases, theories… - We often don’t understand the emergent complexity of the systems in question. If we don’t get direct epistemic access to the space of systems, what are we doing when we reason about it? Let’s imagine a second space, this time a space of “maps”: ![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/d4W4inHhts5Y2szBf/ufkojdf9cdixaxtbgpvp) The space of maps is an abstract representation of all the possible “maps” that can be constructed about the space of intelligent systems. **The maps are ways of thinking about (parts of) the space of systems.**For example: * Replicable descriptions of how a machine learning model works and was trained are a way of thinking about that model (a point in the space of intelligent systems). * An ethnographic study of a particular human community is a way of thinking about that community (another point in the space of systems). * The theory of evolution is a way of thinking about evolved creatures, including intelligent ones. * Expected utility theory is a way of thinking about some part of the space which may or may not include future AI systems. * Historical analysis of trends in technological development is a way of thinking about whichever parts of the space of intelligent systems are governed by similar dynamics to those governing past technological developments. 
**When we’re reasoning about intelligent systems, we’re using some part of the space of maps to think about some part of the space of intelligent systems:**[[3]](#fnnfn4cssro5) ![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/d4W4inHhts5Y2szBf/rmb8lfz5ruwlskvtiagz) *Different maps correspond to different regions of the space of intelligent systems.*  Of course, thinking in terms of the space of systems and the space of maps is a simplification. Some of the ways that reality is more complicated: * The space of systems looks different on different maps. * Maps can affect which parts of the space of systems actually get developed.[[4]](#fnpbg1xg5phta) * Maps are themselves embedded in the space of systems. * Which maps and systems actually exist at a given time is evolving and dynamic. * AI will play a big role in both the space of maps and the space of systems.  We think that the space of systems and the space of maps is a useful simplification which helps us to think more clearly about future AI systems. Some salient examples of how this simplification can help us think about future AI systems: * Disagreements are often driven by using different maps, or talking about different parts of the space of systems. * Thinking about the distinction and the interplay between the space of maps and the space of systems makes it more obvious that our actions and research directions influence which systems end up getting built, which seems like an important strategic consideration. ### What sorts of maps do we need for AI alignment? When it comes to AI alignment, we need accurate maps which hold for systems which don’t exist yet, and which are good enough to help us build these systems in ways that are safe. There are few different properties it would be good for these maps to have: * Generality/robustness: maps which cover large parts of the space of systems. * Precision: maps which are very detailed. * Accuracy: maps which actually correspond well to the parts of the space of systems which they are mapping. * Usefulness: maps which help us to chart paths toward states we want. * Probably other things too. And there are trade-offs here between the properties. For example: * It would be great to have very precise maps of advanced AI systems in particular, but this seems hard to do robustly. * Some theories are very general and cover very large parts of the space of systems (e.g. information theory), but alone these theories don’t tell us much about how to chart paths towards states we want. ### Finding maps which are useful for AI alignment A lot of AI alignment work involves taking maps that have been developed for thinking about one part of the space of systems, and applying them to a part of the space of systems that we hope includes “potentially dangerous future AI systems”. For example: * Experimental work often involves developing research methods for looking at existing AI systems that will (hopefully) scale to future AI systems. * Decision theory is a region in the space of maps that was built to model a (heavily idealised) human decision maker. Many of its ideas have been applied to possible future AI systems. * “Convergence” comes from evolutionary biology - a set of maps which has been built to think about biological systems. In a future post, we’ll try to apply these ideas to AI systems. Being aware of which maps you are using and their potential limitations for the systems you want to study seems super useful for doing good research. 
[[5]](#fnfonjbu46nbk) We don’t know that much about where in the space of systems potentially dangerous AI will be. As a result, one good bet seems to be to try and find maps that are general enough to cover everywhere in the space of systems that future AI could be.  * One way of making general maps is trying to decontextualise / generalise existing maps, by unpicking which features are specific to (~contingent on) the map in question, and which could generalise beyond the context in which they were originally built. * Another is to start with maps that are already pretty general (whilst still being accurate, confirmed by experiment, and falsifiable). This is one of the reasons why we are excited about active inference. Given that we care about aligning AI to humans and human collectives, it also seems useful for maps to cover these areas of system-space as well (or more specifically, to cover relations between the human part of system space and the “possible future AI systems” part of system space).  Finding general maps isn’t the only promising approach here: * Finding precise maps can also be a useful tactic in some contexts. + As there’s a tradeoff between generality and precision, it’s important to try to identify what features general maps are likely to miss - and what work is needed to fill in these (contingent) features. * Another strategy, which seems at the core of all existing sensible approaches to alignment, is to try to skilfully combine insights from multiple maps.[[6]](#fnjtfijehweyh) *The ideas in this post come variously from Jan, Nora and Clem (some ideas come from one person; others were independently generated by multiple people) or from an older FHI project on AGI epistemics done by Jan with Chris van Merwijk and Ondřej Bajgar. Rose did most of the writing.* 1. **[^](#fnrefmlx5ubehf4f)** See also [Design space of minds in general](https://www.alignmentforum.org/posts/tnWRXkcDi5Tw9rzXw/the-design-space-of-minds-in-general). 2. **[^](#fnrefp4uo1qhikpj)**See [this](https://www.alignmentforum.org/posts/FQqcejhNWGG8vHDch/on-solving-problems-before-they-appear-the-weird) post for another discussion of this sort of epistemic challenge. 3. **[^](#fnrefnfn4cssro5)**[This](https://www.lesswrong.com/posts/FuToH2KHxKmJLGk2B/ai-alignment-as-navigating-the-space-of-intelligent) post implicitly argues something similar. Visualising the space of AI systems [here](https://www.lesswrong.com/posts/QskBy5uDd2oeEGkBB/risk-map-of-ai-systems#Visualizing_the_Space_of_AI_Systems) is also related. 4. **[^](#fnrefpbg1xg5phta)**Other ways of saying this: some maps are design paradigms/blueprints.  [This](https://www.alignmentforum.org/posts/zAwvyBJJNu4vHWvfk/maps-and-blueprint-the-two-sides-of-the-alignment-equation) post draws a distinction between maps (for understanding reality) and blueprints (for building new parts of reality). The way we’re using ‘maps’ here is broader and contains both of those kinds of map. 5. **[^](#fnreffonjbu46nbk)**C.f. Adam Shimi on [Epistemological Vigilance](https://www.alignmentforum.org/posts/72scWeZRta2ApsKja/epistemological-vigilance-for-alignment). 6. **[^](#fnrefjtfijehweyh)**C.f. Adam Shimi on [pluralism](https://www.alignmentforum.org/posts/wi3upQibefMcFs5to/levels-of-pluralism) and “[no one-size-fits-all epistemic strategy](https://www.alignmentforum.org/posts/du92yeHQn9iE5vorj/no-one-size-fit-all-epistemic-strategy)”.
8fcc010d-19b0-4481-ba01-d9fff2fe3d77
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Conservation of Expected Evidence Title: [SEQ RERUN] Conservation of Expected Evidence Tags: sequence_reruns Today's post, Conservation of Expected Evidence was originally published on 13 August 2007. A summary (taken from the LW wiki): > If you are about to make an observation, then the expected value of your posterior probability must equal your current prior probability. On average, you must expect to be exactly as confident as when you started out. If you are a true Bayesian, you cannot seek evidence to confirm your theory, because you do not expect any evidence to do that. You can only seek evidence to test your theory. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Absence of Evidence is Evidence of Absence, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
e6208c50-f199-4628-9cc0-8f7dc8d68fbb
trentmkelly/LessWrong-43k
LessWrong
(Almost) every moral theory can be represented by a utility function This was demonstrated, in a certain limited way, in Peterson (2009). See also Lowry & Peterson (2011). The Peterson result provides an "asymmetry argument" in favor of consequentialism: > Consequentialists can account for phenomena that are usually thought of in nonconsequentialist terms, such as rights, duties, and virtues, whereas the opposite is false of nonconsequentialist theories. Rights, duty or virtue-based theories cannot account for the fundamental moral importance of consequences. Because of this asymmetry, it seems it would be preferable to become a consequentialist – indeed, it would be virtually impossible not to be a consequentialist. Another argument in favor of consequentialism has to do with the causes of different types of moral judgments: see Are Deontological Moral Judgments Rationalizations? Update: see Carl's criticism.
c3adace1-d099-4a4c-b608-b546417d1554
trentmkelly/LessWrong-43k
LessWrong
"Field Patterns" as a new mathmatical construct.
f169ddf0-be6b-4bd9-9191-f8ca88ddb44a
trentmkelly/LessWrong-43k
LessWrong
Book Review: 'Predicting the Next President: The Keys to the White House' Part one: what is this book and should you read it? Predicting the Next President: The Keys to the White House (henceforth rendered “The Keys”) is an ambitious book. Penned by historian Allan Lichtman, The Keys explains his system for forecasting the outcome of US presidential elections. This system is also called “The Keys”. So, to help readers figure out whether I’m talking about the book or the system, I’m only going to italicize The Keys when I’m explicitly referring to the book. The Keys is a 13-strong checklist of “true-or-false” statements that measure how well the incumbent party has performed during its term in office. If six or more statements are judged to be “false” then the election goes to the challenger. Five or fewer, and the incumbent takes it. According to Lichtman, The Keys reflect a number of axiomatic truths about the democratic system in the US. And although he’s keen to caution against superposing these truths non-reflectively on democratic systems in other countries, he also believes The Keys speak to political processes in liberal democracies more generally.  In his introductory chapter, Lichtman describes a “pragmatic” American electorate that chooses a president “according to the performance of the party holding the White House” and theorises: “If candidates and the media could come to understand that governing, not campaigning, counts in presidential elections, we could have a new kind of presidential politics.” (Lichtman, 2016, pp. 8). This new kind of politics would dump the attack ads and go deep on policy promises, outlining substantial agendas of what candidates actually planned to do with their desired four years in office. I should be clear from the outset that I think Lichtman’s vision is an appealing one. I also think The Keys is well written, and The Keys is well conceived. Assuming you’re in the target audience, you will probably enjoy reading this book, and might come away from it with something useful. What is the target
2856d22b-4268-49c9-84dc-e721cffba581
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
The probability that Artificial General Intelligence will be developed by 2043 is extremely low. Summary ------- The many successes of Deep Learning over the past ten years have catapulted Artificial Intelligence into the public limelight in an unprecedented way. Nevertheless, both critics and advocates have expressed the opinion that deep learning alone is not sufficient for real human-like intelligence, which is filled with capabilities that require symbolic manipulation. The disagreement has now shifted to the way symbol manipulation should be construed. Critics of DL believe that *classical* symbolic architectures (with combinatorial syntax and semantics) should be imposed on top of deep learning systems while advocates believe in a new kind of *emergent* symbolic system which can be learned through gradient descent, like any other function. This is necessary, they argue, because classical systems have already proven to be intractable and therefore cannot be used to construct systems capable of human-like intelligence. In this essay I consider these two options and show that the DL advocates are mistaken, by considering the way in which deep learning systems solve the problem of generating software code to a natural language prompt. Programming languages like Python are *classical symbol systems*, yet deep learning models have become remarkably good at producing syntactically correct code. What this shows is that it is possible to mimic **some aspects** of a classical symbolic system with a neural network, and without an explicit *classical sub-system*. There is no reason to think this conclusion does not apply to other tasks where deep learning models perform well, like natural language tasks. In other words, if you looked at a neural network generating Python code, you might be tempted to conclude that Python is not a classical symbolic system. You would of course be wrong. Similalrly if you looked at a neural network generating natural language, you might be tempted to conclude that natural language is not a classical symbolic system. You might also be wrong: there is no compelling reason offered by DL networks that shows that something like a *non-classical* symbolic reasoning system is required for human cognition including language. **DL systems can learn statistical mappings where a classical symbolic system produces lots of examples, like language or Python. When the symbol system is used for planning, creativity, etc., this is where DL struggles to learn.** This leaves us to conclude that (a) deep learning alone won't lead to AGI, (b) nor will deep learning supplemented with non-classical symbolic systems, (c) nor, apparently will deep learning supplemented with classical symbolic systems. So, no AGI by 2043. Certainly not AI that includes "entirely AI-run companies, with AI managers and AI workers and everything being done by AIs."  I suggest that instead of bringing us AGI, modern deep learning has instead revitalized Licklider's vision of "Man-Computer Symbiosis", which is the most exciting prospect for the immediate future of AI. The AI Promise -------------- Artificial Intelligence has certainly had its ups and downs, from the heady days of "good old-fashioned AI" through the winter downturns, and now back on track to supposedly reach human level cognitive abilities in the not-too-distant future. 
The Future Fund has [estimated](https://forum.effectivealtruism.org/posts/W7C5hwq7sjdpTdrQF/announcing-the-future-fund-s-ai-worldview-prize#YEqEe7ZNsGGTejFBQ) that there is a 20% chance that Artificial General Intelligence (AGI) will be developed by January 1, 2043, and a 60% chance by January 1, 2100 (in their "Future Fund worldview prize"). I am going to argue that the optimism is unwarranted, and both of these estimates are wildly inaccurate. In fact, the probabilities should be closer to zero. Why am I so certain? In order to understand my conviction, we have to consider the source of the overwhelming confidence that AI researchers currently possess. The primary reason can relatively straightforwardly be attributed to the sudden breakthroughs achieved by the *neural network* or *connectionist* paradigm, which has an intuitive advantage over other paradigms because of its apparent similarity to real neural networks in the brain. This alternative to "good old-fashioned" logic-based symbol-manipulating AI systems has existed since the 1940s, but has had a patchy history of success, with a cycle of "hitting brick walls" and then overcoming [them](https://www.noemamag.com/what-ai-can-tell-us-about-intelligence/). That is, until around 2012, when a combination of widely available compute power, massive public datasets, and advancements in architectures enabled Deep Learning (DL) models to suddenly leapfrog existing state-of-the-art systems. The age of symbolic AI was quickly declared dead by some, and a view emerged that the key breakthroughs were essentially complete, and we just have to "scale up" now. For example, Tesla CEO Elon Musk and Nvidia CEO Jen-Hsun Huang declared in 2015 that the problem of building fully autonomous vehicles was essentially [solved](https://fortune.com/2015/03/18/tesla-elon-musk-and-nvidia-ceoself-driving-cars/). Kinds of Symbol Systems ----------------------- The key difference between a classical symbol-manipulating system and a distributed neural "connectionist" system was clearly laid out by [Fodor and Pylyshyn](http://ruccs.rutgers.edu/images/personal-zenon-pylyshyn/proseminars/Proseminar13/ConnectionistArchitecture.pdf) in 1988. They argued that the issue was not whether or not the systems were *representational*, or whether distributed systems can represent discrete concepts, but how the rules of the system are defined. That is, in what way do representations take part in the causal processes that transform them into other representations. Consider a simple example from propositional logic. The rule **A & B -> A** is a tautology because there is no assignment of truth values to **A** and **B** which will make it false. The implication can only be false if the antecedent on the left-hand side is true but the consequent on the right-hand side is false. But this is not possible, because if **A** on the right-hand side is false, it must also be false on the left-hand side, causing the conjunction to also be false. Importantly, this reasoning is only possible because the **A** on the right-hand side **is the same symbol** as the **A** on the left-hand side. Classical systems have *combinatorial syntax and semantics*. 
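To make the tautology concrete, here is a minimal illustrative sketch (in Python; the function and variable names are mine, not from the essay) that simply enumerates every assignment of truth values and confirms that **A & B -> A** can never come out false, precisely because the same variable stands for **A** on both sides:

```python
from itertools import product

def implies(p, q):
    # Material implication: false only when p is true and q is false.
    return (not p) or q

# Enumerate every assignment of truth values to A and B.
rows = [(a, b, implies(a and b, a)) for a, b in product([True, False], repeat=2)]
for a, b, value in rows:
    print(f"A={a!s:5} B={b!s:5}  (A & B) -> A = {value}")

# Tautology: true under all four assignments, because the antecedent
# can never be true while the (identical) consequent A is false.
assert all(value for _, _, value in rows)
```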
I tried to see if GPT-3 has any conception of such combinatorial semantics with an example: "Is it true that if the moon is made of green cheese, and cows chew cud, then the moon is made of green cheese?", and GPT-3 answers "No, it is not true that if the moon is made of green cheese, and cows chew cud, then the moon is made of green cheese.", which is not correct. GPT-3 appears to focus on whether or not the consequent itself is true or false. Perhaps the most vocal contemporary advocate for classical symbolic systems is Gary Marcus, who argues that, in order to make further progress, AI needs to combine symbolic and DL solutions into [*hybrid systems*](https://nautil.us/deep-learning-is-hitting-a-wall-238440/). As a rebuttal to Marcus, [Jacob Browning and Yann LeCun argue](https://www.noemamag.com/what-ai-can-tell-us-about-intelligence/) that there is no need for such hybrids because symbolic representations can "emerge" from neural networks. They argue that "the neural network approach has traditionally held that we don't need to hand-craft symbolic reasoning but can instead learn it: Training a machine on examples of symbols engaging in the right kinds of reasoning will allow it to be learned as a matter of abstract pattern completion. In short, the machine can learn to manipulate symbols in the world, despite not having hand-crafted symbols and symbolic manipulation rules built in." That is, symbols can be manipulated in the absence of specific rules in the classical sense. However, after making a strong case for this alternative kind of symbol manipulation, they then argue that it is not central to human cognition after all: "... most of our complex cognitive capacities do not turn on symbolic manipulation; they make do, instead, with simulating various scenarios and predicting the best outcomes." They further clarify that, to the extent that symbol manipulation is important at all, it is primarily a "cultural invention" and "regards symbols as inventions we used to coordinate joint activities — things like words, but also maps, iconic depictions, rituals and even social roles." "The goal, for DL, isn't symbol manipulation inside the machine, but the right kind of symbol-using behaviors emerging from the system in the world." Browning and LeCun argue that the critical insight from DL is to outlaw classical symbol manipulation as a genuine, generative process in the human mind, and that therefore hybrid systems have no place in a cognitive agent. Language Models and Computer Code --------------------------------- While Browning and LeCun's argument may have a certain (though vague) appeal, it proves to be extraordinarily problematic in explaining the ever-increasing success that large neural *language models* are showing with generating computer code. While the language models were originally conceived for modeling natural language, it was discovered that if the training included some computer code from sources such as GitHub, then they could generate computer code from natural language specifications, sometimes at a [level higher than humans](https://www.deepmind.com/blog/competitive-programming-with-alphacode). Language Models (LMs) in fact developed independently of neural models and are simply [joint probability distributions](https://nlp.stanford.edu/IR-book/) over sequences of words. 
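To illustrate what "a joint probability distribution over sequences of words" means in the simplest possible terms, here is a toy sketch of my own (the corpus and function names are placeholders, and nothing here resembles the scale of a modern neural LM): a bigram model that estimates next-word probabilities from counts and scores a sentence with the chain rule.

```python
from collections import Counter, defaultdict

# Toy corpus; a real LM is trained on billions of tokens.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigrams to estimate P(next word | previous word).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_prob(prev, nxt):
    counts = bigrams[prev]
    return counts[nxt] / sum(counts.values()) if counts else 0.0

def sequence_prob(words):
    # Chain rule under a one-word-of-context assumption
    # (ignoring the probability of the first word for simplicity).
    p = 1.0
    for prev, nxt in zip(words, words[1:]):
        p *= next_word_prob(prev, nxt)
    return p

print(next_word_prob("sat", "on"))                       # 1.0 in this toy corpus
print(sequence_prob("the cat sat on the mat".split()))   # 0.0625
```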
Large Neural LMs are a subsequent advancement in that they learn probability distributions for sequences of real-valued, continuous vector representations of words rather than discrete lexical items. The probability distribution is learned through a form of *language modeling*, where the task is to "predict the next word given the previous words" in word strings drawn from a corpus. Essentially, LMs learn complex statistical properties of language and can perform at exceptional levels on a large number of tasks involving language, including translation, inference, and even storytelling. The models are very large indeed, with billions or more parameters. For example, Nvidia is proposing Megatron, a parallel architecture that can scale to 1 [trillion parameters](https://developer.nvidia.com/blog/scaling-language-model-training-to-a-trillion-parameters-using-megatron/). It turns out that LMs are competent learners of complex statistical distributions outside natural language. As previously mentioned, LMs that have received training on software code have become competent at generating syntactically well-formed, functional code for relatively advanced programming problems. While there is a long way to go before they can write entire program implementations, it is clear that they already excel at generating syntactically well-formed productions. They almost never write code that contains syntax errors. But this is a real problem for the claim that neural architectures present an alternative model of symbol manipulation, because well-formed software code is defined by classically understood symbolic, rule-based grammars. We must really try to understand how it is that a distributed neural network with an alternative method of manipulating symbols can perform so well on a straightforwardly classical symbolic task. In fact, high-level programming languages for digital computers and theories of natural language have a curious historical connection. John W. Backus, who led the Applied Science Division of IBM's Programming Research Group, [took inspiration](https://betanews.com/2007/03/20/john-w-backus-1924-2007/) from Noam Chomsky's work on phrase structure grammars and conceived a *meta-language* that could specify the syntax of computer languages that were easier for programmers to write than assembler languages. The meta-language later became known as Backus-Naur form (BNF), so called partly because it was originally co-developed by Peter Naur in a 1963 IBM [report](https://www.masswerk.at/algol60/report.htm) on the ALGOL 60 programming language. BNF is a notation for context-free grammars consisting of *productions* over *terminal* and *nonterminal* symbols, which defines the grammar of programming languages required for writing [compilers and interpreters](https://www.goodreads.com/book/show/703102.Compilers). BNF grammars can be invaluable for computer programmers. When a programmer is uncertain about the form of a programming construct, they can consult documentation which specifies the allowable syntax of expressions. The most complete reference is the syntax specification, typically written in some form of BNF. 
For example, the syntax for the "if" statement in Python can be found in the [reference guide](https://docs.python.org/3/reference/expressions.html#grammar-token-python-grammar-assignment_expression) as shown below:

```
if_stmt ::=  "if" assignment_expression ":" suite
             ("elif" assignment_expression ":" suite)*
             ["else" ":" suite]
assignment_expression ::=  [identifier ":="] expression
expression             ::=  conditional_expression | lambda_expr
conditional_expression ::=  or_test ["if" or_test "else" expression]
or_test  ::=  and_test | or_test "or" and_test
and_test ::=  not_test | and_test "and" not_test
not_test ::=  comparison | "not" not_test
comparison    ::=  or_expr (comp_operator or_expr)*
comp_operator ::=  "<" | ">" | "==" | ">=" | "<=" | "!="
                  | "is" ["not"] | ["not"] "in"
identifier   ::=  xid_start xid_continue*
id_start     ::=  <all characters in general categories Lu, Ll, Lt, Lm, Lo, Nl, the underscore, and characters with the Other_ID_Start property>
id_continue  ::=  <all characters in id_start, plus characters in the categories Mn, Mc, Nd, Pc and others with the Other_ID_Continue property>
xid_start    ::=  <all characters in id_start whose NFKC normalization is in "id_start xid_continue*">
xid_continue ::=  <all characters in id_continue whose NFKC normalization is in "id_continue*">
suite         ::=  stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT
statement     ::=  stmt_list NEWLINE | compound_stmt
stmt_list     ::=  simple_stmt (";" simple_stmt)* [";"]
simple_stmt ::=  expression_stmt
                | assert_stmt
                | assignment_stmt
                | augmented_assignment_stmt
                | annotated_assignment_stmt
                | pass_stmt
                | del_stmt
                | return_stmt
                | yield_stmt
                | raise_stmt
                | break_stmt
                | continue_stmt
                | import_stmt
                | future_stmt
                | global_stmt
                | nonlocal_stmt
```

The Unicode category codes mentioned above stand for: Lu - uppercase letters, Ll - lowercase letters, Lt - titlecase letters, Lm - modifier letters, Lo - other letters, Nl - letter numbers, Mn - nonspacing marks, Mc - spacing combining marks, Nd - decimal numbers, Pc - connector punctuation, Other_ID_Start - explicit list of characters in PropList.txt to support backwards compatibility, Other_ID_Continue - likewise.

It is, however, not normally necessary to consult the reference, as it is generally sufficient to simply provide an abstract template for legal [expressions](https://pythonexamples.org/python-if-else-example/):

```
if boolean_expression:
    statement(s)
else:
    statement(s)
```

or even a typical example as in the Python [tutorial](https://docs.python.org/3/tutorial/controlflow.html#if-statements):

```
>>> if x < 0:
...     x = 0
...     print('Negative changed to zero')
... elif x == 0:
...     print('Zero')
... elif x == 1:
...     print('Single')
... else:
...     print('More')
```

However, there are cases where the less formal documentation is not sufficient. For example, notice the definition for the "if_stmt" in the BNF, which includes an "assignment_expression" following the "if". In turn, "assignment_expression" is defined with the "[identifier ":="] expression".
The "expresion" part can be idenitified in the less formal documentation, corresponding to the "boolean\_expression" and "x < 0" in the other two definitions. However, the optional "[identifier ":="]" does not appear in these other definitions. The construct is in fact the "assignment expression" introduced in PEP 572, dated 28-Feb-2018 for Python 3.8. The assignment expression can be used to simplify code in some cases. For example by assigning the value of len(a) to the variable n, len(a) only needs to be calculated once in the following code fragment. if (n := len(a)) > 10:    print(f"List is too long ({n} elements, expected <= 10)") "If" statements of this form will not be found in code written prior to February 2018 and any LM trained on a corpus containing a majority of code written before that date will not be able to generate such statements and will have no information that the statement is legal. A human programmer, on the other hand, can simply consult the BNF and see that it is a legal production. Perhaps a powerful LM could learn about these statements after a few exposures through *few shot* learning, but this has its own difficulties. For example, consider if the new code included code from students who may have made a (very legitimate) mistake in using the wrong assignment operator, as in the modified code below: if (n = len(a)) > 10:    print(f"List is too long ({n} elements, expected <= 10)") Consulting the BNF instantly shows that this is not a well-formed statement, but a machine learning model that does not have access to the BNF cannot make this determination. **The power to generalize with few or even no examples, while constraining the generalization to only legal productions is the power of classical symbolic systems that non symbolic systems cannot replicate.** This is the power that the human mind appears to possess in abundance. Eliminative connectionism eliminates connectionism -------------------------------------------------- We must then consider how a LM can generate code which conforms to the syntax of a phrase structure language. One possibility is that the LM learns representations and operations that are isomorphic to the symbols and rules of the programming language and uses these representations to generate well-formed lines of code. This possibility is described by Pinker and Prince as *implementational connectionism,* since in this case the network acts as a physical implementation of the algorithm, as described by Marr. This possibility is stronger than Browning and LeCun's claim that the network learns a non-classical type of symbol manipulation. More importantly this strong version of implementational connectionism is of little concern for classical theorists because it does not change any knowledge we already had. Simply put, if we have already defined a language like Python completely through the BNF, **a neural network cannot reveal anything new about the language if all it does is to implement the rules in its neural hardware.** A second option is that the LMs have learned unpredictable, complex nonlinear mappings and latent variables which can generate well-formed code. Pinker and Prince call this possibility *eliminative connectionism*, which poses a more serious challenge for classical theories because they eliminate the need for rules and symbol manipulation. 
In *eliminative (neural) systems* it is impossible to find a principled mapping between the components of the distributed (vector) processing model and the steps involved in a symbol-processing theory. It is clear that the current crop of deep learning models are advertised as eliminative systems. Browning and LeCun's neural symbol manipulations are specifically claimed to eliminate traditional rules of symbolic logic, and their rules of symbol manipulation are specifically not meant to be isomorphic to traditional rules. Further evidence of an eliminative intent comes from Bengio, LeCun and Hinton, who argued in their [Turing lecture](https://cacm.acm.org/magazines/2021/7/253464-deep-learning-for-ai/fulltext) that continuous representations in Deep Learning models fundamentally differentiate neural LMs from traditional symbolic systems such as grammar, because they enable computations based on non-linear transformations between the representing vectors themselves. While the computational core of neural LMs is vector-based, they can perform tasks involving symbols because they use a symbolic representation in the input and output layers. [Kautz](https://onlinelibrary.wiley.com/doi/10.1002/aaai.12036) enumerates different classes of *Neuro-Symbolic* hybrid systems which combine neural and symbolic approaches in different ways. He identifies the default paradigm in neural language models as the *Symbolic Neuro symbolic* (SNS) architecture, where sequences of words (symbolic) are converted to vectors which are passed to a neural network (neuro) whose output is computed by a softmax operation on the final layer of the network (symbolic). In the case of models trained for code generation, both natural language and code tokens are converted to and from vector representations and processed by the network. SNS systems can accurately generate well-formed productions of a rule-governed symbolic system without access to the rules themselves, because they are extremely competent at generalizing the properties of observed productions. The problem is that we know for certain that this is not the right model for Python, because the right model for Python is a classical symbolic system with a generative phrase structure grammar. **If the LM can mimic a classical symbolic Python, then why should we believe that it isn't mimicking a classical symbolic natural language?** We have now arrived at the real problem with neural networks as models of real or artificial intelligence. Judging by their performance on coding problems, we could claim that they have essentially solved the problem and all they need is more scale or a "world model" to improve even further. But we would be wrong. What they really need is a symbol manipulation system. We are in a privileged position when it comes to knowing how Python works because "we" wrote it. The same is of course not the case for cognitive abilities such as language. Chomsky hypothesized that it involves a phrase structure grammar of some sort, but this is challenged by the impressive success of neural models. The problem is that there is no independent way to know if the challenges are valid, or if instead the non-linear symbol transformations inherent in the SNS paradigm are sufficient to yield the results without "really" doing language, the same way they aren't "really" doing Python. 
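(As a minimal sketch of the Symbolic Neuro Symbolic interface described above: the example below is my own illustration rather than anything taken from Kautz or from a real model. The vocabulary, dimensions and the trivially simple, untrained "neuro" core are all placeholder assumptions; the point is only the shape of the pipeline: symbols in, vector arithmetic in the middle, a softmax mapping back to a symbol at the end.)

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["if", "(", "n", ":=", "len", "a", ")", ">", "10", ":"]
idx = {tok: i for i, tok in enumerate(vocab)}
dim = 8

E = rng.normal(size=(len(vocab), dim))   # symbolic -> vector (embedding lookup)
W = rng.normal(size=(dim, len(vocab)))   # vector -> vocabulary logits

def softmax(z):
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

def predict_next(tokens):
    # "Neuro" core: here just an average of embeddings and one linear map;
    # a real LM stacks many trained non-linear layers, but the outer
    # symbolic-in / symbolic-out interface is the same.
    h = E[[idx[t] for t in tokens]].mean(axis=0)
    probs = softmax(h @ W)
    return vocab[int(probs.argmax())]     # softmax output mapped back to a symbol

print(predict_next(["if", "(", "n"]))     # untrained, so the choice is arbitrary
```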
The most likely hypothesis is that the brain contains a number of classical symbolic systems which generate highly structured productions and correlated features, which can in turn be exploited by statistical methods to construct models that can reconstruct similar productions in novel circumstances. There is no doubt humans make use of this kind of statistical mechanism: programmers don't all consult the BNF; many of them simply look at a few example statements and copy the form. But we have seen that humans can also use classical symbolic structures in their reasoning when they use the BNF (or equivalent defining syntax). Some of these classical structures might operate at a level not accessible to conscious introspection, for example generative grammar. Neural language models have proven themselves unable to illuminate the formal structure of Python, and it is not reasonable to claim that they are any more able to enlighten us about English. **DL models can learn the mappings where the symbolic system produces lots of examples, like language. When the symbol system is used for planning, creativity, etc., this is where DL struggles to learn.** Conclusion ---------- Deep Learning is hardly the foundation for a general artificial intelligence. It is, however, a powerful foundation for systems that can exploit the productions of complex rule-based systems in order to solve problems in the domain of that system. Combined with symbolic reasoning, deep learning has the potential for very powerful systems indeed. In particular, the technology has provided a new set of tools to realize [Licklider's vision from 1960](http://guzdial.cc.gatech.edu/hci-seminar/uploads/1/Man-Computer%20Symbiosis.pdf); Licklider similarly grappled with the future of AI. In 1960 the Air Force estimated that "... it would be 1980 before developments in artificial intelligence make it possible for machines alone to do much thinking or problem solving of military significance". That is, 20 years to something like AGI. He suggested that those 20 years would be well spent developing "man-machine" symbiosis, which was an approach to programming computers in a way that maximally augments human reasoning rather than replaces it. He estimated 5 years to develop the systems and 15 years to use them. Mockingly, he added "... the 15 may be 10 or 500, but those years should be intellectually the most creative and exciting in the history of mankind." I think we are no closer now than Licklider was to predicting whether AGI is 10 or 500 years away. But I [do think](https://www.researchgate.net/publication/326869586_Strong_Cognitive_Symbiosis_Cognitive_Computing_for_Humans) that we are much closer to achieving symbiotic systems. DL networks can compress the world's knowledge into a manageable collection of vectors, but without symbolic systems they don't know what to do with them. Humans have the creativity and insight to interact with the vectors to unleash breathtaking discoveries.
771b1281-4a95-4b65-8a0c-435be39e159f
trentmkelly/LessWrong-43k
LessWrong
Otherness and control in the age of AGI (Cross-posted from my website. PDF of the full series here. Audio version of the full series here; or search for "Joe Carlsmith Audio" on your podcast app.) > “With malice towards none; with charity towards all; with firmness in the right, as God gives us to see the right…”  > > - Abraham Lincoln Lincoln’s Second Inaugural (image source here) I’ve written a series of essays that I’m calling “Otherness and control in the age of AGI.” The series examines a set of interconnected questions about how agents with different values should relate to one another, and about the ethics of seeking and sharing power. They’re old questions – but I think that we will have to grapple with them in new ways as increasingly powerful AI systems come online. And I think they’re core to some parts of the discourse about existential risk from misaligned AI (hereafter, “AI risk”).[1] The series covers a lot of ground, but I’m hoping the individual essays can be read fairly well on their own. Here’s a brief summary of the essays that have been released thus far (I’ll update it as I release more): * The first essay, “Gentleness and the artificial Other,” discusses the possibility of “gentleness” towards various non-human Others – for example, animals, aliens, and AI systems. And it also highlights the possibility of “getting eaten,” in the way that Timothy Treadwell gets eaten by a bear in Werner Herzog’s Grizzly Man: that is, eaten in the midst of an attempt at gentleness. * The second essay, “Deep atheism and AI risk,” discusses what I call “deep atheism” – a fundamental mistrust both towards Nature, and towards “bare intelligence.” I take Eliezer Yudkowsky as a paradigmatic deep atheist, and I highlight the connection between his deep atheism and his concern about misaligned AI. I also connect deep atheism to the duality of “yang” (active, controlling) vs “yin” (receptive, letting-go). A lot of my concern, in the series, is about ways in which certain strands of the AI risk discour
4d856462-b867-4702-8bc0-fd76eeb7e322
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Applications open for AGI Safety Fundamentals: Alignment Course The [AGI Safety Fundamentals (AGISF)](https://www.agisafetyfundamentals.com/?utm_source=EA+Forum&utm_medium=launch+post&utm_campaign=AGISF): Alignment Course is designed to introduce the key ideas in AGI safety and alignment, and provide a space and support for participants to engage, evaluate and debate these arguments. Participants will meet others who are excited to help mitigate risks from future AI systems, and explore opportunities for their next steps in the field. The course is being run by the same team as for previous rounds, now under a new project called [BlueDot Impact](https://forum.effectivealtruism.org/posts/3EWpLid8tkyYJakfm/announcing-bluedot-impact). [**Apply here**](https://airtable.com/shrXRbqs1gZkOfx7c), by 5th January 2023. ### Time commitment The course will run from February-April 2023. It comprises 8 weeks of reading and virtual small-group discussions, followed by a 4-week capstone project. The time commitment is around 4 hours per week, so participants can engage with the course alongside full-time work or study. ### Course structure Participants are provided with structured content to work through, alongside weekly, facilitated discussion groups. Participants will be grouped depending on their ML experience and background knowledge about AI safety. In these sessions, participants will engage in activities and discussions with other participants, guided by the facilitator. The facilitator will be knowledgeable about AI safety, and can help to answer participants’ questions. The course is followed by a capstone project, which is an opportunity for participants to synthesise their views on the field and start thinking through how to put these ideas into practice, or start getting relevant skills and experience that will help them with the next step in their career. The course content is designed by Richard Ngo (Governance team at OpenAI, previously a research engineer on the AGI safety team at DeepMind). You can read the curriculum content [here](https://www.agisafetyfundamentals.com/ai-alignment-curriculum). ### Target audience We are most excited about applicants who would be in strong position to pursue technical alignment research in their career, such as professional software engineers and students studying technical subjects (e.g. CS/maths/physics/engineering). That said, we consider all applicants and expect 25-50% of the course to consist of people with a variety of other backgrounds, so we encourage you to apply regardless. This includes community builders who would benefit from a deeper understanding of the concepts in AI alignment. We will be running another course on AI Governance in early 2023 and expect a different distribution of target participants. ### Apply now! If you would like to be considered for the next round of the courses, starting in February 2023, **please** [**apply here**](https://airtable.com/shrXRbqs1gZkOfx7c) **by Thursday 5th January 2023**. More details can be found [here](https://www.agisafetyfundamentals.com/alignment-course-details). We will be evaluating applications on a rolling basis and we aim to let you know the outcome of your application by mid-January 2023. If you already have experience working on AI alignment and would be keen to join our community of facilitators, please [apply to facilitate](https://www.agisafetyfundamentals.com/alignment-facilitation-details). ### Who is running the course? 
AGISF is now being run by [BlueDot Impact](https://bluedotimpact.org/?utm_source=EA+Forum&utm_medium=AGISF+launch+post&utm_campaign=AGISF) - a new non-profit project running courses that support participants to develop the knowledge, community and network needed to pursue high-impact careers. BlueDot Impact spun out of Cambridge Effective Altruism, and was founded by the team who was primarily responsible for running previous rounds of AGISF. You can read more in our announcement post [here](https://forum.effectivealtruism.org/posts/3EWpLid8tkyYJakfm/announcing-bluedot-impact). We’re really excited about the amount of interest in the courses and think they have great potential to build awesome communities around key issues. As such we have spent the last few months: * Working with pedagogy experts to make discussion sessions more engaging * Formalising our course design process with greater transparency for participants and facilitators * Building systems to improve participant networking to create high-value connections * Collating downstream opportunities for participants to pursue after the courses * Forming a team that can continue to build, run and improve these courses over the long-term Applications for our other courses, including the AGISF: Governance Course, will open in early 2023!
a03bd6e2-ebee-4936-8a07-74591bbf0a70
StampyAI/alignment-research-dataset/lesswrong
LessWrong
[Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration *This post covers the set-up and results from our exploration in amplifying generalist research using predictions, in detail. It is accompanied by [a second post](https://www.lesswrong.com/posts/cLtdcxu9E4noRSons) with a high-level description of the results, and more detailed models of impact and challenges. For an introduction to the project, see that post.* \_\_\_ The rest of this post is structured as follows. First, we cover the basic set-up of the exploration. Second, we share some results, in particular focusing on the accuracy and cost-effectiveness of this method of doing research. Third, we briefly go through some perspectives on what we were trying to accomplish and why that might be impactful, as well as challenges with this approach. These are covered more in-depth in [a separate post](https://www.lesswrong.com/posts/cLtdcxu9E4noRSons/amplifying-generalist-research-models-of-impact-and). *Overall,* we *are very interested in feedback and comments on where to take this next.* Set-up of the experiment ======================== A note on the experimental design --------------------------------- To begin with, we note that this was not an “experiment” in the sense of designing a rigorous methodology with explicit controls to test a particular, well-defined hypothesis. Rather, this might be seen as an ”exploration” [3]. We tested several different ideas at once, instead of running a unique experiment for each separately. We also intended to uncover new ideas and inspiration as much as testing existing ones. Moreover, we proceeded in a startup-like fashion where several decisions were made ad-hoc. For example, a comparison group was introduced after the first experiment had been completed; this was not originally planned, but later became evidently useful. This came at the cost of worsening the rigor of the experiment. We think this trade-off was worth it for our situation. This kind of policy allows us to execute a large number of experiments in a shorter amount of time, quickly pivot away from bad ones, and notice low-hanging mistakes and learning points before scaling up good ones. This is especially helpful as we’re [shooting for tail-end outcomes](https://www.openphilanthropy.org/blog/hits-based-giving), and are looking for concrete mechanisms to implement in practice (rather than publishing particular results). We do not see it as a substitute for more rigorous studies, but rather as a complement, which might serve as inspiration for such studies in the future. To prevent this from biasing the data, all results from the experiment are public, and we try to note when decisions were made post-hoc. Mechanism design ---------------- The basic set-up of the project is shown in the following diagram, and described below. A two-sentence version would be: > Forecasters predicted the conclusions that would be reached by Elizabeth van Norstrand, a generalist researcher, before she conducted a study on the accuracy of various historical claims. We randomly sampled a subset of research claims for her to evaluate, and since we can set that probability arbitrarily low this method is not bottlenecked by her time. ![](https://i.imgur.com/Af15tQF.png)**1. Evaluator extracts claims from the book and submits priors** The evaluator for the experiment was Elizabeth Van Norstrand, an independent generalist researcher known for her “[Epistemic spot checks](https://www.lesswrong.com/users/pktechgirl)”. 
This is a series of posts assessing the trustworthiness of a book by evaluating some of it claims. We chose Elizabeth for the experiment as she has a reputation for reliable generalist research, and there was a significant amount of public data about her past evaluations of claims. She picked 10 claims from the book *The Unbound Prometheus: Technological Change and Industrial Development in Western Europe from 1750 to the Present*, as well as a meta-claim about the reliability of the book as a whole. All claims were assigned an importance rating from 1-10 based on their relevance to the thesis of the book as a whole. We were interested in finding if this would influence forecaster effort between questions. Elizabeth also spent 3 minutes per claim submitting an initial estimate (referred to as a “prior”). ![](https://i.imgur.com/vCZp8Yr.png)Beliefs were typically encoded as distributions over the range 0% to 100%, representing where Elizabeth expected the mean of her posterior credence in the claim to be after 10 more hours of research*.* For more explanation, see this footnote [4]. **2. Forecasters make predictions** Forecasters predicted what they expected Elizabeth to say after ~45 minutes of research on the claim, and wrote comments explaining their reasoning. Forecasters’ payments for the experiment were proportional to how much their forecasts outperformed the aggregate in estimating her 45-minute distributions. In addition, forecasters were paid a base sum just for participating. You can see all forecasts and comments [here](https://www.foretold.io/c/f19015f5-55d8-4fd6-8621-df79ac072e15?state=closed), and an interactive tool for visualising and understanding the scoring scheme [here](https://observablehq.com/@jjj/amplification-experiment-scoring). A key part of the design was that that forecasters *did* *not know* which question Elizabeth would randomly sample to evaluate. Hence they were incentivised to do their best on *all* questions (weighted by importance). This has the important implication that we could easily extend the amount of questions predicted by forecasters -- even if Elizabeth can only judge 10 claims, we could have forecasting questions for 100 different claims [5]. Two groups of forecasters participated in the experiment: one based on a mailing list with participants interested in participating in forecasting experiments (recruited from effective altruism-adjacent events and other forecasting platforms) [6], and one recruited from Positly, an online platform for crowdworkers. The former group is here called “Network-adjacent forecasters” and the latter “Online crowdworkers”. ![](https://i.imgur.com/r5dZfM7.png) **3. The evaluator judges the claims** Elizabeth was given a time-budget of 6 hours, within which she randomly sampled claims to research and judge. At this point, we wanted to use the work done by forecasters to help Elizabeth, while avoiding anchoring and biasing her with their estimates. To solve this, Elizabeth was initially given [a filtered version of the comments section](https://docs.google.com/document/d/1H_AYIA1Sro_cHePZvcV0joXA4qgrVkw8xp916UxQYSQ/edit?usp=sharing) for each claim, which contained all sources and models used, but which had been stripped of any explicit predictions or subjective opinion generalising from the data. For example, for the question: > *Pre-Industrial Britain had a legal climate more favorable to industrialization than continental Europe [5].* One commenter wrote: > Seems more likely to be true than not. 
The English Civil War and Glorious Revolution both significantly curtailed the arbitrary power of the monarch/gentry and raised the power of merchants in Britain, making likely that government was more favourable to mercantile interests. Hard to judge the claim about haggling. And in Elizabeth’s initial briefing this was replaced by: > The English Civil War and Glorious Revolution both significantly curtailed the arbitrary power of the monarch/gentry and raised the power of merchants in Britain [...]. After arriving at a final estimate, Elizabeth was allowed to look at the full forecaster comments and predictions and optionally change her mind. In practice, she didn't change her mind in any of these cases. To summarise, the parts involved were: * We summarised all of the relevant comments into a list and removed any descriptions that referred to people’s probabilities. * We randomly chose 8 of the 10 claims for review by Elizabeth. * Elizabeth saw this list, and spent 6 hours evaluating the 8 claims and resolving them with probability distributions of the values she expected to have for them in a future possible evaluation round. She divided this time in accordance to what seemed the most useful; for instance, questions with answers that became obvious quickly got a relatively small proportion of this time. * Elizabeth got access to all predictions and all comments and was was allowed to change her resolutions. She decided not to in all of the cases. **4. The evaluator double-checks the evaluations** After having spent 6 hours researching 8 claims, Elizabeth randomly sampled two of those, each of which she spent an additional 3 hours researching. For the remaining claims, she waited until a week after the experiment, and then reread her notes and submitted new resolutions, to see if her way of converting beliefs into numbers was consistent over time. This part was intended to test the consistency and reliability of Elizabeth’s evaluations. The outcome of this was that Elizabeth appeared highly consistent and reliable. You can see the data and graphs [here](https://observablehq.com/@jjj/untitled/2). Elizabeth’s full notes explaining her reasoning in the evaluations can be found [here](https://acesounderglass.com/2019/12/03/epistemic-spot-check-the-unbound-prometheus/). Results and analysis ==================== *You can find all the data and interactive tools for exploring it yourself, [here](https://observablehq.com/@jjj/untitled/2).* Online crowdworkers ------------------- We were interested in comparing the performance of our pool of forecasters to “generic” participants with no prior interest or experience forecasting. Hence, after the conclusion of the original experiment, we reran a slightly modified form of the experiment with a group of forecasters recruited through an online platform that sources high quality crowdworkers (who perform microtasks like filling out surveys or labeling images for machine learning models). However, it should be mentioned that these forecasters were operating under a number of disadvantages relative to other participants, which means we should be careful when interpreting their performance. In particular: * They did not know that Elizabeth was the researcher who created the claims and would resolve them, and so they had less information to model the person whose judgments would ultimately decide the questions. 
* They did not use any [multimodal](https://observablehq.com/@oagr/foretold-inputs) or custom distributions, which is a way to increase tail-uncertainty and avoid large losses when forecasting with distributions. We expect this was because of the time-constraints set by their payment, as well as the general difficulty. Overall the experiment with these online crowdworkers produced poor accuracy results at predicting Elizabeth’s resolutions (as is discussed further below). Accuracy of predictions ----------------------- This section analyses how well forecasters performed, collectively, in amplifying Elizabeth's research. The aggregate prediction was computed as the average of all forecasters' final predictions. Accuracy was measured using [a version of the logarithmic scoring rule](https://observablehq.com/@jjj/foretold-scoring). The following graph shows how the aggregate performed on each question: ![](https://i.imgur.com/oMqTpfW.png)The opaque bars represent the scores from the crowdworkers, and the translucent bars, which have higher scores throughout, represent the scores from the network-adjacent forecasters. It's interesting that the order is preserved, that is, that the question difficulty was the same for both groups. Finally we don’t see any correlation between question difficulty and the importance weights Elizabeth assigned to the questions. However, the comparison is confounded by the fact that more effort was spent from the network-adjacent forecasters. The above graph also doesn’t compare performance to Elizabeth’s priors. Hence we also plot the evolution of the aggregate score over prediction number and time (the first data-point in the below graphs represent Elizabeth’s priors): ![](https://i.imgur.com/FdT0KWl.png) ![](https://i.imgur.com/Iwxhy0n.png) ![](https://i.imgur.com/iLqYZUE.png)For the last graph, the y-axis shows the score on a logarithmic scale, and the x-axis shows how far along the experiment is. For example, 14 out of 28 days would correspond to 50%. The thick lines show the average score of the aggregate prediction, across all questions, at each time-point. The shaded areas show the standard error of the scores, so that the graph might be interpreted as a guess of how the two communities would predict a random new question [10]. One of our key takeaways from the experiment is that simple average aggregation algorithm performed surprisingly well, but only for the network-adjacent forecasters. One way to see this qualitatively is by observing the graphs below, where we display Elizabeth’s priors, the final aggregate of the network-adjacent forecasters, and the final resolution, for a subset of questions [11]. **Question examples** The x-axis [12] refers to the Elizabeth’s best estimate of the accuracy of a claim, from 0% to 100% (see section “Mechanism design, 1. Evaluator extracts claims” for more detail). ![](https://i.imgur.com/dAmU1XU.png) ![](https://i.imgur.com/cT5Ahsx.png)Another way to understand the performance of the aggregate is to note that the aggregate of network-adjacent forecasters had an average log score of -0.5. To get a rough sense of what that means, it's the score you'd get by being 70% confident in a binary event, and being correct (though note that this binary comparison merely serves to provide intuition, there are technical details making the comparison to a distributional setting a bit tricky). By comparison, the crowdworkers and Elizabeth’s priors had a very poor log score of around -4. 
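To make the log scores above easier to interpret, here is a small sketch (my own illustration; it assumes the base-2 logarithmic scoring convention, which matches the rough equivalences quoted in this post, and uses a single binary-style prediction for simplicity rather than a full distribution):

```python
import math

def log_score(prob_assigned_to_outcome):
    # Base-2 logarithmic score: 0 is a perfect score, more negative is worse.
    return math.log2(prob_assigned_to_outcome)

print(log_score(0.70))    # ~ -0.51: close to the aggregate's average of -0.5
print(log_score(0.0625))  # -4.0: a ~6% forecast on something that then happened
```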
This is roughly similar to the score you’d get if you predict an event to be ~5% likely, and it still happens. Cost-effectiveness ================== ### High-level observations This experiment was run to get a sense of whether forecasters could do a competent job forecasting the work of Elizabeth (i.e. as an "existence proof"). It was not meant to show cost-effectiveness, which could involve many efficiency optimizations not yet undertaken. However, we realized that the network-adjacent forecasting may have been reasonably cost-effective and think that a cost-effectiveness analysis of this work could provide a baseline for future investigations. To compute the cost-effectiveness of doing research using amplification, we look at two measures: the information gain from predictors relative to the evaluator, and the cost of predictors relative to the evaluator. *Benefit/cost ratio = % information gain provided by forecasters relative to the evaluator / % cost of forecasters relative to the evaluator* If a benefit/cost ratio of significantly over 1 can be achieved, then this could mean that forecasting could be useful to partially augment or replace established evaluators. Under these circumstances, each unit of resources invested in gaining information from forecasters has higher returns than just asking the evaluator directly. Some observations about this. First, note that this does *not* require forecasters to be as accurate as the evaluator. For example, if they only provide 10% as much value, but at 1% of the opportunity cost, this is still a good return on investment. Second, amplification can still be worthwhile even if the benefit-cost ratio is < 1. In particular: 1. Forecasters can work in parallel and hence answer a much larger number of questions, within a set time-frame, than would be feasible for some evaluators. 2. Pre-work by forecasters might also improve the speed and quality of the evaluator's work, if she has access to their research [13]. 3. Having a low benefit-cost ratio can still serve as an existence proof that amplification of generalist research is possible, as long as the benefit is high. One might then run further optimised tests which try harder to reduce cost. ### Results The opportunity cost is computed using Guesstimate models linked below, based on survey data from participants collected after the experiment. We are attempting to include both hourly value of time and value of altruistic externalities. We did not include the time that our own team spent figuring out and organising this work. For example, the estimated cost ratio for the network-adjacent forecasters in this experiment was 120%, meaning that the cost of obtaining a final aggregate prediction for a question was 20% higher when asking this group of 19 forecasters than when asking Elizabeth directly, all things considered. The value is computed using the following model (interactive calculation linked below). We assume Elizabeth is an unbiased evaluator, and so the true value of a question is the mean of her resolution distribution. We then treat this point estimate as the *true* resolution, and compare to it the scores of Elizabeth's resolution, had it been a prediction, vs. her initial prior; and the final aggregate vs. her initial prior. All scores are weighed by the importance of the question, as assigned by Elizabeth on a 1-10 scale [14]. Results were as follows. 
![](https://i.imgur.com/5MYrPpu.png)*(Links to models: network-adjacent [cost ratio](https://www.getguesstimate.com/models/14521) and [value ratio](https://observablehq.com/@jjj/amplification-effectiveness), online crowdworker [cost ratio](https://www.getguesstimate.com/models/14614) and [value ratio](https://observablehq.com/@jjj/amplification-effectiveness-positly).)* The negative value ratio for the control group indicates that they assigned a lower probability to the mean of Elizabeth's resolution than she herself did when submitting her prior. Hence just accepting the means from those forecasts would have made us worse off, epistemically, than trusting the priors. This observation is in tension with some of the above graphs, which show a tiny increase in average log score between crowdworkers and Elizabeth's priors. We are somewhat uncertain about the reason for this, though we think it is as follows: they were worse at capturing the resolution means than the prior, but they were sometimes better at capturing the resolution distribution (likely because the average of them added on more uncertainty). And the value ratio only measures the former of those improvements. Another question to consider when thinking about cost-effectiveness is diminishing returns. The following graph shows how the information gain from additional predictions diminished over time. ![](https://i.imgur.com/Faec93I.png)The x-axis shows the number of predictions after Elizabeth's prior (which would be prediction number 0). The y-axis shows how much closer to a perfect score each prediction moved the aggregate, as a percentage of the distance between the previous aggregate and the perfect log score of 0 [15]. We observe that for the network-adjacent forecasters, the majority of value came from the first two predictions, while the online crowdworkers never reliably reduced uncertainty. Several hypotheses might explain this, including that: * The first predictor on most questions was also one of the best participants in the experiment * Most of the value of the predictors came from increasing uncertainty, and already after averaging 2-3 distributions we had gotten most of the effect there * Later participants were anchored by the clearly visible current aggregate and prior predictions. Future experiments might attempt to test these hypotheses. Perspectives on impact and challenges ===================================== This section summarises some different perspectives on what the current experiment is trying to accomplish and why that might be exciting, as well as some of the challenges it faces. To keep things manageable, we simply give a high-level overview here and discuss each point in more detail in [a separate post](https://www.lesswrong.com/posts/cLtdcxu9E4noRSons/amplifying-generalist-research-models-of-impact-and). There are several perspectives here given that the experiment was designed to explore multiple relevant ideas, rather than testing a particular, narrow hypothesis. As a result, the current design is not optimising very strongly for any of these possible uses, and it is also plausible that its impact and effectiveness will vary widely between uses. 
Perspectives on impact ---------------------- * **Mitigating capacity bottlenecks.** The effective altruism and rationality communities face rather large bottlenecks in many areas, such as allocating funding, delegating research, [vetting](https://forum.effectivealtruism.org/posts/G2Pfpkcwv3bJNF8o9/ea-is-vetting-constrained) [talent](https://forum.effectivealtruism.org/posts/jmbP9rwXncfa32seH/after-one-year-of-applying-for-ea-jobs-it-is-really-really) and [reviewing content](https://www.lesswrong.com/posts/qXwmMkEBLL59NkvYR/the-lesswrong-2018-review). The current setup might provide a means of mitigating some of those -- a scalable mechanism of outsourcing intellectual labor. * **A way for intellectual talent to build and demonstrate their skills.** Even if this set-up can’t make new intellectual progress, it might be useful to have a venue where junior researchers can demonstrate their ability to predict the conclusions of senior researchers. This might provide an objective signal of epistemic abilities not dependent on detailed social knowledge. * **Exploring new institutions for collaborative intellectual progress.** Academia has a vast backlog of promising ideas for institutions to help us think better in groups. Currently we seem bottlenecked by practical implementation and product development. * **Getting more data on empirical claims made by the Iterated Amplification AI alignment agenda.** These ideas inspired the experiment. (However, our aim was more practical and short-term, rather than looking for theoretical insights useful in the long-term.) * **Exploring forecasting with distributions.** Little is known about humans doing forecasting with full distributions rather than point estimates (e.g. “79%”), partly because there hasn’t been easy tooling for such experiments. This experiment gave us some cheap data on this question. * **Forecasting fuzzy things.** A major challenge with forecasting tournaments is the need to concretely specify questions; in order to clearly determine who was right and allocate payouts. The current experiments tries to get the best of both worlds -- the incentive properties of forecasting tournaments and the flexibility of generalist research in tackling more nebulous questions. * **Shooting for unknown unknowns.** In addition to being an “experiment”, this project is also an “exploration”. We have an intuition that there are interesting things to be discovered at the intersection of forecasting, mechanism design, and generalist research. But we don’t yet know what they are. Challenges and future experiments --------------------------------- * **Complexity and unfamiliarity of experiment.** The current experiment had many technical moving parts. This makes it challenging to understand for both participants and potential clients who want to use it in their own organisations. * **Trust in evaluations.** The extent to which these results are meaningful depends on your trust in Elizabeth Van Nostrand’s ability to evaluate questions. We think is partly an inescapable problem, but also expect clever mechanisms and more transparency to be able to make large improvements. * **Correlations between predictions and evaluations.** Elizabeth had access to a filtered version of forecaster comments when she made her evaluations. This introduces a potential source of bias and a “self-fulfilling prophecy” dynamic in the experiments. * **Difficulty of converting mental models into quantitative distributions.** It’s hard to turn nuanced mental models into numbers. 
We think a solution is to have a “division of labor”, where some people just build models/write comments and others focus on quantifying them. We’re working on incentive schemes that work in this context. * **Anti-correlation between importance and “outsourceability”.** The intellectual questions which are most important to answer might be different from the ones that are easiest to outsource, in a way which leaves very little value on the table in outsourcing. * **Overhead of question generation.** Creating good forecasting questions is hard and time-consuming, and better tooling is needed to support this. * **Overly competitive scoring rules.** Prediction markets and tournaments tend to be zero-sum games, with negative incentives for helping other participants or sharing best practices. To solve this we’re designing and testing improved scoring rules which directly incentivise collaboration. Footnotes ========= [1] Examples include: AI alignment, global coordination, macrostrategy and cause prioritisation. [2] We chose the industrial revolution as a theme since it seems like a historical period with many lessons for improving the world. It was a time of radical change in productivity along with many societal transformations, and might hold lessons for future transformations and our ability to influence those. [3] Some readers might also prefer the terms “integration experiment” and “sandbox experiment”. [4] In traditional forecasting tournaments, participants state their beliefs in a binary event (e.g. “Will team X win this basketball tournament?”) using a number between 0% and 100%. This is referred to as a credence, and it captures their uncertainty in a quantitative way. The terminology comes from Bayesian probability theory, where rational agents are modelled as assigning credences to claims and then updating those credences on new information, in a way uniquely determined by Bayes’ rule. However, as a human, we might not always be sure what the right credence for a claim is. If I had an unlimited time to think, I might arrive at the right number. (This is captured by the “after 10 more hours of research” claim.) But if I don’t have a lot of time, I have some uncertainty about exactly how uncertain I should be. This is reflected in our use of distributions. [5] In scaling the number of claims beyond what Elizabeth can evaluate, we would also have to proportionally increase the rewards. [6] Many of these participants had previous experience with forecasting, and some were “superforecaster-equivalents” in terms of their skill. Others had less experience with forecasting but were competent in quantitative reasoning. For future experiments, we ought to survey participants about their previous experience. [7] The payments were doubled after we had seen the results, as the initial scoring scheme proved too harsh on forecasters. [8] The incentive schemes looked somewhat different between groups, mostly owing to the fact that we tried to reduce the complexity necessary to understand the experiment for the online crowdworkers, who to our knowledge had no prior experience with forecasting. They were each paid at a rate of ~$15 an hour, with the opportunity for the top three forecasters to receive a bonus of $35. [9] Elizabeth did this by copying the claims into a google doc, numbering them, and then using Google random number generator to pick claims. 
For a future scaled up version of the experiment, one could use the [public randomness beacon](https://www.google.com/search?q=public+randomness+beacon&oq=public+randomness+beacon&aqs=chrome..69i57j33.2772j0j1&sourceid=chrome&ie=UTF-8) as a transparent and reproducible way to sample claims.

[10] In analysing the data we also plotted 95% confidence intervals by multiplying the standard error by 1.96. In that graph the two lines intersect for something like 80%-90% of the x-axis. You can plot and analyse them yourself [here](https://observablehq.com/@nunosempere/plots-for-the-amplification-experiment).

[11] We only display the first four resolutions (which were randomly chosen in the course of the experiment), so as not to take up too much space. All resolution graphs can be found [here](https://observablehq.com/@jjj/untitled/2).

[12] The distributions are calculated using Monte Carlo sampling and kernel smoothing, so they are not perfectly smooth. This also led to errors where bounds fell outside of the 0 to 100 range.

[13] For this experiment, Elizabeth informally reports that the time saved ranged from 0-60 minutes per question, but she did not keep the kind of notes required to estimate an average.

[14] This is a rough way of calculating this, and we can imagine there being better ways of doing it. Suggestions are welcome.

[15] Using this transformation allows us to visualise the fact that smaller scores obtained later in the contest can still be as impressive as earlier scores. For example, moving from 90% confidence to 99% confidence takes roughly as much evidence as moving from 50% to 90% confidence. Phrased in terms of odds ratios, both updates involve evidence of strength roughly 10:1. (A short numerical illustration is included after the acknowledgements below.)

Participate in future experiments or run your own
=================================================

[Foretold.io](https://www.lesswrong.com/posts/wCwii4QMA79GmyKz5/introducing-foretold-io-a-new-open-source-prediction) was built as an open platform to enable more experimentation with prediction-related ideas. We have also made [data and analysis calculations](https://observablehq.com/@jjj/untitled/2) from this experiment publicly available.

If you’d like to:

* Run your own experiments on other questions
* Do additional analysis on this experimental data
* Use an amplification set-up within your organisation

We’d be happy to consider providing advice, operational support, and funding for forecasters. Just comment here or reach out to [this email](mailto:jacob@parallelforecast.com).

If you’d like to participate as a forecaster in future prediction experiments, you can [sign up here](https://mailchi.mp/60b8ea91e592/ol3ptgmr5d).

Acknowledgements
================

Funding for this project was provided by the Berkeley Existential Risk Initiative and the EA Long-term Future Fund.

We thank Beth Barnes and Owain Evans for helpful discussion. We are also very thankful to all the participants.
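As a small illustration of the log-odds point in footnote [15] (this is our own toy calculation, not part of the experiment's analysis code):

```python
import math

def odds(p):
    """Convert a probability into odds in favor, e.g. 0.9 -> 9:1."""
    return p / (1 - p)

def bayes_factor(p_before, p_after):
    """Strength of evidence (as an odds ratio) needed to move between two credences."""
    return odds(p_after) / odds(p_before)

print(bayes_factor(0.50, 0.90))  # 9.0  -- roughly 10:1 evidence
print(bayes_factor(0.90, 0.99))  # 11.0 -- also roughly 10:1 evidence
print(math.log10(odds(0.99)) - math.log10(odds(0.90)))  # ~1.04 units on a log-odds scale
```

On a log-odds scale these two updates are nearly the same size, which is why later, smaller-looking probability changes can still reflect substantial evidence.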
fefb38ea-f058-4c42-afcd-eb3a1b9ae5b1
trentmkelly/LessWrong-43k
LessWrong
"Memento Mori", Said The Confessor Abstract The fear of death acts as a sort of master key for introductory rationality concepts. Examining the fear of death ties all the rationality basics together into a coherent framework, including: * Map/Territory Errors * Something To Protect * Keeping Your Identity Small * Atheism * X-Risk Read more ---------------------------------------- Small brain: Don't think about death. Shining tomagraph: "After I die I'll go to heaven because I'm a good person." Expanding brain: "God isn't real, I find it more comforting to think that this isn't all a test." Galaxy Brain: Practice dying.
0f475488-29ff-4089-9121-1ba68cd16f36
trentmkelly/LessWrong-43k
LessWrong
META: Which posts are appropriate for the articles section vs. the discussion section? Should my post be sent to the discussion section or the articles section of Less Wrong? The published guidelines are slim. Clearly, the following should be sent to discussion: * Links * Posts lacking the clarity, importance, and writing quality expected of front-page articles * Quick, informal comments or questions The articles section is intended for major announcements, meetup announcements, and posts about "refining the art of human rationality" (or AI, probably) that exhibit "substantive new content, clear argument, good writing, popularity, and importance." Yet, some have requested clearer guidelines than this. For example, my article Back to the Basics of Rationality was somewhat of a "meta" post, and AnnaSalomon suggested it be moved to the discussion section, but shortly thereafter it was promoted to the front page and up-voted to 69 points. Eliezer requested that meta posts on the front page be kept to a minimum, but not to zero. Maybe that's close enough to a guideline. But consider these issues: * I'm writing a CliffsNotes summary of David Chalmers article "The Singularity: A Philosophical Analysis." The subject matter is appropriate, the paper is important and recent, and the post will be well-written and well-sourced. But there's nothing in the post that is original. Is this appropriate for the articles section? * What's the policy on publishing sequences? Those require a large investment from the author, and may not always end up being completed unless they are fully written in advance of publishing the first post in the sequence. * Besides rationality, AI, and the Less Wrong community, what topics are appropriate for the articles section? My recent overview of scientific self-help is one of the most up-voted posts in the history of LW, and has a strong link to instrumental rationality, but is otherwise somewhat outside the usual subject matter of LW. Political posts are discouraged, but what subjects should be allowed? Thoughts?
dfd9559e-c60f-4a33-90f0-37b580ab641f
trentmkelly/LessWrong-43k
LessWrong
[Linkpost] Mark Zuckerberg confronted about Meta's Llama 2 AI's ability to give users detailed guidance on making anthrax - Business Insider > Several tech leaders descended upon Capitol Hill last week to discuss the rapid expansion of generative AI. It was a mostly staid meeting until the potential harms from Meta's new Llama 2 model came up. > > During the discussion, attended by most of the Senate's 100 members, Tristan Harris, a co-founder of the Center for Humane Technology, said he recently had engineers take Meta's powerful large language model Llama 2 for a "test drive." After some prompting, Harris said that a chat with Llama 2 came back with a detailed walkthrough of how to create anthrax as a biological weapon, according to one person familiar with the forum and two senators present. That prompted a testy exchange between Harris and Mark Zuckerberg, co-founder and CEO of Meta, formerly known as Facebook. Most specifics of the exchange between Harris and Zuckerberg have not been previously reported, although Harris receiving directions from Llama 2 about an unidentified biological weapon was noted by The Washington Post. > > Among the two dozen tech leaders at the forum were Elon Musk, owner of Twitter and CEO of Tesla and SpaceX; Sam Altman, CEO of OpenAI; Satya Nadella, CEO of Microsoft; Jensen Huang, CEO of Nvidia; and Sundar Pichai, CEO of Google. > > The gathering was led by Senate Majority Leader Chuck Schumer, Democratic Sen. Martin Heinrich, and Republican Sens. Mike Rounds, and Todd Young, who all make up a new "artificial intelligence working group." The group formed earlier this year, a few months after OpenAI's ChatGPT bot became known the world over. > > During the session, Zuckerberg attempted to downplay Harris' statement that Llama 2 can tell users how to make anthrax, saying anyone who was looking for such a guide could find out how to make anthrax on YouTube, according to both of the senators present. Harris rejected the argument, saying such guides do not come up on YouTube, and even if they did, the level of detail and guidance provided by Llama 2 was unique to such a po
0e057009-c197-4348-b50b-4cc3bdff3965
trentmkelly/LessWrong-43k
LessWrong
Generative ML in chemistry is bottlenecked by synthesis

Introduction

Every single time I design a protein — using ML or otherwise — I am confident that it is capable of being manufactured. I simply reach out to Twist Biosciences, have them create a plasmid that encodes for the amino acids that make up my proteins, push that plasmid into a cell, and the cell will pump out the protein I created. Maybe the cell cannot efficiently create the protein. Maybe the protein sucks. Maybe it will fold in weird ways, isn’t thermostable, or has some other undesirable characteristic. But the way the protein is created is simple, close-ended, cheap, and almost always possible to do.

The same is not true of the rest of chemistry. For now, let’s focus purely on small molecules, but this thesis applies even more so across all of chemistry. Of the 10^60 small molecules that are theorized to exist, most are likely extremely challenging to create. Cellular machinery to create arbitrary small molecules doesn’t exist like it does for proteins, which are limited by the 20 amino-acid alphabet. While it is fully within the grasp of a team to create millions of de novo proteins, the same is not true for de novo molecules in general (de novo means ‘designed from scratch’). Each chemical, for the most part, must go through its own custom design process.

Because of this gap in ‘ability-to-scale’ for all of non-protein chemistry, generative models in chemistry are fundamentally bottlenecked by synthesis. This essay will discuss this in more depth, starting from the ground up with the basics behind small molecules, why synthesis is hard, how the ‘hardness’ applies to ML, and two potential fixes. As is usually the case in my Argument posts, I’ll also offer a steelman to this whole essay.

To be clear, this essay will not present a fundamentally new idea. If anything, it’s such an obvious point that I’d imagine nothing I’ll write here will be new or interesting to people in the field. But I still think it’s worth sketching out the argument for those who ar
244c0fe5-af6f-4b91-b34c-7322a00348a2
trentmkelly/LessWrong-43k
LessWrong
Meetup : Madison Monday Meetup Discussion article for the meetup : Madison Monday Meetup WHEN: 07 November 2011 08:00:00PM (-0500) WHERE: 1831 Monroe St., Madison, WI Along with the usual likely discussions and likely games, I'll run some exercises on group estimation and Fermi calculations, and then we'll play a few rounds of Paranoid Debating. eta: And, if you're in Madison and haven't already, be sure to sign up for the Madison mailing list Discussion article for the meetup : Madison Monday Meetup
bc066ef1-499c-4916-926d-c75cd42ed0eb
trentmkelly/LessWrong-43k
LessWrong
Coronavirus is Here

Even in the best case scenarios, things are going to get a lot worse from here before they get better. Please, please, please, do not rely on me here to tell you what is going on or what to do. Even more than that, please, please, please, do not take this as saying that you shouldn’t do a lot more than the things I’m saying here.

This is me doing what various stupid reasons prevented me from doing earlier, deciding that saying something is a lot better than saying nothing, and hoping it will do some good. This is not a model of what is happening, or an attempt to justify what you should do, because attempting to do that would cause me to continue to say nothing, and that seems worse.

If you already are taking the situation seriously and making serious preparations, this probably won’t tell you anything new. I am totally fine with that. This is me seeing something and saying something. If you need some sort of permission to yourself to acknowledge that this is happening, and that you need to take action now to prepare, this is one more instance of that. You have it.

It will be important that you retain the ability, when things get bad, to keep your head on straight. Prepare now, including mentally, as best you can, for people you know and care about getting sick and dying, because this might well happen to you regardless of what actions everyone involved takes.

Here is an article describing the symptoms, if you are not yet familiar with that. It’s primarily dry cough and pneumonia.

From the statistics I have seen, with super wide error bars, overall risk of death if you do get infected could be up to about 2%. But that is only an average. It varies wildly based on age and prior health, and presumably access to health care. I’ve also seen reasonable claims that initial degree of exposure matters here.

Risk of death chart by age:

Age                Death Rate
80+ years old      14.8%
70-79 years old    8.0%
60-69 years old    3.6%
50-59 years old    1.3%
40-49 years old    0.4%
30-39 ye
4082594b-af7e-48d3-bd74-aeddc8da5e31
trentmkelly/LessWrong-43k
LessWrong
[LINK] Elon Musk interested in AI safety http://www.businessinsider.com/musk-on-artificial-intelligence-2014-6 Summary: The only non-Tesla/SpaceX/SolarCity companies that Musk is invested in are DeepMind and Vicarious, due to vague feelings of wanting AI to not unintentionally go Terminator. The best part of the article is the end, where he acknowledges that Mars isn't a get-out-of-jail-free card any more: "KE: Or escape to mars if there is no other option. MUSK: The A.I. will chase us there pretty quickly." Thinking of SpaceX not as a childhood dream, but as one specific arms supplier in the war against existential risks, puts things into perspective for him.
71691e43-9f79-4ca7-b558-0eaf7d0cd278
trentmkelly/LessWrong-43k
LessWrong
US Govt Whistleblower guide (incomplete draft) 2025-05-20 WARNING This guide is currently a research draft. DO NOT follow the guide as of today, if you are a whistleblower. I do not yet endorse following this guide as being sufficient to protect your safety. I am only sharing this to get research feedback. I will update you once I do think the guide is good enough. Main Why this guide? * My personal motivation here is especially focussed on whistleblowers working at an AI company in the US hoping to build superintelligent AI. The guide may also be useful for others. * Some of the whistleblower guides online are IMO blatantly against your true interests, and in the interests of journalists or lawyers. I wanted to write a guide that's in your true interests as a whistleblower. * A lot of cybersecurity guides online fail to acknowledge that the way to escape the NSA with high success rate is not to improve your opsec, it's to flee the country. But the latter is less fun for them to geek out about. So I wanted to write about that. Disclaimer * Geopolitical disclaimer * This guide is based entirely on publicly available information. * This guide is based on the geopolitical situation as of 2025-05. Some parts of this guide will no longer apply if the US govt were to enter a war (not via proxy) with the govts of Russia, China, Ecuador or any of the other countries mentioned in this document. * Case study: Manhattan project spies Julius and Ethel Rosenberg were executed. This guide assumes US citizen whistleblowers are unlikely to be executed, which is likely true during peacetime but may or may not be true during war. * Expertise disclaimer * This should go without saying, but I don't have credentials or expert-level knowledge on cybersecurity or international law or psychology or any other subject. I have above-average knowledge on cybersecurity and below-average knowledge on all the other subjects. * This is a "first pass" guide. * After reading this guide, you should also do your own ind
a41d60ec-b8dc-4275-b717-f9b6be4801a4
trentmkelly/LessWrong-43k
LessWrong
You Are Likely To Be Eaten By A Grue Previously in sequence/sequence index: Living Luminously Next in sequence: Let There Be Light Luminosity is fun, useful to others, and important in self-improvement.  You should learn about it with this sequence. Luminosity?  Pah!  Who needs it? It's a legitimate question.  The typical human gets through life with astonishingly little introspection, much less careful, accurate introspection.  Our models of ourselves are sometimes even worse than our models of each other - we have more data, but also more biases loading up our reflection with noise.  Most of the time, most people act on their emotions and beliefs directly, without the interposition of self-aware deliberation.  And this doesn't usually seem to get anyone maimed or killed - when was the last time a gravestone read "Here Lies Our Dear Taylor, Who Might Be Alive Today With More Internal Clarity About The Nature Of Memory Retrieval"?  Nonsense.  If Taylor needs to remember something, it'll present itself, or not, and if there's a chronic problem with the latter then Taylor can export memories to the environment.  Figuring out how the memories are stored in the first place and tweaking that is not high on the to-do list. Still, I think it's worth investing considerable time and effort into improving your luminosity.  I submit three reasons why this is so. First, you are a fascinating creature.  It's just plain fun and rewarding to delve into your own mind.  People in general are among the most complex, intriguing things in the world.  You're no less so.  You have lived a fair number of observer-moments.  Starting with a native architecture that is pretty special all by itself, you've accumulated a complex set of filters by which you interpret your input - remembered past, experienced present, and anticipated future.  You like things; you want things; you believe things; you expect things; you feel things.  There's a lot of stuff rolled up and tucked into the fissures of your brain.  Wouldn't you like
7697a48a-a81d-4290-814a-f30e7b714c82
StampyAI/alignment-research-dataset/arbital
Arbital
Morphism

A morphism is the abstract representation of a [relation](https://arbital.com/p/3nt) between [mathematical objects](https://arbital.com/p/-mathematical_object). Usually, it is used to refer to [functions](https://arbital.com/p/3jy) mapping elements of one set to another, but it may represent a more general notion of a relation in [https://arbital.com/p/-4c7](https://arbital.com/p/-4c7).

##Isomorphisms##

To understand a morphism, it is easier to first understand the concept of an [https://arbital.com/p/-4f4](https://arbital.com/p/-4f4). Two mathematical structures (say two [groups](https://arbital.com/p/-3gd)) are called **isomorphic** if they are indistinguishable using the information of the language and theory under consideration.

Imagine you are the Count von Count. You care only about counting things. You don't care what it is you count, you just care how many there are. You decide that you want to collect objects you count into boxes, and you consider two boxes equal if there are the same number of elements in both boxes. How do you know if two boxes have the same number of elements? You pair them up and see if there are any left over in either box. If there aren't any left over, then the boxes are "bijective" and the way that you paired them up is a [bijection](https://arbital.com/p/499). A bijection is a simple form of an isomorphism and the boxes are said to be isomorphic.

For example, the theory of groups only talks about the way that elements are combined via the group operation (and whether they are the [https://arbital.com/p/-identity](https://arbital.com/p/-identity) or [inverses](https://arbital.com/p/-4sn)), but that information is already given by how elements are combined under the group operation (hereafter called multiplication). The theory does not care in what order elements are put, or what they are labelled, or even what they are. Hence, if you are using the language and theory of groups, you want to say two groups are essentially indistinguishable if their multiplication acts the same way.
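To state the Count von Count picture precisely (this formal statement is standard textbook material, not taken from the page above): a function $f : X \to Y$ is a bijection if it pairs the two sets up exactly, i.e. it is both injective and surjective. For groups, an isomorphism additionally has to respect the multiplication:

$$\varphi : G \to H \text{ is an isomorphism if } \varphi \text{ is a bijection and } \varphi(ab) = \varphi(a)\,\varphi(b) \text{ for all } a, b \in G.$$

Two groups are then "essentially indistinguishable" exactly when such a $\varphi$ exists between them.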
ac8a48bc-4e7e-40c7-a79c-e2f82e9b37d0
trentmkelly/LessWrong-43k
LessWrong
I may have just had a dangerous thought. I'm interested in discussing this with someone, non-publicly. It's safe to know about personally, but it's not something I'd like people in general to know. I'm really not sure if there is a protocol for this sort of thing.
94095b2f-f236-40bc-973a-14e976defdb4
StampyAI/alignment-research-dataset/arxiv
Arxiv
Shared Autonomy via Deep Reinforcement Learning I Introduction --------------- Imagine the task of flying a quadrotor to a safe landing site. This problem is challenging for both humans and robots, but in different ways. For a human, controlling many degrees of freedom at once while dealing with unfamiliar quadrotor dynamics is hard. For a robot, understanding what makes a good landing location can be difficult, especially when the human has a future task in mind that might influence where they want the quadrotor to land now. Shared autonomy [[7](#bib.bib7), [1](#bib.bib1)] aims to address this problem by combining user input with automated assistance. We focus on an area of shared autonomy in which information about the user’s intent is hidden from the robot, in which prior work [[20](#bib.bib20), [12](#bib.bib12), [23](#bib.bib23), [15](#bib.bib15), [9](#bib.bib9)] has proposed approaches that infer the user’s goal from their input and autonomously act to achieve it. These approaches tend to assume (1) a known dynamics model of the world, (2) a known goal representation (a set of possible goals), and (3) a known user policy given a goal. ![An overview of our method for assisting humans with real-time control tasks using model-free shared autonomy and deep reinforcement learning. We empirically evaluate our method on simulated pilots and real users playing the Lunar Lander game (a) and flying a quadrotor (b,c).](https://media.arxiv-vanity.com/render-output/7360534/front-fig.jpg) Fig. 1: An overview of our method for assisting humans with real-time control tasks using model-free shared autonomy and deep reinforcement learning. We empirically evaluate our method on simulated pilots and real users playing the Lunar Lander game (a) and flying a quadrotor (b,c). For many real-world tasks, these assumptions constrain the adaptability and generality of the system. (1) Fitting an accurate global dynamics model can be more difficult than learning to perform the task. (2) Assuming a fixed representation of the user’s goal (e.g., a discrete set of graspable objects) reduces the flexibility of the system to perform tasks in which the users’ desires are difficult to specify but easy to evaluate (e.g., goal regions, or success defined directly on raw pixel input). (3) User input can exhibit systematic suboptimality that prevents standard goal inference algorithms from recovering user intent by inverting a generative model of behavior. Our goal is to devise a shared autonomy method that lifts these assumptions, and our primary contribution is a model-free deep reinforcement learning algorithm for shared autonomy that represents a step in this direction. The key idea is that training an end-to-end mapping from environmental observation and user input to agent action, with task reward as the only form of supervision, removes the need for known dynamics, a particular goal representation, and even a user behavior model. From the agent’s perspective, the user acts like a prior policy that can be fine-tuned, and an additional sensor generating observations from which the agent can implicitly decode the user’s private information. From the user’s perspective, the agent behaves like an adaptive interface that learns a personalized mapping from user commands to actions that maximizes task reward. 
Our method uses human-in-the-loop deep Q-learning to learn an approximate state-action value function that computes the expected future return of an action given the current environmental observation and the user’s suggested control. Our assistive agent executes the closest high-value action to the user’s suggested control. The reward function for the agent is a combination of known terms computed for every state, and a terminal reward provided by the user upon succeeding or failing at the task. We apply this method to two real-time assistive control problems: the Lunar Lander game and a quadrotor landing task (see Figure [1](#S1.F1 "Fig. 1 ‣ I Introduction ‣ Shared Autonomy via Deep Reinforcement Learning")). To mitigate the large sample complexity of deep reinforcement learning in experiments with real human pilots, we show that we can pretrain the assistive agent in simulation without a pilot in the loop, and then fine-tune the agent to adapt it to a human pilot. Our studies with both human and simulated pilots suggest that our method can successfully improve pilot performance. We find that our method is capable of adapting to the unique types of suboptimality exhibited by different simulated pilots, and that by varying a hyperparameter that controls our agent’s tolerance for suboptimal pilot controls, we are able to help simulated pilots who need different amounts of assistance. With human pilots, our method substantially improves task success and reduces catastrophic failure. Finally, we show that when the user policy or goal representation are known, our method can be combined with adaptations of existing techniques to exploit this knowledge. Ii Related Work ---------------- Robotic teleoperation. We build on shared autonomy work in which the system is initially unaware of the user’s goal [[8](#bib.bib8), [6](#bib.bib6), [20](#bib.bib20), [12](#bib.bib12), [23](#bib.bib23), [15](#bib.bib15), [9](#bib.bib9)] and explore problem statements with unknown dynamics, unknown user policy, and unknown goal representation. The parallel autonomy framework [[26](#bib.bib26)] approaches shared-control teleoperation from a different angle: instead of predicting user intent, it minimally adjusts user input to achieve safe trajectories for tasks like semi-autonomous driving. Our agent’s policy of executing a near-optimal action closest to the human’s suggestion is inspired by this approach. Existing work in parallel autonomy requires analytic descriptions of the environment, such as the explicit locations of road boundaries and a model of the behavior of other cars. Our method is analogous to parallel autonomy, but for environments in which we do not have a dynamics model. Brain-computer interfaces. A large body of work in brain-machine interfaces uses optimal control and reinforcement learning algorithms to implement closed-loop decoder adaptation [[28](#bib.bib28)] for applications like prosthetic limb controllers that respond to neural signals from myoelectric sensors [[24](#bib.bib24)]. These algorithms typically track desired motion, whereas we focus on tasks with long-horizon goals. Reinforcement learning with human feedback. Shared autonomy enables a semi-autonomous agent to interpret user input at test time. In contrast, human-in-the-loop reinforcement learning frameworks leverage human feedback to train autonomous agents that operate independently of the user at test time [[31](#bib.bib31), [13](#bib.bib13), [14](#bib.bib14), [18](#bib.bib18)]. 
These frameworks are applicable to settings where the agent has access to all task-relevant information (e.g., goals), but the reward function is initially unknown or training can be sped up by human guidance. We focus on the orthogonal setting where the agent does not have direct access to the information that is private to the user and relevant to the task, and will always need to leverage user input to accomplish the task; even after training. This is also the key difference between our method and inverse reinforcement learning [[21](#bib.bib21)] and learning from demonstration [[2](#bib.bib2)], which generally require user interaction during training time but not at test time. Adaptive HCI. While the bulk of the shared autonomy research discussed here exists in the context of the robotics literature, adaptive human-computer interfaces have been explored in computer graphics for animating virtual characters using motion capture data from humans [[5](#bib.bib5)], in natural language processing for learning to act on natural language instructions from humans [[30](#bib.bib30), [3](#bib.bib3)], and in formal methods for verification of semi-autonomous systems [[27](#bib.bib27)]. By not assuming a known user policy, our work also enables agents to adapt to a user’s style of giving input. Iii Background --------------- We first recap the reinforcement learning and shared autonomy problem statements on which we build in our method. ### Iii-a Reinforcement Learning Consider a Markov decision process (MDP) with states S, actions A, transitions T:S×A×S→[0,1], reward function R:S×A×S→R, and discount factor γ∈[0,1]. In cases where the state is not fully observable, we can extend this definition to a partially-observable MDP (POMDP) in which there is an additional set of possible observations Ω and observation function O:S×Ω→[0,1]. The expected future discounted return of taking action a in state s with policy π:S×A→[0,1] is expressed by the state-action value function Qπ(s,a), and the goal in RL is to learn a policy π∗ that maximizes expected future discounted return. One algorithm for solving this problem is Q-learning [[32](#bib.bib32)], which minimizes the Bellman error of the Q function, | | | | | --- | --- | --- | | | Q∗(s,a)−γEs′∼T(⋅∣s,a)[R(s,a,s′)+maxa′∈AQ∗(s′,a′)], | | as a proxy for maximizing return. We will build on this method to implement model-free shared autonomy. ### Iii-B Shared Autonomy Prior work has formalized shared autonomy as a POMDP [[12](#bib.bib12)]. The reward function, known to both the user and agent, depends on a goal g∈G known to the user but unknown to the agent. The set of candidate goals G is known to the agent. The user follows a goal-conditioned policy πh:S×G×H→[0,1] known to the agent, where H is the space of possible user inputs – if the user suggests actions, then H=A. The transition distribution T is known to the agent. The agent’s uncertainty in the goal can be formalized as partial observability, which leads to the following POMDP: the state space ~S=S×G is augmented with the goal, the transition distribution ~T((st+1,g)∣st,g,at)=T(st+1∣st,at) maintains a constant goal, and the observation distribution O(s,ah∣s,g)=πh(ah∣s,g) is given by the user policy where ah∈H is the user input. Prior work assumes the goal space G, user policy πh, and environment dynamics T are known ex-ante to the agent, and solves the POMDP (~S,A,~T,~R,H,O) using approximate methods like hindsight optimization [[12](#bib.bib12)]. 
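As a concrete reading of the Q-learning objective above, the following is a minimal tabular sketch (our own illustration, not the authors' code) of the Bellman error being minimized:

```python
import numpy as np

# Tabular Q-learning on a toy finite MDP: Q(s, a) is pushed toward the
# one-step Bellman target r + gamma * max_a' Q(s', a').
n_states, n_actions = 5, 2
gamma, lr = 0.95, 0.1
Q = np.zeros((n_states, n_actions))

def bellman_error(s, a, r, s_next, done):
    target = r if done else r + gamma * Q[s_next].max()
    return target - Q[s, a]

def q_update(s, a, r, s_next, done):
    # Gradient step on the squared Bellman error (tabular special case).
    Q[s, a] += lr * bellman_error(s, a, r, s_next, done)

# Example transition: from state 0, action 1 yields reward 1.0 and lands in state 3.
q_update(s=0, a=1, r=1.0, s_next=3, done=False)
print(Q[0, 1])  # 0.1 after one update
```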
In the following section, we introduce a different problem statement for shared autonomy which relaxes these assumptions. Iv Model-Free Shared Autonomy ------------------------------ We will relax the standard formulation in Section [III-B](#S3.SS2 "III-B Shared Autonomy ‣ III Background ‣ Shared Autonomy via Deep Reinforcement Learning") to remove first the assumptions of known dynamics and the known observation model πh for the user’s private information, and then the known set of candidate goals G. We introduce a model-free deep reinforcement learning method, with variants that can also take advantage of a known observation model and goal space when they do exist, but still provide assistance even when they are not available. ### Iv-a Problem Statement In our problem formulation, the transition T, the user’s policy πh, and the goal space G are no longer all necessarily known to the robot. The reward function, which still depends on the user’s private information, is decomposed as: | | | | | | --- | --- | --- | --- | | | R(s,a,s′)=Rgeneral(s,a,s′)known% +Rfeedback(s,a,s′)unknown, but % observed. | | (1) | This captures a structure typically present in shared autonomy: there are some terms in the reward that are known, such as the need to avoid collisions. We capture these in Rgeneral. Rfeedback is a user-generated feedback that depends on their private information. We do not know this function. We merely assume the robot is informed when the user provides feedback (e.g., by pressing a button). In practice, the user might simply indicate once per trial whether the robot succeeded or not. Known-User-Policy: Unknown dynamics, known goal space and user policy. In this setting, the transition T is unknown, but we have access to both G and the user’s policy πh(ah|s,g). Having access to G structures Rfeedback, which is now parameterized by the goal according to Rfeedback(s,a,s′;g), and assigns high reward when s′=g, and 0 otherwise, without requiring manual indication from the user. We do not know g, but having access to πh allows us to infer g via Bayesian inference. Known-Goal-Space: Unknown dynamics and user policy, known goal space. We also consider a version of the problem where we know G, but do not make assumptions about the user’s policy πh. In this case, Rfeedback is still parameterized by the goal, but we must use a classification or regression model to predict the goal from the user’s actions. Min-Assumptions: Unknown dynamics, user policy, and goal space. Most of our experiments will be concerned with this setting, where we no longer assume a goal representation. This provides us with a maximally general approach, where the user might imagine whichever goal they prefer, without the need to explicitly define the space of goals in advance. In this case, we do not know the functional form of Rfeedback, nor do we assume any parameterization for it, we merely assume the robot can observe it (evaluate it) as it takes actions. This is typically a sparse terminal reward that signals whether the task was completed successfully, and comes from the user. ### Iv-B Method Overview Our method takes observations of the environment and the user’s controls or inferred goal (when available) as input, and produces a high value action or control output that is as close as possible to the user’s control. We learn state-action values via Q-learning with neural network function approximation. 
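To make the decomposition in Equation (1) concrete, here is a minimal sketch (ours; the shaping terms and constants are made up for illustration) of a reward that combines a known general term with user-generated terminal feedback:

```python
import numpy as np

def r_general(state, action, next_state):
    """Known shaping terms that are sensible for any goal.
    Illustrative only: penalize speed and tilt, in the spirit of the Lunar Lander setup."""
    vx, vy, angle = next_state["vx"], next_state["vy"], next_state["angle"]
    return -0.1 * np.hypot(vx, vy) - 0.1 * abs(angle)

def r_feedback(terminal, user_says_success):
    """User-generated terminal feedback: unknown as a function, but observed
    by the agent (e.g. a button press at the end of the episode)."""
    if not terminal:
        return 0.0
    return 100.0 if user_says_success else -100.0

def reward(state, action, next_state, terminal, user_says_success):
    # Equation (1): R = R_general (known) + R_feedback (observed from the user).
    return r_general(state, action, next_state) + r_feedback(terminal, user_says_success)

s = {"vx": 0.0, "vy": 0.0, "angle": 0.0}
s2 = {"vx": 1.0, "vy": -2.0, "angle": 0.2}
print(reward(s, action=0, next_state=s2, terminal=True, user_says_success=True))
```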
In this section, we will describe how the agent combines user input with environmental observations, motivate and describe our choice of deep Q-learning for training the agent, and describe how the agent shares control with the user. ### Iv-C Incorporating User Control Because we do not know dynamics in any of our problems of interest, we use a deep reinforcement learning agent which maps observations from its sensors to actions (or Q values for each action). We incorporate information from the user as useful observations for the agent. Our method jointly embeds the agent’s observation of the environment st with the information from the user ut by simply concatenating them. The particular form of ut depends on the information that is available. Formally, | | | | | | --- | --- | --- | --- | | | ~st=[stut]. | | (2) | When we do not know G, we use the user’s actions aht as ut. When we know more about the possible user goals and policy, we set ut to the inferred goal ^gt. Known-User-Policy: Incorporating user control via Bayesian goal inference. When the user’s policy is available, it can be used to infer the maximum a posteriori estimate of the goal ^gt. We can instantiate Bayesian goal inference by using maximum entropy inverse reinforcement learning [[33](#bib.bib33)] with a goal-parameterized Q function trained via Q-learning separately from our agent, analogously to prior work [[12](#bib.bib12)]. Each time step produces a better estimate of the goal ^gt, as additional actions reveal more about the user’s intent. Known-Goal-Space: Incorporating user control via supervised goal prediction. When we do not have a convenient model of the user’s policy, we can use supervised prediction to compute the goal estimate ^gt. In this case, we use a separate recurrent LSTM network to predict the goal, conditioned on the sequence of states and user controls observed up to the current time t. Training data is collected from the user. As before, we concatenate ^gt with the agent’s observation of the environment st to get the combined observation ~st. Min-Assumptions: Incorporating user control via raw action embedding. In this setting, which we use in the majority of our experiments, we do not use any explicit goal inference. Instead, the policy directly takes in the user’s actions aht and must learn to implicitly decode the user’s intent and perform the task.111In principle, the user’s past actions are also informative of intent, and a recurrent policy could effectively integrate these. In practice, we found a reactive policy to be more effective for our tasks. To our agent, the user is part of the external environment, and the user’s control is yet another source of observations, much like the output of any of the agent’s other sensors. Because deep neural networks are end-to-end trainable, our agent can discover arbitrary relationships between user controls and observations of the physical environment, rather than explicitly assuming the existence of a goal. Our method jointly embeds the agent’s observation of the environment st with the user’s control input aht by simply concatenating them, henceforth referred to as “raw action embedding.” In this setting, we set ut=aht. ### Iv-D Q-Learning with User Control Model-free reinforcement learning with a human in the loop poses two challenges: (1) maintaining informative user input and (2) minimizing the number of interactions with the environment. 
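Before addressing those challenges, a concrete reading of Equation (2) in the Min-Assumptions setting: the agent's input is just the environment observation with an encoding of the user's action appended. A minimal sketch (ours; the one-hot encoding is an illustrative choice):

```python
import numpy as np

N_ACTIONS = 6  # Lunar Lander: {left, right, off} x {main engine on, off}

def one_hot(action_index, n=N_ACTIONS):
    v = np.zeros(n)
    v[action_index] = 1.0
    return v

def augmented_observation(env_obs, user_action_index):
    """Equation (2): tilde-s_t = [s_t ; u_t], here with u_t = one-hot user action."""
    return np.concatenate([env_obs, one_hot(user_action_index)])

env_obs = np.zeros(8)  # Lunar Lander state is 8-dimensional
print(augmented_observation(env_obs, user_action_index=2).shape)  # (14,)
```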
(1) If the user input is a suggested control, consistently ignoring the suggestion and taking a different action can degrade the quality of user input, since humans rely on feedback from their actions to perform real-time control tasks [[16](#bib.bib16)]. Additionally, some user policies may already be approximately optimal and only require fine-tuning. (2) Many model-free reinforcement learning algorithms require a large number of interactions with the environment, which may be impractical for human users.

To mitigate these two issues, we use deep Q-learning [[32](#bib.bib32)] to learn an approximate state-action value function that can be used to select and evaluate actions. Specifically, we implement neural fitted Q-iteration (NFQI) [[25](#bib.bib25)] with experience replay [[17](#bib.bib17)], a periodically updated target network [[19](#bib.bib19)], and double Q-learning [[29](#bib.bib29)]. This gets around a practical problem with using vanilla deep Q-networks (DQN) [[19](#bib.bib19)] for human-in-the-loop learning: DQN performs a gradient update after each step, which can cause the task interface to lag and disrupt human control, whereas NFQI only performs gradient updates at the end of each episode. We chose Q-learning because (a) it is an off-policy algorithm, so we do not need to exactly follow the agent’s policy and can explicitly trade off control between the user and agent, and (b) off-policy Q-learning tends to be more sample-efficient than policy gradient and Monte Carlo value-based methods [[11](#bib.bib11)].

### Iv-E Control Sharing

Motivated by the discussion of (1) and (a) in the previous section, we use the following behavior policy to select actions during and after Q-learning: select a feasible action closest to the user’s suggestion, where an action is feasible if it isn’t that much worse than the optimal action. Formally,

πα(a∣~s,ah) = δ( a = argmax_{a : Q′(~s,a) ≥ (1−α) Q′(~s,a∗)} f(a,ah) ),     (3)

where f is an action-similarity function and Q′(~s,a) = Q(~s,a) − min_{a′∈A} Q(~s,a′) maintains a sane comparison for negative Q values: if Q(~s,a) < 0 ∀a and 0 < α < 1, then the set of feasible actions would be empty if we didn’t subtract a baseline from the Q values. The constant α∈[0,1] is a hyperparameter that controls the tolerance of the system to suboptimal human suggestions, or equivalently, the amount of assistance. The functional form of the action feasibility condition is motivated by the fact that it is invariant to multiplicative and additive scaling of Q values. The overall algorithm is summarized in Algorithm [1](#alg1 "Algorithm 1 ‣ IV-E Control Sharing ‣ IV Model-Free Shared Autonomy ‣ Shared Autonomy via Deep Reinforcement Learning"). 
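A minimal sketch (ours, not the authors' released code) of how the behavior policy in Equation (3) can be implemented for a discrete action space:

```python
import numpy as np

def shared_control_action(q_values, user_action, similarity, alpha):
    """Pick the closest-to-user action among those whose value is within
    (1 - alpha) of the best, after subtracting the per-state baseline (Eq. 3).

    q_values:    array of Q(s~, a) for each discrete action
    user_action: the pilot's suggested action index
    similarity:  function (a, a_h) -> score, e.g. counting agreeing dimensions
    alpha:       tolerance in [0, 1]; larger alpha defers more to the user
    """
    q_shifted = q_values - q_values.min()        # Q'(s~, a)
    threshold = (1.0 - alpha) * q_shifted.max()  # (1 - alpha) * Q'(s~, a*)
    feasible = np.flatnonzero(q_shifted >= threshold)
    # Among feasible actions, take the one most similar to the user's suggestion.
    return max(feasible, key=lambda a: similarity(a, user_action))

# Toy example with 4 actions where the user suggests action 0.
q = np.array([-2.0, 5.0, 4.9, 1.0])
sim = lambda a, ah: 1.0 if a == ah else 0.0
print(shared_control_action(q, user_action=0, similarity=sim, alpha=0.5))  # 1
```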
Initialize experience replay memory D to capacity N Initialize Q-function with random or pretrained weights θ Initialize target action-value function ^Q with weights θ−=θ for episode =1,M do      for t=1,T do           Sample action at∼πα(at∣~st,aht) using equation [3](#S4.E3 "(3) ‣ IV-E Control Sharing ‣ IV Model-Free Shared Autonomy ‣ Shared Autonomy via Deep Reinforcement Learning")           Execute action at and observe (~st+1,aht+1,rt)           Store transition (~st,at,rt,~st+1) in D           if ~st+1 is terminal then                for k=1 to K do ▹ training loop                     Sample minibatch (~sj,aj,rj,~sj+1) from D                     yj=rj+γ^Q(~sj+1,argmaxa′Q(~sj+1,a′;θ);θ−)                     θ←θ−η∇θ∑j(yj−Q(~sj,aj;θ))2                end for           end if           Every C steps reset ^Q=Q      end for end for Algorithm 1 Human-in-the-loop deep Q-learning V Simulation Experiments ------------------------- We begin our experiments with an analysis of our method under different simulated users. To simplify terminology, we henceforth refer to the user as the *pilot* and the semi-autonomous agent as the *copilot*. Our central hypothesis is that our method can improve a pilot’s performance despite not knowing the world’s dynamics and the pilot’s policy, or assuming a particular set of goals. Simulating pilots enables us to take a deeper dive into different aspects of our method (like the effects of the tolerance parameter α, and of training and testing on different types of input) before testing on real users – after all, simulated pilots do not run out of patience. The Lunar Lander System. We use the Lunar Lander game from OpenAI Gym [[4](#bib.bib4)] (see Figure [1](#S1.F1 "Fig. 1 ‣ I Introduction ‣ Shared Autonomy via Deep Reinforcement Learning")) as our test platform for this part of our experiments. The objective of the game is to pilot the lunar lander vehicle to a specified landing site on the ground without crashing using two lateral thrusters and a main engine. Each episode lasts at most 1000 steps, and runs at 50 frames per second. An episode ends when the lander crashes, flies out of bounds, remains stationary on the ground, or time runs out. The action space A consists of six discrete actions that correspond to the {left, right, off} steering commands and {on, off} main engine settings. The state s∈R8 is an eight-dimensional vector that encodes the lander’s position, velocity, angle, angular velocity, and indicators for contact between the legs of the vehicle and the ground. The x-coordinate of the landing site is selected uniformly at random at the beginning of each episode, and is not directly accessible to the agent through the state s. A human playing the game can see two flags demarcating the landing site, and can supply a suggested control ah∈A – depending on the user policy, ah could be an approximately-optimal action, a signal that encodes the relative direction of the landing site, etc. Thus, in order to perform the task, the agent needs to leverage ah to maneuver toward the landing site. The agent uses a multi-layer perceptron with two hidden layers of 64 units each to approximate the Q function ^Q:S×A2→R. The action-similarity function f(a,ah) in the agent’s behavior policy counts the number of dimensions in which actions a and ah agree (e.g., f((left,on),(left,off))=1). 
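That similarity function can be written directly for the factored Lunar Lander action space; a small sketch (ours), assuming each action is represented as a (steering, main engine) pair:

```python
from itertools import product

STEERING = ["left", "right", "off"]
MAIN_ENGINE = ["on", "off"]
ACTIONS = list(product(STEERING, MAIN_ENGINE))  # the 6 discrete actions

def similarity(a, a_h):
    """Count the dimensions on which the two actions agree,
    e.g. similarity(("left", "on"), ("left", "off")) == 1."""
    return sum(x == y for x, y in zip(a, a_h))

print(similarity(("left", "on"), ("left", "off")))  # 1
```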
As discussed earlier in Section [IV](#S4 "IV Model-Free Shared Autonomy ‣ Shared Autonomy via Deep Reinforcement Learning"), the agent’s reward function is composed of a hard-coded function Rgeneral and a user-generated signal Rfeedback. Rgeneral penalizes speed and tilt, since moving fast and tipping over are generally dangerous for any pilot regardless of their intent. Rfeedback emits a large positive reward at the end of the episode if the vehicle successfully lands at the intended site, or a large negative reward if it crashes or goes out of bounds. ### V-a Testing Unstructured Copilot Performance We now test the central hypothesis that our method, model-free shared autonomy, improves a pilot’s performance. We do this first in the Min-Assumptions setting, where the dynamics, user policy, and goal space are all unknown. We then test our ability to leverage this information when it exists in the next section. Manipulated variables. We manipulate (1) the operator team composition: a solo pilot, a solo copilot, or our method – a pilot assisted by a copilot; and (2) the policy followed by the simulated pilot – a categorical variable that can take on four values: None (always executes a noop), LaggyPilot, NoisyPilot, and SensorPilot. LaggyPilot is an optimal pilot except that it can’t change actions quickly, which for a real human might be the result of poor reaction time. The LaggyPilot policy is trained as follows: augment the state vector with the landing site coordinates, train a reinforcement learning agent using vanilla DQN, and corrupt the trained policy by forcing it to repeat the previously executed action with fixed probability p=0.85. This causes each action to repeat for a number of steps that follows a geometric distribution. NoisyPilot is an optimal pilot except that it occasionally takes the wrong action, which for a real human might be the result of mistakenly pressing the wrong key. It uses the same training procedure as LaggyPilot but follows an ϵ-greedy behavior policy at test time (ϵ=0.3). SensorPilot tries to move toward the landing site by firing the appropriate lateral thruster, but is oblivious to gravity and doesn’t use the main engine; these actions provide enough signal for an assistive copilot to deduce the location of the landing site, which may be all the human is willing to do. Dependent measures. We measure reward, success rate, and crash rate. Hypothesis. We hypothesize that a pilot-copilot team with a simulated pilot will perform better on the Lunar Lander game than a solo pilot or solo copilot. | | | | | | --- | --- | --- | --- | | (1,2) A copilot that leverages input from the synthetic | (1,2) A copilot that leverages input from the synthetic | (1,2) A copilot that leverages input from the synthetic | (1,2) A copilot that leverages input from the synthetic | Fig. 2: (1,2) A copilot that leverages input from the synthetic LaggyPilot outperforms the solo LaggyPilot and solo copilot. The colored bands illustrate the standard error of rewards and success rates for ten different random seeds. Rewards and success rates are smoothed using a moving average with a window size of 20 episodes. (3) The benefit of using Bayesian goal inference or supervised goal prediction depend on α and the user model. Each success rate is averaged over ten different random seeds and the last 100 episodes of training. (4) The effect of varying α depends on the user model. Each success rate is averaged over ten different random seeds and the last 100 episodes of training. 
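To make the simulated pilot corruptions concrete, here is a small sketch (ours) of the Laggy and Noisy perturbations described above, applied to whatever action the underlying trained pilot would take:

```python
import numpy as np

rng = np.random.default_rng(0)
N_ACTIONS = 6

def laggy_pilot(optimal_action, prev_action, p_repeat=0.85):
    """Repeat the previous action with fixed probability, as in LaggyPilot."""
    return prev_action if rng.random() < p_repeat else optimal_action

def noisy_pilot(optimal_action, eps=0.3):
    """Take a uniformly random action with probability eps, as in NoisyPilot."""
    return int(rng.integers(N_ACTIONS)) if rng.random() < eps else optimal_action

# Example: the optimal pilot wants action 3, but its previous action was 1.
print(laggy_pilot(optimal_action=3, prev_action=1))
print(noisy_pilot(optimal_action=3))
```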
| | | | | | | --- | --- | --- | --- | --- | | | Without Copilot | With Copilot | | | | Pilot | Reward | Success Rate | Crash Rate | Reward | Success Rate | Crash Rate | Training Episodes | α | | None | - | - | - | −151±17 | 0.026 | 0.156 | 742 | 0.0 | | Sensor | −479±14 | 0.000 | 1.000 | −82±0 | 0.060 | 0.650 | 800 | 0.2 | | Laggy | −133±16 | 0.150 | 0.750 | 8±30 | 0.287 | 0.186 | 236 | 0.8 | | Noisy | −72±8 | 0.150 | 0.700 | −28±0 | 0.240 | 0.160 | 604 | 0.5 | | Optimal | 75±6 | 0.720 | 0.030 | - | - | - | - | - | TABLE I: Evaluation of simulated pilot-copilot teams on Lunar Lander. Rewards are shown with their standard error on ten different random seeds and the last 100 episodes of copilot training for teams with a copilot; on 100 episodes for teams without a copilot. Analysis. The results in Figure [2](#S5.F2 "Fig. 2 ‣ V-A Testing Unstructured Copilot Performance ‣ V Simulation Experiments ‣ Shared Autonomy via Deep Reinforcement Learning") (first two plots) show that a copilot which leverages input from LaggyPilot outperforms the solo LaggyPilot and solo copilot: the combined pilot-copilot team crashes and goes out of bounds less often, uses less fuel, follows stabler trajectories, and finds the landing site more often than the other two solo teams. The solo copilot and combined pilot-copilot teams learn from experience, whereas the solo LaggyPilot is pretrained and frozen; hence the stationarity of the gray curve. Table [I](#S5.T1 "TABLE I ‣ V-A Testing Unstructured Copilot Performance ‣ V Simulation Experiments ‣ Shared Autonomy via Deep Reinforcement Learning") shows that NoisyPilot and SensorPilot also benefit from assistance, although SensorPilot’s success rate does not substantially increase. To measure the sensitivity of the copilot’s performance to the pilot tolerance hyperparameter α (recall Equation [3](#S4.E3 "(3) ‣ IV-E Control Sharing ‣ IV Model-Free Shared Autonomy ‣ Shared Autonomy via Deep Reinforcement Learning")), we sweep different values of α while shaping the reward Rfeedback to improve the performance of SensorPilot. The results in Figure [2](#S5.F2 "Fig. 2 ‣ V-A Testing Unstructured Copilot Performance ‣ V Simulation Experiments ‣ Shared Autonomy via Deep Reinforcement Learning") (bottom right) show the effects of varying α for different simulated pilot models: α=0 is optimal for SensorPilot, and α≈0.5 is optimal for LaggyPilot and NoisyPilot. ### V-B The Benefit of Structure when Structure Exists In some tasks, the user’s private information will indeed be a goal, and we will indeed know the set of candidate goals and the policy that user follows given a goal. In this section, we show the adaptions of our method for Known-Goal-Space and Known-User-Policy from Section [IV](#S4 "IV Model-Free Shared Autonomy ‣ Shared Autonomy via Deep Reinforcement Learning") can effectively leverage this information when it exists. Manipulated variables. We manipulate (1) the input decoding mechanism – a categorical variable that can take on three values: Bayesian goal inference, supervised goal prediction, and raw action embedding; (2) the policy followed by the simulated pilot – a categorical variable that can take on two values: SensorPilot and LaggyPilot; and (3) the pilot tolerance α∈[0,1] – a continuous variable sampled uniformly across the unit interval. Hypothesis. We hypothesize that a copilot that uses Bayesian goal inference or supervised goal prediction to interpret user control inputs will outperform a copilot that uses raw action embedding. Analysis. 
The results in Figure [2](#S5.F2 "Fig. 2 ‣ V-A Testing Unstructured Copilot Performance ‣ V Simulation Experiments ‣ Shared Autonomy via Deep Reinforcement Learning") (third plot) show that when the goal space and user model are known, Bayesian goal inference and supervised goal prediction outperform raw action embedding. Bayesian goal inference enables much better assistance when the user model is approximately correct: LaggyPilot behaves similarly enough to an optimal pilot that maximum entropy inverse reinforcement learning generates high-accuracy estimates of the landing site. As a result, Bayesian goal inference performs better than supervised goal prediction and raw action embedding on LaggyPilot. We conclude that when an approximately correct user model is available, one should take advantage of it by using Bayesian goal inference instead of supervised goal prediction or raw action embedding. When the user model is unknown ex-ante, then one should use supervised goal prediction instead of raw action embedding. ### V-C Adapting to Diverse Users Next, we investigate to what extent the copilot’s learned policy is adapted to the pilot it assists at training time. User-specific adaptation is important because it would enable our method to generalize to tasks in which users display a range of behavior policies with distinct types of errors that cannot simultaneously be corrected by a general assistance feature. Manipulated variables. We manipulate (1) the policy followed by the simulated pilot used to train the copilot and (2) the policy followed by the simulated pilot used to evaluate the copilot – both categorical variables that can each take on four values: None (always executes a noop), SensorPilot, LaggyPilot, and NoisyPilot. Hypothesis. We hypothesize that the copilot learns an assistive policy that is personalized to the individual user, and that a copilot trained with one type of simulated pilot will perform better if evaluated with the same type of pilot than with a different pilot. | | | | --- | --- | | | Evaluation Pilot | | Training Pilot | None | Sensor | Laggy | Noisy | | None | 0.02 | 0.02 | 0.29 | 0.04 | | Sensor | 0.18 | 0.46 | 0.31 | 0.23 | | Laggy | 0.00 | 0.00 | 0.31 | 0.23 | | Noisy | 0.10 | 0.10 | 0.38 | 0.21 | TABLE II: Training and testing with different pilots on Lunar Lander. Success rates shown for 100 episodes. Analysis. The results in Table [II](#S5.T2 "TABLE II ‣ V-C Adapting to Diverse Users ‣ V Simulation Experiments ‣ Shared Autonomy via Deep Reinforcement Learning") hint that the copilot trained to assist SensorPilot acquires a relatively unique assistive policy, and that assisting SensorPilot requires something qualitatively different than assisting other pilots. A copilot trained with SensorPilot does not help other simulated pilots as well as it helps SensorPilot. Copilots trained with non-SensorPilot pilots do not assist SensorPilot as well as a copilot trained with SensorPilot. In contrast, a copilot evaluated with LaggyPilot or NoisyPilot performs equally well when trained with either of those two pilots. These results may be explained by the fact that SensorPilot implements a goal-signaling policy that is qualitatively distinct from LaggyPilot and NoisyPilot, which both implement policies based on perturbations of an optimal pilot. 
Another takeaway from Table [II](#S5.T2 "TABLE II ‣ V-C Adapting to Diverse Users ‣ V Simulation Experiments ‣ Shared Autonomy via Deep Reinforcement Learning") is that a copilot trained without a pilot learns an assistive policy that is just as effective at helping LaggyPilot as a copilot policy trained with LaggyPilot in the loop. This suggests that the copilot can still learn useful assistive behaviors even when there is no pilot in the loop during training. Furthermore, it may allow us to save human pilots time by pretraining the copilot without the human pilot, then fine-tuning the pretrained copilot with the human pilot. Vi User Study with a Game Agent -------------------------------- We saw in simulation that our method can improve the performance of different kinds of pilots. Next, we test whether it can actually help real people in a teleoperation task. Manipulated variables. We manipulated the team structure: solo copilot, solo human pilot, and our method – human pilot with a copilot. We use the same Lunar Lander environment for this part of the experiment. Rfeedback is a terminal reward as before. Dependent measures. Our objective measures are success rate and crash rate. We additionally introduce some informal subjective measures, where in each condition we ask participants about their experience, to help us understand their perception of the copilot. Hypothesis. We hypothesize that a pilot-copilot team with a real human pilot will perform better than a solo human pilot or solo copilot. Subject allocation. We recruited 11 male and 1 female participants, with an average age of 24. Each participant was provided with the rules of the game and a short practice period of 20 episodes to familiarize themselves with the controls and dynamics. To avoid the confounding effect of humans learning to play the game better over time, we counterbalanced the order of the two conditions that required human play (solo pilot, and assisted pilot). Each condition lasted 30 episodes. To speed up learning, the copilot was pretrained without a pilot in the loop then fine-tuned on data collected from the human pilot. Pilot tolerance α=0.6 was chosen heuristically to match the difficulty of the game and the average human user’s skill level. The default game environment is too challenging for human pilots, so it was modified to make the vehicle’s legs more resistant to crashing on impact with the ground. Additionally, pilot tolerance was set to α=0 when the human was not pressing any keys, and an additional key was introduced to allow the user to explicitly enter a noop input with α=0.6. | | | | | --- | --- | --- | | (1) Evaluation of real humans on Lunar Lander. Success and crash rates averaged over 30 episodes for teams with human pilots. (2,3) Trajectories followed by real humans with and without a copilot on Lunar Lander. Red trajectories end in a crash or out of bounds, green in success, and gray in neither. The landing pad is marked by a star. For the sake of illustration, we only show data for a landing site on the left boundary. | (1) Evaluation of real humans on Lunar Lander. Success and crash rates averaged over 30 episodes for teams with human pilots. (2,3) Trajectories followed by real humans with and without a copilot on Lunar Lander. Red trajectories end in a crash or out of bounds, green in success, and gray in neither. The landing pad is marked by a star. For the sake of illustration, we only show data for a landing site on the left boundary. 
Fig. 3: (1) Evaluation of real humans on Lunar Lander. Success and crash rates averaged over 30 episodes for teams with human pilots. (2,3) Trajectories followed by real humans with and without a copilot on Lunar Lander. Red trajectories end in a crash or out of bounds, green in success, and gray in neither. The landing pad is marked by a star. For the sake of illustration, we only show data for a landing site on the left boundary.

Analysis. Figure 3 shows a clear quantitative and qualitative benefit to combining a real human pilot with a copilot. Humans alone follow a tortuous path with sudden drops and difficult course corrections, leading to fewer successes and more crashes. With a copilot, the human follows a smooth, gradual descent to the landing site, leading to significantly more successes and significantly fewer crashes than without a copilot for each of the participants. We ran a repeated-measures ANOVA with the presence of the copilot as a factor influencing success and crash rates, and found F(1,11)=165.0001, p<0.0001 for the success rate and F(1,11)=259.9992, p<0.0001 for the crash rate. The combined human pilot-copilot team succeeds significantly more often than the solo copilot, at the expense of crashing significantly more often. For each of the participants, we ran a binomial test comparing their success rate and crash rate in the combined pilot-copilot team to those of the solo copilot, and found p<0.01 for all comparisons. The subjective evaluations generally suggest that users benefited from the copilot. The assistive system was particularly helpful in avoiding crashes, but was perceived as somewhat inconsistent in its behavior and too aggressive in stabilizing flight at the expense of slowing the lander’s descent.

VII User Study with a Physical Robot: Quadrotor Perching
---------------------------------------------------------

Fig. 4: (1) Evaluation of real humans on the quadrotor perching task. Success and crash rates averaged over 20 episodes. (2,3) A bird’s-eye view of trajectories followed by real humans with and without a copilot. Red trajectories end in a crash or out of bounds, green in success, and gray in neither. The landing pad is marked by a star.
One of the drawbacks of analyzing Lunar Lander is that the game interface and physics do not reflect the complexity and unpredictability of a real-world robotic shared autonomy task. To evaluate our method in a more realistic environment, we formulate a “perching” task for a real human flying a real quadrotor: land the vehicle on a level, square landing pad at some distance from the initial take-off position, such that the drone’s first-person camera is pointed at a specific object in the drone’s surroundings, without flying out of bounds or running out of time. Perching a drone at an arbitrary vantage point enables it to be used as a mobile security camera for surveillance applications. Humans find it challenging to simultaneously point the camera at the desired scene and navigate to the precise location of a feasible landing pad under time constraints. An assistive copilot has little trouble navigating to and landing on the landing pad, but does not know where to point the camera because it does not know what the human wants to observe after landing. Together, the human can focus on pointing the camera and the copilot can focus on landing precisely on the landing pad.

![The quadrotor perching task involves a human pilot landing a drone on a small pad while pointing its camera at a random object.](https://media.arxiv-vanity.com/render-output/7360534/quad-fig.jpg)

Fig. 5: The quadrotor perching task involves a human pilot landing a drone on a small pad while pointing its camera at a random object.

Robot task. Figure 5 illustrates the experimental setup. We fly the Parrot AR-Drone 2 in an indoor flight room equipped with a Vicon motion capture system to measure the position and orientation of the drone as well as the position of the landing pad. Users are only allowed to look through the drone’s first-person camera to navigate, and are blocked from getting a third-person view of the drone. Each episode lasts at most 30 seconds. An episode begins when the drone finishes taking off. An episode ends when the drone lands, flies out of bounds, or time runs out. The action space A consists of 18 discrete actions that correspond to moving left, right, forward, back, descending, or hovering in place and simultaneously rotating (yawing) clockwise, counter-clockwise, or not rotating. The state s ∈ ℝ^10 is a ten-dimensional vector that encodes the vehicle’s position, velocity, angle, angular velocity, and the horizontal components of the difference between the landing pad position and the vehicle’s position. At the beginning of each episode, the starting position and orientation of the drone are randomized and the user is told that their goal is to point the camera at an object selected randomly from a set of four in the vicinity: a red chair, a gray chair, white styrofoam boards, or a door. The agent’s state does not include this target orientation, which is necessary for success. Success is defined as landing on the pad (evaluated automatically using motion tracking) while orienting the camera at the correct object, which is evaluated by the human experimenter with a button press at the end of the episode.
Crashing is defined as landing outside the landing pad or going out of bounds. As before, the agent uses a multi-layer perceptron with two hidden layers of 64 units each to approximate the Q function Q̂ : S×A^2 → ℝ. The action-similarity function f(a, a_h) in the agent’s behavior policy counts the number of dimensions in which actions a and a_h agree (e.g., f((left, rotate clockwise), (left, rotate counter-clockwise)) = 1). As discussed earlier in Section IV, the agent’s reward function is composed of a hard-coded function Rgeneral and a user-generated signal Rfeedback. Rgeneral penalizes distance from the landing pad, since moving toward the pad is generally useful to all pilots regardless of their desired camera orientation. Rfeedback emits a large positive reward at the end of the episode if the task was completed successfully, or a large negative reward in the event of a crash.

Manipulated variables. We manipulate the pilot-copilot team membership as before.

Dependent measures. Performance is measured using the dependent factors of success rate and crash rate. As before, we ask participants about their experience (see Table 2 in the supplementary material) to help us understand their perception of the copilot.

Hypothesis. We hypothesize that a pilot-copilot team with a real human pilot will perform better on the quadrotor perching task than a solo human pilot or a solo copilot.

Subject allocation. We recruited 4 participants (3 male, 1 female) with an average age of 23. Each participant was provided with the rules of the task and a short practice period of 2 episodes to become familiar with the controls and dynamics. To avoid the confounding effect of humans learning to fly better over time, we counterbalanced the order of the two conditions that required human play (solo pilot and assisted pilot). Each condition lasted 20 episodes. To speed up learning, the copilot was pretrained in simulation without a pilot in the loop and then fine-tuned on data collected from the human pilot. The pretraining simulation assumed an idealized physics model in which the drone is a point mass, there are no external forces, linear velocity commands are executed without any noise, and sensors have zero measurement error. In the pretraining simulation, a target angle (yaw) is randomly sampled for each episode to simulate the random choice of a target object for the camera in the real world. As before, the agent cannot directly access this target angle through its state. With a real human pilot in the real world, the pilot tolerance was set to α=0 when the human was not pressing any keys, and otherwise set to α=1.

Analysis. Figure 4 shows a clear quantitative and qualitative benefit to combining a real human pilot with a copilot. Humans alone are rarely able to reach the landing pad, leading to fewer successes and more crashes. With a copilot, the human consistently reaches the landing pad, leading to significantly more successes and significantly fewer crashes than without a copilot. As before, we ran a repeated-measures ANOVA with the presence of the copilot as a factor influencing success and crash rates, and found F(1,3)=44.1045, p<0.01 for the success rate and F(1,3)=62.3151, p<0.01 for the crash rate.
The combined human pilot-copilot team succeeds significantly more often than the solo copilot, at the expense of crashing significantly more often. For each of the participants, we ran a binomial test comparing their success rate and crash rate in the combined pilot-copilot team to those of the solo copilot, and found p<0.01 for all comparisons. Note that although the p-values for these hypothesis tests indicate statistical significance, our sample size of n=4 participants is still relatively small. The subjective evaluations in Table 2 of the supplementary material generally suggest that users benefited from the copilot.

VIII Discussion
----------------

In this paper, we contribute an algorithm for shared autonomy that uses model-free reinforcement learning to help human users in tasks where the dynamics, the user's policy, and the goal representation are all unknown. Our user studies with a virtual agent and a real robot suggest that this method can indeed be effective at improving user performance. Several weaknesses and open questions remain to be addressed. Inferring user intent in general will require memory; several existing techniques may provide it, including concatenating the m previous frames with the current observation or adding recurrent connections to the copilot policy architecture, as in [[10](#bib.bib10)]. Finally, users will adapt to the robot's interface, and explicitly capturing this adaptation may improve copilot training and inform theoretical guarantees on convergence [[22](#bib.bib22)].
ec389538-931f-4fec-a9c9-58beb3d5e5b0
trentmkelly/LessWrong-43k
LessWrong
Group-level Consequences of Psychological Problems In a previous post, we distinguished psychological disorders (massive life-shattering issues like some forms of schizophrenia and drug addiction) from psychological problems (more minor issues like being anger-driven or avoiding parties). And we highlighted that although disorders are obviously important to address, neglecting psychological problems comes at a cost for our ability to optimize. What are the consequences of this model for groups, particularly in terms of norms? First, let’s avoid reification bias here: we’re not looking for an inherent notion of bad or good norms, of moral rightness of groups and things like that. Instead we’re interested in incentive structures, and their consequences for the performance and health of the group. So what happens if members of a group don’t address their psychological problems, and the group doesn’t incentivize dealing with these?
* Generation of new psychological problems
  * Example: Person A doesn’t like working in the group office. Yet they are still required to do so by group norms to boost productivity and collaboration. However, if they have a health issue (being tired, feeling a bit sick today, etc.), now from the productivity perspective letting them work from home is better. This creates incentives to find themselves in situations where they say and feel that they are too tired, for example:
    * Lie about being tired.
    * Pay too much attention to their tiredness, as it gives them an excuse (therefore making them actually feel more tired!)
    * Trick themselves into thinking they are tired. If they have feelings that are close to tiredness, and they’re unsure of how to interpret them, they’ll feel a slight pull toward tiredness.
    * Sleep late, as they can genuinely say the next day that they are tired and get to work from home.
* Generation of friction and constraints
  * Example: Person A suffers from anxieties about publishing, although they’re the best person to wri
d70f9af8-9886-4bfd-847b-79e1f8f01209
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
GPT 4 is Smarter than You Think: Introducing SmartGPT I have three goals for this video first I want to show you a way of using gpt4 to get smarter results second I want to argue that The Benchmark results we have for GT4 do not reflect its full abilities and third I want to show you a system that I am developing somewhat cheekily called smart GPT that is already showing significant results on official benchmarks it remains to be fully optimized which I think is exciting in itself I have shown the system to people at openai who have been quite impressed and I'm going to end with some Reflections on where that might leave us for gpt5 but before I get into how it works I just want to show you one example of it in action to wet your appetite this example comes from a TED Talk that was released this week so suppose I left to five close to dry out in the sun and it took them five hours to dry completely how long would it take to dry 30 clothes GPT 4 the newest greatest AI system says 30 hours not good on the left you can see gpt4's original answer and it gives this answer pretty consistently whenever you prompt it with the question provided on the right you can see the final answer from the smart GPT model which is correct and it consistently gives that answer I really like how it gives context as well and it provides some of the assumptions that it had in giving this correct answer now don't you worry there will be plenty more examples to go through in this video including another one from that Ted talk but first I want to give you an overview of what is this smart GPT model where did I get my inspiration for it from and how does it work I'm going to keep it fairly simple because it's the beginning of the video and I know a lot of people won't really care about the inner details that will come later in the video but the high level overview is this there are at least three things that have been proven to improve the outputs of gpc4 what's called Chain of Thought prompting sometimes called step-by-step prompting reflection or finding its own errors and I did an entire video on this called Gypsy 4 can self-improve and dialoguing with itself entering into a back and forth on its own outputs and deciding which one is best you can see the title of the papers which contain much more detailed results of course linked above now the first paper only came out a few days ago Midway through my testing so my results don't even reflect the full capacity of the model and even if there's nothing else you take from this video the results from this paper can instantly improve the outputs you get from gpt4 many of you might remember that prompting GT4 with let's think step by step improves its results to give you a very quick reference point just asking a question to Gypsy 4 gives you 81 accuracy with that prompt let's think step by step it goes up to 86 but algorithmically the paper found an improved prompt that can give you even better results 89 accuracy all we do and this is the first part of smart GPT is we add answer let's work this out in a step-by-step way to be sure we have the right answer now I have so much to say about why I think this works but I know many of you won't be that interested in my theories so I'm going to save them to the end for those who are interested some of you just want the results so I'm going to get to those first so far you might be thinking well thanks Philip that's a cool prompt I'm going to use that but what's this whole smart GPT about is it just a single prompt no I believe 
with evidence there are ways of leveraging even better results than just using a great Chain of Thought prompt so let's move on to the next part of the system these different outputs in the middle for my tests I typically did three outputs but of course depending on the context window it could be far more than that and I'm going to talk about ways I could further improve this model or we could later on in the video just to restate these outputs are when you take the user input and add the word question at the start and then at the end add answer let's work this out in a step-by-step way to make sure we have the right answer at this moment many of you are thinking what is the point of multiple outputs it's gpt4 it's just going to give you the answer it thinks is best and that's it well actually it doesn't quite work like that these models have a temperature between zero and one I believe the default 4gbt4 might be around 0.5 and simplifying massively this determines how creative or conservative the model is in giving its outputs so given that gpt4 tries to be fairly creative you don't get the same output every time the output is randomly sampled according to an internal probability distribution so you can get situations and I face this hundreds of times where some of the outputs are correct and others are incorrect and this is where reflection comes in sometimes definitely not always but sometimes quite often gbt4 can detect the errors in its own output and many of you will notice at this point that the prompt that I used to elicit gpt4 to spot its own errors contains the same step-by-step prompt I used earlier that has been shown to produce good results so to summarize sometimes at this stage gpt4 detects the errors that some of its outputs have made definitely not always there are certain questions it just simply can't spot the error but sometimes it can and then I get it to engage in a dialogue using a format similar to one in this paper published last month it's a short dialogue and this is the step I believe that can be most optimized in the future I Envision an entire Council of advisors made up of gpc4 imitating mathematicians judges Etc at the moment it's just being a resolver and printing a final improved output anyway I'm going to get back to the theory later in the video because I know some of you will be getting bored at this stage and want to see more practical examples and the results from my Benchmark tests as I don't have the gpt4 API key yes I had to manually input each of these steps hundreds of times waiting sometimes three hours between each go because you can only do 25 messages every three hours on the left you can see the three outputs when you ask it to think step by step and then you have the researcher step in the middle and at the top right and finally the resolver step notice here I was using the original let's think step by step because the paper hadn't yet been published on improving that prompt but it's time for the second example from that Ted Talk and then I definitely will get onto the benchmarks a different one I have 12 liter Jog and 60 liter joke and I want to measure six letters how do I do it just use the six liter job right gpt4 spits out some very elaborate nonsense foreign of course I tested smart GPT with that question and you can see the difference between the original Gypsy 4 which gives this incredibly convoluted bad answer and smart GPT The Final Answer output now at this point I know many of you will be impressed but you'll be thinking I don't 
have time to input things five times well I'm developing a model where it can all be done automatically here is a preview of how it works but of course at the moment it has to use GPT 3.5 turbo because I don't have the API key of gpd4 the epic thing is this you just ask a single question I've written ask smartgpt a question and of course it does take a little bit longer to respond because it's doing five or six calls via API but it does output the final answer from the resolver step I will be honest and say that GPT 3.5 isn't as good at reflecting or resolving but this is an example of a question where the original chat gbt consistently gets it wrong and smart GPT 3.5 gets it right using this program remember all you have to do as a user is type in a question as normal and it goes through this entire five or six step process behind the scenes by the way this was a question from mmlu which is a famous Benchmark which I'll get to in a second here's one last practical example before I get to that Benchmark I know many teachers use chat GPT and GT4 to create quizzes for their classes and here is the same question put through Gypsy 4 and smart gbt the question is create a high school algebra quiz with five questions and answers and explanations at the end now points for spotting the difference but if the teacher had handed out the original quiz look at the answers for question five it says the answers are 1 and 1.5 but then in the explanation it gives the final answers which are correct by the way of three and 0.5 so that would really confuse some students at the reflection stage smartgbts spotted that error and resolved it and as you can see the answer for question five has the correct answers straight away if at any point you're wondering if I completed the open AI chat topt prompt engineering course the answer is yes but it didn't inform too much of my thinking it was more for beginners and I had already factored in things like giving the model time to think and writing clear instructions The Benchmark that I chose to test smartgbt on was the famous mmlu massive multitask language understanding Benchmark as you can see the state-of-the-art is indeed gpt4 with 86.4 accuracy and you know open AI think it's a big deal because it's the Benchmark mentioned on the front page of their technical report without boring you too much I extracted the questions from the test set of the mmlu data file and I didn't pick the topics at random I went for those that I thought gpt4 would find the hardest delving into the original mmlu paper you can see that Gypsy 3 found formal Logic the hardest scoring just over 25 which is random chance it's a four question multiple choice test so around 25 or 30 is pretty bad and notice they helped out Gypsy 3 here they did it few shot meaning they gave it five successful examples before asking it a new question it's the same thing they did with Gypsy 4 they did it five shot but just before I show you the results there are three things I want to mention here first I was curious how smart gbt would do without any help zero shot second I wanted to do it zero shot because people using gpd4 don't typically give five successful examples before asking Gypsy for a question they just want code or a quiz or a poem or an example they don't often provide five brilliant examples of code before asking their question and third if I can prove it works zero shot then of course future refinements can be made to push the results even further and here are the results from the first 25 questions 
from the formal logic test set of the mmlu I did many more tests after this but you can see from this set if you just to ask the question you get a lower overall accuracy but of course 68 for GT4 is still a huge improvement over gpd3s around 25 what happens when you add let's think step by step which as we know now isn't the fully optimized Chain of Thought prompt well on average you get around 74 75 that was 75 examples inputted manually and I still have all the tabs open I'm keeping them open because I'm compiling a spreadsheet with the actual outputs but what did the resolver get drawing upon gt4's ability to reflect and engage in dialogue with itself it got 84 now notice something about that number gpc4 zero short got 32 of the questions wrong that was half to 16 after putting it through the smart GPT system there was one question where the resolver model gave both a correct and incorrect answer but I'm counting that as an incorrect answer for the purposes of this test anyway from 30 22 to 16 incorrect that is a pattern that stayed consistent throughout all my testing that approximately half of the errors that gpt4 makes can be rectified if you give it the optimized step-by-step prompt get it to reflect on its results and get it to engage in dialogue and decide on a final answer at this point for those people losing track of all the details I want to put into context what resolving half of the errors on mmlu might mean in the context of the big picture here's Leonard Heim an AI governance researcher suggesting a score of 95 on the mmlu would be reflective of agi-like abilities I do think I have like a 50 chance like within the next 20 years or so there might be something will be my call in AGI or a transformative AI what do I mean by this well maybe we can measure it on benchmarks there's like this famous mmau benchmarks like yeah there's something which like scores like 95 on this going back to the results if a smart gpt-like system can automatically resolve half of the errors that gpt4 makes on the mmlu that would increase its score from around 86.4 to around 93 which is not far off 95 remember his prediction was a 50 chance in 20 years I'm talking about g54 now for those who are still skeptical I'm going to show you plenty more results now and then walk through the papers that give the theory as to why this works one thing that I forgot to mention earlier is that the human expert level on the mmlu is 89.8 and that's taking the 95th percentile of human test takers and remember those are domain experts in each of these subtopics what we're doing is testing Gypsy 4 or smart GPT on all of the topics simultaneously so even if smart gbt like systems can't quite reach 95 and I think honestly they'll get pretty close with all the requirements I'm going to suggest I think they should almost certainly be 89.8 which is the human expert test taker level intrigued by these results I then put it through the college math test from the mmlu and remember this was before using the optimized version of the step-by-step prompt obviously I'm not going to go through all the questions here but let's skip to the final results we have zero shot accuracy 6 out of 15 which is 40 the average when you add let's think step by step was 53.5 and then the final output of the resolver model had a 60 accuracy so it couldn't quite resolve half of the errors but the overall pattern held up in case anyone is wondering about methodology I kept the formatting identical for every question I always opened a new tab for each 
question it wasn't looking at the context of what it had already put out each attempt was fresh aside from the resolver model which looked at the context of the researchers output and again as you can see from example 14 it wasn't like the researcher could always spot the errors or that the resolver could always pick the right option sometimes the let's think step-by-step prompt gave the right output but the resolver couldn't quite distinguish it the optimized prompt gets a slightly better output and upon reflection the researcher can sometimes but not always spot the errors of those outputs and sometimes but not always the resolver can swap based on those flaws which answer is best these are incremental improvements sometimes GT4 simply can't get it right I have noticed a few themes in those questions anytime it comes to division multiplication characters or counting in general deept4 tends to make mistakes that neither the researcher nor resolver can spot of course integrating a few tools via API would likely solve those issues and I don't want to preempt the conclusion to too much but I believe a smart gpt-like system with tools integrated could probably score around 95 right now on the mmlu especially if it was helped out with a few shot prompting to add weight to that preliminary conclusion I tested it on certain topics and had to stop because it simply got the questions right every single time for example High School psychology on the mmlu I then tried pre-history which it also aced before finding machine learning where I got more interesting results zooming in this time the raw score was 65 The Chain of Thought let's think step-by-step average was 71.6 and the resolver model got 80 let's now look a little deeper into why all of these steps might improve the end result in reply to the original let's think step-by-step paper which was published around a year ago Andrei carpathy said this adding something like let's think step by step to the prompt is a way of using the input space for computation that you'd normally want in the hidden state of the model instead of the workings out being done in the activations of the neural network it's done in the discrete tokens of that input space and he adds did not super see this coming and here is the paper released three days ago that improves upon that original prompt they also did their testing zero shot like me and they tested many prompts starting like I did with just direct prompting just asking the question like 99 of users would do of gypsy 4. 
and then they tried like me the well-established let's think step-by-step prompt they also iteratively tested Seven original prompts as well as the prompt that I've now integrated into smart GPT the let's work this out in a step-by-step way Etc they share my opinion that zero shot prompting setups have the benefit of not requiring such task dependent selection of exemplars you don't have to find correct examples it just does it all for you here are the end results for gpd4 that we saw earlier showing the difference between asking directly your question and using these refined prompts notice that this technique is somewhat model dependent and it doesn't have the same effect on smaller or weaker models before we move on to the next paper there is one somewhat failed prompt that I want to pick up on it's this self-critique prompt where they ask answer the question then critique the answer based on the critique we consider the other answer options and give a single final answer and you might wonder why didn't that prompt perform best when we know that reflection and dialogue can work my theory is because it's trying to do all of it in one prompt through my hundreds of experiments I've noticed that gpt4 can only handle so much in one go it simply gets overwhelmed or confused if you ask it to do too much in one prompt that's why I broke my model into stages to allow it to show off each of its abilities one by one and before we get to the other papers what's my personal Theory as to why this eliminates up to half of the errors that gpt4 makes well my guess is this remember that gbt4 is drawing on a vast data set of internet text and let me ask you what kind of text has things like question answer let's work this out be sure we have the right answer the kind of data that would have that text would be things like tutorials or expert breakdowns so I believe you're triggering more of the weights inside gpt4 that relate to things like expert tutorials and so inevitably you're getting slightly better answers next I've already explained why you'd get different outputs when you give the exact same prompt that's down to sampling and the temperature of the model but to simplify massively sometimes Gypsy 4 will give you an output that it knows isn't the most probable it introduces some Randomness into its sampling by generating multiple outputs you're getting a larger sample size reflecting the full range of probabilities that gpd4 subscribes to its outputs you're reducing a little bit some of the randomness that's inherent in gpd4 outputs next I believe that gpc4 can sometimes spot its own errors through reflection because prompting like this triggers a different set of Weights you could almost think of it as a different mindset one more focused on finding errors again if the question is too hard or involves counting characters division multiplication as I said earlier this won't help but a percentage of the time it can spot its own errors and point them out notice this is a separate bit of inference not lumped into the original prompt and when it does successfully point out the errors it can often engage in this dialogue with itself notice in a meta kind of way I'm using the step-by-step prompting to improve the reflection and dialogue so those are my theories as to why it works but at the end of the video I'm going to show you at least five ways I think the model can be further refined before we do though I looked up the paper by Zhou which produced that prompt that did the best in the previous paper 
they came to that special prompt through automatic prompt engineering but there's something interesting I want to point out though on page seven they say we use automatic prompt engineering to find a prompt starting with let's that maximizes the likelihood of correct reasoning steps then they found the best one that I integrated into smart GPT let's work this out in a step-by-step way to be sure we have the right answer that's the one I want you to use and they ran their own benchmarks and of course it did improve the scores but the interesting thing to me is they started with let's each time so even that first stage for the model might not yet be fully optimized maybe there's a prompt that doesn't begin with let's that improves this initial result still further anyway back to the papers I know many people watching this will wonder if I read the paper boosting theory of Mind performance in large language models via prompting and yes I did because they tested something similar for a theory of Mind test using similar techniques they were able to get theory of Mind accuracy for GPS 4 from 80 to 100 and they conclude that these results demonstrate that appropriate prompting enhances large language model theory of Mind reasoning and they underscore the context-dependent nature of these models cognitive capacities they use that original prompt let's think step by step along with some few short examples take a look at the gpt4 table and you can see how the let's think step by step improved the results dramatically and as I theorized earlier adding few short examples would push this still further this is part of why I think that 95 barrier on the mmlu will be broken probably this year by gpt4 a few other points from this paper they admit that there is not currently a theoretical understanding of why these prompting techniques are beneficial I've given you my theory and carpathies but no one quite knows for sure lastly from this paper and I found this really interesting giving it generic few shot prompts that weren't directly there theory of Mind actually improve the outputs slightly more than giving it direct theory of Mind examples this opens the door to the first of the five ways I anticipate smart GPT getting even smarter it could be possible to come up with generic few shot prompts that could be automatically integrated into the model that don't necessarily relate to the topic at hand this graph shows the impact of adding few short examples to gc3 and if this can be done in a generic way for gpd4 results could be improved still further next the boosting theory of mine paper speculates that integrating some of these approaches could boost the performance of weaker models to beyond their levels of GPT 4's zero shot accuracy next here is the original Dira paper that inspired me to have the researcher and resolver dialogue at the end of smart GPT as they say the dearer approach shows significant improvement over base gpc4 performance and these were open-ended questions by the way not multiple choice so this is more generally applicable than you might think you can see from this table how results improved after engaging in this dialogue and that brings me to the second way I anticipate smart GPT getting smarter in the future a longer and more Rich dialogue at the moment we have this simple research and resolver two-step dialogue I can imagine a council of advisors you can imagine a mathematician chipping in and a philosopher and a professor each one tapping into slightly different weights of gpd4 
extracting more hidden expertise I'm not saying that would transform the results but it might Edge them another few percent higher next even with longer dialogues and different experts we could find ways of optimizing these prompts just like we did with the original let's think step by step that's the Third Avenue of improvement that I envisage because I came up with these prompts I'm sure they could be improved next we could experiment with different temperatures remember a lower temperature makes the model more conservative relative a Higher One towards one makes it more creative we could experiment with a higher temperature to produce a more diverse range of outputs at this stage and then perhaps a more conservative deterministic temperature for the final judge or resolver it might not work but it's worth trying and the fifth Improvement I know would work integrating apis for character counting calculators code interpreters Etc spending these weeks manually sorting through the outputs of GT4 on these benchmarks I can really see where it goes wrong and it's often by getting letters in the wrong order or making mistakes with division it gets the high level logic right and then makes quite simple errors basic tool integration would I am sure push the results still higher now I know this isn't my usual video and trust me I have been following the AI news and we'll get back to that very soon I'm determined to make those improvements and push smart gbt even further but of course that would be aided massively by getting access to to the plugins and the gpt4 API key so far I've had to do all of this manually which was a lot of work now as you saw earlier I have drawn on gpt4 to help me develop a program in replit to automate this process but at the moment it's GPT 3.5 and honestly the context window really limits the ability but I do look forward to the day when I can integrate gpt4 and put this out as an automatic model for people to test and play about with I'm sure that something similar will ultimately be incorporated by openai itself maybe as a thoughtful mode or smart mode a bit like Bing has creative precise balance Etc each response does take longer but as you've seen the outputs are noticeably better if the results of models like this one do officially exceed the 86.4 that openai talked about in the gpt4 technical War I do think that would reveal quite a few things first the openai isn't even aware of the full capabilities of its own model I don't even know if they anticipated things like Auto GPT I do think it would reveal that they need to do far more proper testing of their models before they release them they should make falsifiable predictions about what their models won't be capable of that way we would know just how much they know about their own models what we're trying to avoid is a situation where open AI say their model can only achieve X and then when they release the model in the wild someone comes along and achieves why where Y is much more impactful than x so those were the goals of this video to show you how to get more out of GT4 to run you through some of the fascinating papers that have been released in the last few days and weeks the third goal was to show you what this model could do with some official benchmarks and suggest ways it might get better in the near-term future of course if you have a gc4 API key or are an expert in benchmarking systems like gpc4 I'd love to hear from you I guess the final goal was to perhaps suggest to you that openai don't know as 
much about their own models as they might lead you to believe thank you so much for watching to the end and have a wonderful day
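For readers who want to try the pipeline described in this transcript, here is a minimal sketch of its three stages: sampled step-by-step drafts, a "researcher" critique pass, and a "resolver" pass. The `ask_llm` helper is a placeholder for whatever chat-model API you use, and the researcher/resolver prompts are paraphrases rather than the exact prompts from the video:

```python
# Minimal sketch of a SmartGPT-style pipeline: sample several chain-of-thought
# answers, have the model critique them, then have a resolver produce a final answer.
# `ask_llm(prompt, temperature)` is a placeholder you must implement with your own
# chat-model client; nothing here assumes a specific vendor API.

STEP_BY_STEP = (
    "Answer: Let's work this out in a step by step way to be sure we have the right answer."
)

def smart_answer(question, ask_llm, num_samples=3):
    # 1. Sample several drafts; a non-zero temperature yields diverse outputs.
    prompt = f"Question: {question}\n{STEP_BY_STEP}"
    drafts = [ask_llm(prompt, temperature=0.7) for _ in range(num_samples)]
    options = "\n\n".join(f"Answer option {i + 1}:\n{d}" for i, d in enumerate(drafts))

    # 2. "Researcher" pass: look for flaws in each draft.
    critique = ask_llm(
        f"Question: {question}\n\n{options}\n\n"
        "You are a researcher investigating the answer options above. "
        "List the flaws and faulty logic in each option. "
        "Let's work this out in a step by step way to be sure we have all the errors.",
        temperature=0.3,
    )

    # 3. "Resolver" pass: pick the best option, improve it, and print the final answer.
    return ask_llm(
        f"Question: {question}\n\n{options}\n\nCritique:\n{critique}\n\n"
        "You are a resolver. Decide which answer option is best, improve it, "
        "and print the improved answer in full.",
        temperature=0.3,
    )
```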
b2a5d519-6e48-4fe6-a382-5164847cda12
StampyAI/alignment-research-dataset/blogs
Blogs
Embedded Curiosities This is the conclusion of the **[Embedded Agency](https://intelligence.org/embedded-agency)** series. Previous posts:   [Embedded Agents](https://intelligence.org/embedded-agents)  —  [Decision Theory](https://intelligence.org/embedded-decisions)  —  [Embedded World-Models](https://intelligence.org/embedded-models) [Robust Delegation](https://intelligence.org/embedded-delegation)  —  [Subsystem Alignment](https://intelligence.org/embedded-subsystems)   ---   A final word on curiosity, and intellectual puzzles: I described an embedded agent, Emmy, and said that I don’t understand how she evaluates her options, models the world, models herself, or decomposes and solves problems. In the past, when researchers have talked about motivations for working on problems like these, they’ve generally focused on the motivation from [AI risk](https://intelligence.org/2017/04/12/ensuring/). AI researchers want to build machines that can solve problems in the general-purpose fashion of a human, and [dualism](https://intelligence.org/embedded-agents#3) is not a realistic framework for thinking about such systems. In particular, it’s an approximation that’s especially prone to breaking down as AI systems get smarter. When people figure out how to build general AI systems, we want those researchers to be in a better position to understand their systems, analyze their internal properties, and be confident in their future behavior. This is the motivation for most researchers today who are working on things like updateless decision theory and subsystem alignment. We care about basic conceptual puzzles which we think we need to figure out in order to achieve confidence in future AI systems, and not have to rely quite so much on brute-force search or trial and error. But the arguments for why we may or may not need particular conceptual insights in AI are pretty long. I haven’t tried to wade into the details of that debate here. Instead, I’ve been discussing a particular set of research directions as an *intellectual puzzle*, and not as an instrumental strategy. One downside of discussing these problems as instrumental strategies is that it can lead to some misunderstandings about *why* we think this kind of work is so important. With the “instrumental strategies” lens, it’s tempting to draw a direct line from a given research problem to a given safety concern. But it’s not that I’m imagining real-world embedded systems being “too Bayesian” and this somehow causing problems, if we don’t figure out what’s wrong with current models of rational agency. It’s certainly not that I’m imagining future AI systems being written in second-order logic! In most cases, I’m not trying at all to draw direct lines between research problems and [specific AI failure modes](https://intelligence.org/2018/10/03/rocket-alignment/). What I’m instead thinking about is this: We sure do seem to be working with the wrong basic concepts today when we try to think about what agency is, as seen by the fact that these concepts don’t transfer well to the more realistic embedded framework. If AI developers in the future are *still* working with these confused and incomplete basic concepts as they try to actually build powerful real-world optimizers, that seems like a bad position to be in. And it seems like the research community is unlikely to figure most of this out by default in the course of just trying to develop more capable systems. 
Evolution certainly figured out how to build human brains without “understanding” any of this, via brute-force search. Embedded agency is my way of trying to point at what I think is a very important and central place where I feel confused, and where I think future researchers risk running into confusions too. There’s also a lot of excellent AI alignment research that’s being done with an eye toward more direct applications; but I think of that safety research as having a different type signature than the puzzles I’ve talked about here. --- Intellectual curiosity isn’t the ultimate reason we privilege these research directions. But there are some *practical* advantages to orienting toward research questions from a place of curiosity at times, as opposed to *only applying the “practical impact” lens* to how we think about the world. When we apply the curiosity lens to the world, we orient toward the sources of confusion preventing us from seeing clearly; the blank spots in our map, the flaws in our lens. It encourages re-checking assumptions and attending to blind spots, which is helpful as a psychological counterpoint to our “instrumental strategy” lens—the latter being more vulnerable to the urge to lean on whatever shaky premises we have on hand so we can get to more solidity and closure in our early thinking. *Embedded agency* is an organizing theme behind most, if not all, of our big curiosities. It seems like a central mystery underlying many concrete difficulties.   The post [Embedded Curiosities](https://intelligence.org/2018/11/08/embedded-curiosities/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
96b40581-95f7-47c3-906c-57f045044428
trentmkelly/LessWrong-43k
LessWrong
Elementary Statistics Our elementary school has a directory listing kids and parents, and since we live in the future it's a spreadsheet, which means I can count things. A typical family at this K-5 school has one child enrolled (76%). The child has two parents (96%) with different last names (59%), but they share a last name with at least one parent (89%). The parents don't share email addresses (95%) or phone numbers (97%), do use gmail (72%), and do have Boston area codes (68%). Our family is in the majority for each of these, even though there's naively only an 18% chance of that happening and they seem reasonably independent. It's surprising to me that while parents mostly don't have the same names as each other (59%), only 11% of their kids have hyphenated names. I guess people realized that hyphenated names grow exponentially? I'd like to look at how the children's last names relate to parental gender, but that would involve annotating inferred genders for ~500 parents. Boring details:
* There are 233 kids across six grades (K-5) with two classes per grade, for an average of 19 kids per class. Kindergarten is the biggest (45 kids, 22 and 23 each) while second grade is the smallest (30 kids, 15 and 15 each).
* 224/233 (96%) have two parents listed.
* Of the kids with two parents listed, 91/224 (41%) have the same last name as each other.
* 207/233 (89%) of kids have the same name as at least one of their parents.
* Of the 26 kids who have a different name, 15 (58%) are hyphenations, 7 (27%) have no obvious connection, 3 (12%) list only one parent and may share a last name with the other, and 1 (4%) is an unhyphenation (Sam Alpha-Bravo and Pat Charlie, with child Alex Bravo).
* 141 (76%) of families have one child in the school, 43 (23%) have two, and 2 (1%) have three.
* All parents are listed with email addresses, but 12/224 (5%) have the same email listed for both parents.
* 255/355 (72%) of unique email addresses are gmail accounts, 35 (10%) are yahoo, 13 (4%) are hotmail,
7fb11f45-687e-415f-a8ae-6f0c5e2d8697
trentmkelly/LessWrong-43k
LessWrong
Rational Communication Hopefully LW trains its readers to be aware of personal failures of rationality and to correct them over time. Whenever a small group (all the way down to 2) of rationalists communicate, they should make an equal effort to be aware of failures of their collective rationality (I definitely have failed at this so far in life). In particular: time is valuable, and you can hope to actually learn things from other people. Here are some remarks about communication intended to help you optimize (as opposed to communication intended to help you be happy).
1. Communicate information precisely. I have strong beliefs that contradict the beliefs of other rationalists. I need a better way to express precisely what the disagreement is and my confidence in my beliefs, particularly with people I don't know well. Right now I feel like I have no hope of correctly updating on the beliefs of others or sharing information in a productive way. To this end, I think making precise testable claims about the future and backing them up with probability estimates (as a regular feature of discussion) would be a good idea, even when your probability estimates are horribly miscalibrated. This would help me much more quickly learn about my own miscalibration, evaluate the miscalibration of others, and at least have a definite scale to make precise communication possible.
2. Communicate preferences precisely. Group members often have strong preferences which they fail to coordinate well. I think groups I have been in would have done better if they consistently used any precise semantics for expressing preferences; for example, assigning monetary values or using some other fixed reference commodity. A bigger problem is that statements of preference carry way too much additional meaning in normal communication. I'm not sure how to divorce statements of preference from this other meaning, but it's worth thinking about.
3. Explicitly think about the point of communication. There are important things t
05fd6b86-b32c-41cc-8d1a-dfb26c2cae6c
trentmkelly/LessWrong-43k
LessWrong
Are there non-AI projects focused on defeating Moloch globally? Meditations on Moloch lays out a rather pessimistic view of the future, and then offers a super-intelligent AI "gardener" as the solution. A lot of the rationalist community is focused on AI, which makes sense in that light (and of course because of the existential risk of unaligned AI), but I don't know of any projects focused on non-AI solutions to countering or defeating Moloch. Some projects exist to counter specific local coordination problems, but apparently none to counter the global gardening problem in the original post? Am I missing such a project? Is there a reason that AI is the only plausible solution? Is this low-hanging fruit waiting to be picked? edited to add some clarifications: * By defeating Moloch "globally" I mean in the sense of the global race to the bottom - preventing humanity from "reaching the sea" in the metaphor from the original Meditations on Moloch (which itself is borrowed from the Apocrypha Discordia). This doesn't mean solving all local coordination problems forever, just preventing us from reaching the absolute worst case that Bostrom conjures of our own destruction, the "Disneyland with no children". * Yes, I've read Inadequate Equilibria.
bb0d6bdd-c1e7-4320-9173-bc6d6e9e3505
trentmkelly/LessWrong-43k
LessWrong
legged robot scaling laws Fiction has lots of giant walking robots. Those designs are generally considered impractical or impossible, but they've been discussed for thousands of years, there must be something appealing about them. So, let's consider exactly what's impractical about large walking robots and what properties they'd have if they could be made. practicality Suppose you have a humanoid robot that operates in a factory. It never needs to leave the factory, so it can just sit in a wheelchair, which means it doesn't need legs, thus reducing costs. (Or you could give it tracks.) Better yet, it could just stay one place on an assembly line, so you don't even need the wheels. And then maybe it only needs one arm, so you could just take the arm. Now you're down to 1/4 the limbs of the original robot, and the legs would've been heavier because they handle more weight. And then maybe the hand can be replaced with something much simpler, like a vacuum gripper or pincer. So the result of all the cost reduction is cheap, right? Not really; commercial robotic arms are fairly expensive. Industrial equipment does only what's necessary, and it's still expensive. A lot of people designing stuff don't really understand costs. Large-scale production of goods has been heavily optimized, and the costs are very different from what they are for individuals. I've seen chemists who develop a lab-scale process using something expensive like palladium catalyst and expect it to be a good idea for industrial plants. Making a giant humanoid robot wouldn't be practical, but that's part of the point. Going to the moon wasn't practical. Giant robots are difficult, so maybe they're good for developing technology and/or showing off how good the stuff you designed is.   scaling laws > Still, it is possible to make walking machines with hydraulics; they're just slow and inefficient. So, that only makes sense where movement speed and efficiency don't matter much, but it turns out that those are usually importan
611a08ee-fae7-4642-9c3b-3aa6f687d994
trentmkelly/LessWrong-43k
LessWrong
The entropy maxim for binary questions The entropy of a message (H) is roughly proportional to the uniformity of its probability distribution (P(x) for each possible x). If the message has just two possible values, H is greater exactly insofar as the split between their probabilities is close to 50/50.

| P(A) | P(B) | H (bits) |
| --- | --- | --- |
| 0.5 | 0.5 | 1 |
| 0.7 | 0.3 | 0.88 |
| 0.8 | 0.2 | 0.72 |
| 0.9 | 0.1 | 0.47 |
| 0.95 | 0.05 | 0.29 |

The entropy of a message is, intuitively, proportional to its information content. Thus you can learn more efficiently by seeking messages generated in higher-entropy ways. Assuming that people ask questions to get information, and that questions are strictly yes-or-no, or otherwise have two main answers (as in "which direction — east or west — leads to our destination?"), the best questions are those which separate options of roughly equal prior probability to the questioner. The prior probability does not necessarily match the intuitive (but kinda meaningless) "objective probability". E.g. with no specific information for the scenario, the "which direction" question is a 50/50 split, but if you're near the east coast of an island containing an otherwise-unknown destination, your priors should be biased in favour of "west". There are at least two uses of this maxim. You can use it yourself to guide your choice of questions. You can assume others follow it and, when they seem to violate it by asking weird binary questions, find that at least one of the maxim's assumptions is false:
* the questioner doesn't seek information (as in rhetorical questions)
* much of the questioner's response-probability goes to answers other than the main binary (as in the seemingly-binary "is the destination in place A, or place B?", when they really expect that they might get the longer answer "actually, C")
* the questioner doesn't know enough information theory
* the questioner's priors are very different from what you expect (as in "are you homo or hetero?", when they have information that favours the less common option)
Alas, I don't (yet) know
1cb10cd0-bfe5-4cbb-a0c7-21b5f30d6ab2
trentmkelly/LessWrong-43k
LessWrong
Tragedy of the free time commons Procrastination often seems a bit like an internal tragedy of the commons. In a tragedy of the commons, a group of people share something nice, like a shared pasture on which to raise their cows. If they together refrained from overusing it they would all benefit (e.g. because it doesn’t become a mud pit), but if any one person alone refrains (e.g. by having a smaller herd of cows), they expect to see little of the benefit themselves, and they probably expect someone else to use more resources (e.g. by adding an additional cow to another herd), so that there isn’t even a shared benefit to the group from the person’s selfless action. Suppose you have a paper due on Friday, and it is going to take 60 minutes to finish it. Think of yourself over the preceding day as 960 you-minutes. Each you-minute would much prefer the paper be done than not done the following morning, but would somewhat prefer to not work on it themselves. Because these time-slices make their decisions about whether to work one after another, and know what decisions were made in the past, the final N you-minutes in the day will definitely work, if there are N minutes of paper left to write. This means for you-minutes where there are fewer minutes of paper left to write than you-minutes left who might write them, working doesn’t help—it just relieves a later you-minute of working which would otherwise be forced to. And that is certainly worse for the you-minute deciding. So the 60 minutes of writing is done in the last 60 minutes. Which doesn’t destroy any value in this model, so all is good. But let’s make it more realistic. Suppose that there are better and worse times to work, and which are which is not known ahead of time. Working during a worse minute either produces less than a minute of work, or incurs other costs to the relevant you-minute (e.g. extra suffering). Then instead of everyone doing nothing until exactly sixty minutes before the deadline and then working full time after that, wor
8486b289-ae1c-4bfe-a5f4-44d20c5ce0f4
trentmkelly/LessWrong-43k
LessWrong
UK Predicted Grades
An issue arising in current UK politics is the issuing of high school grades for students who weren't able to take their exams this summer as a result of the COVID-19 pandemic. This could be an interesting experiment in Game Theory; how does one decide what grade a student should get? Should one decide at all?

Firstly, the main victims of this unfortunate position are those expecting to go to university in September, who are unable to delay by waiting to take exams later this year or next year. The current UK policy was to consider students' previous and predicted grades, along with the performance of their schools, to make an educated guess as to what the students would have achieved. The issue with this is that some schools overestimate what their students would have achieved, and their students' predicted grades, either honestly (through error) or dishonestly (to improve their performance records). This means those schools that are honest and correct have their students' grades moderated downwards, meaning many students are disappointed in their scores and may miss out on a place at university. Ultimately, if too many students aren't meeting their university admission offers, universities will have to lower their standards to meet their required course numbers.

Is there a similar situation in Game Theory in which one must guess the outcome of multiple games based on the correlation of past results? Is there a Game Theory solution to this problem?

More practically speaking, if universities make the ultimate decision to admit a student, what is the point in using an arbitrary guess, made by organisations with a conflict of interest, as a basis of admission? Should universities reconsider their process for this year's intake and allow all who have an offer into university, regardless of their "grades"? How would that affect universities?
8fd2af1e-9110-4869-8421-847ed86f8a25
trentmkelly/LessWrong-43k
LessWrong
Ideas for benchmarking LLM creativity
94f1a04b-7643-4c4c-abe7-42ed97458225
trentmkelly/LessWrong-43k
LessWrong
Provably Honest - A First Step
Introduction
I am new to AI Safety and this is my first lesswrong post (crossposted to EA Forum), so please feel free to correct my mistakes. This post is partly about my interpretation of existing alignment research and how I intend to tackle the problem, with the rest on provably honest AI systems.

My Take on Alignment Research
Any sort of interpretability research is model-dependent, and so is RL from Human Feedback, as it tries to optimize the reward model learned for the given task. I don't know how AIs will do alignment research themselves, but if they too try to do research to make individual models safe, then that too is model-dependent. Ignoring my scepticism about such AIs themselves needing superintelligence to do alignment research, I am otherwise optimistic about other approaches through which AI can assist us in aligning superintelligent AGIs.

In my view, the biggest challenge of alignment research is that we don't yet know through which methodology AGI will actually come into being, and yet we have to, we absolutely must, align that AGI before it even comes to exist; otherwise we might be too late to save the world. Note that here I don't intend to say that how we will attain generalization is the most challenging problem for alignment. I do think that how we will align an AGI is the challenging part, but what adds to the challenge is our uncertainty about what the model itself will look like once it attains superintelligence. Again, this is based on the assumption that we don't yet have definitive proof that the AGI will look like GPT-3 or AlphaZero; if we do and I am unaware, please do let me know.

The following image shows a spatial representation of my understanding of existing technical alignment research. It tells the story of how current research is model-dependent and how we are trying to make each model safe separately.

Figure 1: Model-dependent Alignment Research Pathway

My primary cause of concern about
abfeadc8-58f4-4d70-a8f2-ae286aeb588e
trentmkelly/LessWrong-43k
LessWrong
Announcing the Q1 2025 Long-Term Future Fund grant round
The Long-Term Future Fund is pleased to invite applications to our Q1 2025 funding round. We are seeking to support individuals and organizations working to help humanity navigate existential risks or working on projects improving humanity’s long-term flourishing.

→ Click here to apply ←

Deadline for applying is February 15th. We will respond to applications submitted before then by March 31st, and should have most funding disbursed by April 30th.

Why Grant Rounds?
After a period of accepting applications on a rolling basis, we're trialling returning to focused grant rounds. I am hoping this will:

* Allow us to recalibrate our funding thresholds given a rapidly changing giving environment
* Provide donors with clearer feedback on the impact of their contributions
* Offer applicants predictable response timelines

For particularly time-sensitive opportunities, we maintain a fast-track application process, though we expect the bar to be substantially higher for funding through this channel (i.e., your odds of receiving funding are lower if you apply through the fast-track application).

What We Fund
We continue to focus on reducing existential risk (x-risk) and supporting work that could radically improve humanity's future. In practice, we expect the clear majority of projects we support to be focused on reducing x-risk from AI. Example focus areas include:

* Research and analysis on x-risk and transformative technologies, with a focus on existential risks from Artificial Intelligence
* Ecosystem support to help others work on x-risk and transformative technologies
* Field-building and improving epistemics both within the extended ecosystem of people working on the long-term future, and among important decision-makers
* Efforts to improve global governance of transformative technologies, again with a particular focus on AI

Who Can Apply?
Individuals and organizations worldwide are welcome to apply. We frequently fund:

* Novel research projects
* Indepe
502f1756-e22f-460d-9caf-d24241dcf67f
trentmkelly/LessWrong-43k
LessWrong
Rule Thinkers In, Not Out
Imagine a black box which, when you pressed a button, would generate a scientific hypothesis. 50% of its hypotheses are false; 50% are true hypotheses as game-changing and elegant as relativity. Even despite the error rate, it’s easy to see this box would quickly surpass space capsules, da Vinci paintings, and printer ink cartridges to become the most valuable object in the world. Scientific progress on demand, and all you have to do is test some stuff to see if it’s true?

I don’t want to devalue experimentalists. They do great work. But it’s appropriate that Einstein is more famous than Eddington. If you took away Eddington, someone else would have tested relativity; the bottleneck is in Einsteins. Einstein-in-a-box at the cost of requiring two Eddingtons per insight is a heck of a deal.

What if the box had only a 10% success rate? A 1% success rate? My guess is: still most valuable object in the world. Even an 0.1% success rate seems pretty good, considering (what if we ask the box for cancer cures, then test them all on lab rats and volunteers?) You have to go pretty low before the box stops being great.

I thought about this after reading this list of geniuses with terrible ideas. Linus Pauling thought Vitamin C cured everything. Isaac Newton spent half his time working on weird Bible codes. Nikola Tesla pursued mad energy beams that couldn’t work. Lynn Margulis revolutionized cell biology by discovering mitochondrial endosymbiosis, but was also a 9-11 truther and doubted HIV caused AIDS. Et cetera.

Obviously this should happen. Genius often involves coming up with an outrageous idea contrary to conventional wisdom and pursuing it obsessively despite naysayers. But nobody can have a 100% success rate. People who do this successfully sometimes should also fail at it sometimes, just because they’re the kind of person who attempts it at all.

Not everyone fails. Einstein seems to have batted a perfect 1000 (unless you count his support for socialism). But failure s
8813d9a2-5f7a-4e05-a6e7-376f9ed08674
trentmkelly/LessWrong-43k
LessWrong
Meetup : West LA—The Relativity of Wrong
Discussion article for the meetup : West LA—The Relativity of Wrong
WHEN: 26 November 2014 07:00:00PM (-0800)
WHERE: 11066 Santa Monica Blvd, Los Angeles, CA
How to Find Us: Go into this Del Taco. We will be in the back room if possible. Parking is free in the lot out front or on the street nearby.
Discussion: Human ignorance is vast, so human knowledge is often understated. The domains of our ignorance keep getting smaller and further away from the decimal point. We will be discussing Isaac Asimov's important essay The Relativity of Wrong.
Recommended Reading:
* The Relativity of Wrong by Isaac Asimov
* A Technical Explanation of Technical Explanation
No prior exposure to Less Wrong is required; this will be generally accessible. In fact, this is a 101-level topic, only a touch more advanced than basic map and territory. While I'm at it, I want to mention that a few of us habitually show up early, so you should feel free to.
Discussion article for the meetup : West LA—The Relativity of Wrong
aa5cfc24-31b5-4c2c-835e-48500c0ea25b
StampyAI/alignment-research-dataset/lesswrong
LessWrong
TED talk by Eliezer Yudkowsky: Unleashing the Power of Artificial Intelligence
**Note:** The video is no longer available. It has been set to private. It'll eventually be released on the main TED channel.

This is a TED talk published early by a random TEDx channel. Tweet by EY: <https://twitter.com/ESYudkowsky/status/1655232464506466306>

> Looks like my Sudden Unexpected TED Talk got posted early by a TEDx account.
>
> YouTube description:
> Eliezer Yudkowsky is a foundational thinker on the long-term future of artificial intelligence.
>
> With more than 20 years of experience in the world of AI, Eliezer Yudkowsky is the founder and senior research fellow of the Machine Intelligence Research Institute, an organization dedicated to ensuring smarter-than-human AI has a positive impact on the world. His writings, both fiction and nonfiction, frequently warn of the dangers of unchecked AI and its philosophical significance in today's world.
>
> Yudkowsky is the founder of LessWrong, an online forum and community dedicated to improving human reasoning and decision-making, and the coinventor of the "functional decision theory," which states that decisions should be the output of a fixed mathematical function answering the question: "Which output of this very function would yield the best outcome?"
26c08cb6-af12-4fa5-94cf-ed7135487725
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Governing High-Impact AI Systems: Understanding Canada’s Proposed AI Bill. April 15, Carleton University, Ottawa
Join us this April 15th for a panel discussion, keynote address by the founder of the Montreal AI Ethics Institute, and networking mixer on the governance of AI systems in the Canadian context, held by EA Carleton and EA Canada.

The Canadian federal government is currently debating language that will govern and regulate high-impact AI systems. We've organized this one-day event to bring academic, policy, legal, technical, AI ethics, and longtermist concerns together for dialogue.

Our panelists are:

* Dr. Graeme Auld & Benjamin Faveri, authors of [“Governing AI through ethical standards”](https://carleton.ca/sppa/2022/governing-artificial-intelligence-through-ethical-standards/).
* Abhishek Gupta, founder of the [Montreal AI Ethics Institute](https://montrealethics.ai/).
* Wyatt Tessari, founder of the [AI Governance & Safety Canada](https://aigs.ca/) network, an AGI-governance-focused advocacy network.

Our panel discussion will be followed by a keynote delivered by Abhishek Gupta.
a9f1c6d1-05b8-418d-a15d-12b50d3a3896
trentmkelly/LessWrong-43k
LessWrong
Seeking Optimization of New Website "New Atheist Survival Kit," a go-to site for newly-made atheists
I've put together a website, "New Atheist Survival Kit" at atheistkit.wordpress.com

The idea is to help new atheists come to terms with their change in belief, and also invite them to become more than atheists: rationalists.

And if it helps theists become atheists, too, and helps old atheists become rationalists, more the better.

The bare bones of it are all in place now. Once a few people have gone over it, for editing, and for advice about what to include, leave out, improve, re-organize, whatever, I'll ask a bunch of atheist and rationalist communities to write up their own blurb for us to include in a list of communities that we'll point people to in the "Atheist Communities" or "Thinker's Communities" sections on the main menu. It includes my rough draft attempt to basically bring down the Metaethics sequence to a few thousand words and make it stylistically and conceptually accessible to a mass audience, which I could especially use some help with.

So, for now, I'm here to ask that anyone interested check it out, and message me any improvements they think worth making, from grammar and spelling all the way up to what content to include, or how to present things.

Thanks to all for any help.
8c543779-0bc2-4caf-a5ff-1ba0b9ce4a60
trentmkelly/LessWrong-43k
LessWrong
Rational Resolutions: Special CFAR Mini-workshop SATURDAY
This Saturday, Michael Valentine and I are taking the most fun and potent habit-formation material from CFAR's previous four-day workshops, and focusing that material towards making rational New Year's resolutions - and then actually keeping them. If you're in the Bay Area and interested in resolutions, or in seeing CFAR's habit-formation material distilled, this workshop is for you.

We'll meet at the CFAR office in Berkeley from 11AM to 5PM PST on Saturday, Jan. 4th. You can register here - as of this posting, there are still several spots left!

Normally an event like this would cost about $400, but we want to make this workshop more accessible. So, this time the event costs $195. And we’re confident enough in the value of this material that we offer a no-hassle full money-back guarantee: If, after attending this workshop, you decide you didn’t get your money’s worth, ask to have it returned and it’s yours. (We do recommend you try the techniques out for a week or two first. Some of them are likely to surprise you!)