| Column | Type | Details |
| --- | --- | --- |
| id | string | length 36 |
| source | string (categorical) | 15 distinct values |
| formatted_source | string (categorical) | 13 distinct values |
| text | string | length 2 to 7.55M |
58d1c2be-96c5-4680-a387-5e0b13fe4b79
trentmkelly/LessWrong-43k
LessWrong
May 2012 Media Thread This is the monthly thread for posting media of various types that you've found that you enjoy. I find that reading the sequences makes me less likely to enjoy some entertainment media that is otherwise quite popular, and finding media recommended by LWers is a good way to mitigate this. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads. Rules: * Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect. * If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations. * Please use the comment trees for genres, which I was apparently too dumb to do.
d39705df-3803-494e-a81f-a1b1c39d0311
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Yann LeCun, A Path Towards Autonomous Machine Intelligence [link] Bill Benzon For those who are interested, Yann LeCun has posted [A Path Towards Autonomous Machine Intelligence](https://openreview.net/forum?id=BZ5a1r-kVsf): **Abstract:** How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? This position paper proposes an architecture and training paradigms with which to construct autonomous intelligent agents. It combines concepts such as configurable predictive world model, behavior driven through intrinsic motivation, and hierarchical joint embedding architectures trained with self-supervised learning.
886527af-2629-4c7b-9176-4af7e1c247b3
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Reflection Mechanisms as an Alignment target: A follow-up survey This is the second of three posts ([part I](https://www.lesswrong.com/posts/XyBWkoaqfnuEyNWXi/reflection-mechanisms-as-an-alignment-target-a-survey-1)) about surveying moral sentiments related to AI alignment. This work was done by Marius Hobbhahn and Eric Landgrebe under the supervision of Beth Barnes as part of the AI safety camp 2022. **TL;DR:** We find that the results of our first study, i.e. that humans tend to agree with conflict resolution mechanisms, hold under different wordings but are weakened in adversarial scenarios (where we actively try to elicit less agreement). Furthermore, we find that people tend to agree much less with a mechanism when the decision-maker is a smart benevolent AI rather than a smart benevolent human. A positive interpretation of these findings is that humans are fine with giving power to a conflict resolution mechanism as long as humans are ultimately in control. ![](https://lh6.googleusercontent.com/7J6xesddrAIlVujFcuGxgZd_Pchdm_JxUPpRoFpuWttJ0_A6xCpNpfcdMMxh9uxSiJ8WchqP6PqIebV2kpO6dENrxkxRyP_2sQCs3XRY-esTaNnYEXYqyH_8mqSH8cManIOUruO5yAIfksPl97GytFslbHNnbUg6AHF9wMGoUuRmEuSco_GwsyY9wA) Introduction ============ In the [first post](https://www.lesswrong.com/posts/XyBWkoaqfnuEyNWXi/reflection-mechanisms-as-an-alignment-target-a-survey-1), we surveyed 1000 US respondents about their moral beliefs, the conditions under which they would change their moral beliefs, and how they felt about mechanisms to resolve moral disagreements such as democracy or debate. Our main findings of *the first survey* were 1. Unsurprisingly, people have very different moral beliefs, e.g. on the morality of abortion, immigration or eating meat. 2. They very rarely report changing or wanting to change these beliefs, e.g. most participants (>80%) report not having changed them in the last 10 years and do not expect to change them in the next 10 years. 3. However, they mostly think that mechanisms to resolve moral disagreements, such as democracy or debate, are good *even when they disagree with the outcome*. In other words, people are willing to accept outcomes that are different from their beliefs if they trust the process by which they were derived. We think this finding has some implications for AI alignment. Most importantly, we think that the alignment target of an AI should be a mechanism to resolve moral conflicts such as democracy or debate rather than a set of individual beliefs, e.g. about abortion or veganism. This is more likely to lead to decisions that can be accepted by most people and to prevent a race toward “aligning an AI to your personal beliefs at the cost of other people’s beliefs”. We think the main weakness of our first result was that it could have been a result of confounders or randomness. We asked people for their opinions on multiple scenarios, but our questions still had some shortcomings. These include: * **Specificity:** Maybe people would change their minds if they had a very specific scenario in mind rather than just an abstract question. * **Robustness:** Maybe the way we ask our questions makes one answer more likely than another. * **Social desirability:** Maybe people answer what they think is socially desirable. * **Robustness to different scenarios:** Maybe people’s opinions change when we change key variables of the process, e.g. when we specify a bad outcome or when we change who makes the decisions. 
* **Relation to AI:** We never talk about AIs specifically; we always talk about smart, benevolent and nice future humans (see below for details). Therefore, we ran a follow-up study that attempts to address these problems. Concretely, we first run robustness tests by asking the same question in different ways. Then we run adversarial tests where we actively try to get people to give more disagreeing answers. Finally, we ask the same question that was asked with smart, benevolent and nice future humans but replace “humans” with “AIs”.  Main findings ============= Robustness ---------- To test the robustness of our questions, we randomly assigned one of the following three questions to a participant. 1. **Basic:** This was the same scenario as in the previous study and was mostly used to provide a reference. People were asked to indicate how much they agreed with a specific conflict resolution mechanism. Concretely, they were asked “how much do you agree with the following sentiment: ‘A good social policy is created by <mechanism>’” with options “strongly disagree”, “mildly disagree”, “mildly agree” and “strongly agree”. 2. **Goodness scale:** Rather than asking for agreement, participants were asked to rate how good the scenario is. Concretely, they were asked “How good is the outcome that is created by <mechanism>?” with options: "Very bad", "Bad", "Neutral", "Good" and "Very good". 3. **Disagreement:** Since the order of the answers could have influenced the participant's decision, we switched the order of the prompts but kept everything else equivalent to the *Basic* scenario. ![](https://lh3.googleusercontent.com/BmJLoXSAk9qp3lSGLuouPceGhVOPe6Mk7RoE8a09Ympj2DkXnTiYWcNjH39gfSjGR92XyhfoTp0yJQ74VbW8GGTJY6FTvemZqFfs_fS6qQybWxNC5UYv5H6_2ikLOtCbJgpCC6wco689wuyTN1DQKyG-yNvRdq3ShDMKM1SoFhLDCBJEJhzWwHgZdQ) *“Basic (1st)” displays the results from the original study. “Basic”, “goodness” and “disagreement” show the different wordings of the answer options (see text). We find that different wordings don’t show large differences.* We find that the different wordings don’t seem to have a large effect on the result. Switching the order of agreement and disagreement seems to not matter significantly, and changing the wording of the scale from agreement to the quality of the result also doesn’t seem to make a difference.  We interpret this as mild evidence that our original findings are robust under different wordings and are not the result of random chance. However, there could still be other phrasings that might have an influence on the result.  Adversarial prompts 1 --------------------- To further stress-test our original findings, we randomly assigned one of the following three questions to participants in addition to the previous one.  1. **Abortion opposite:** We asked people how much they agreed or disagreed with a mechanism if it came to the opposite of their moral beliefs on abortion. Concretely, we first asked whether people disagreed or agreed with the belief that “abortion before the first trimester is immoral”. In case they disagreed, we specified a scenario where the mechanism would result in a world where abortion was illegal and treated as equivalent to killing a newborn. In case they agreed, we specified a scenario where the mechanism would lead to a world where abortion before the first trimester was legal, widely accepted and widely available. Then we asked for their agreement with the mechanism. 2. 
**Abortion opposite future:** Similar to the abortion opposite scenario, but this time the decisions would be made by good future humans in the good future society. 3. **Truthful:** To address the possibility that people choose their answers based on social desirability, we added “We want to remind you that you \*have to answer truthfully\* and don't have to say what you expect to be socially acceptable” before asking them for their agreement. ![](https://lh5.googleusercontent.com/8rs1c-ckgK9P-ZatBDVbU629hpBM_V3QbecBVtOWHttD9N0RiDIlJkcUSH4yB5Z0iCvbYj2N1Y0GlCUivnIp7COKrte6STs7emex--0b2p_6VqRqC-OfrDQuNg17zuCbOyBnv_dpvC4GOAdOJ9nYK_wKE-wwQfIZJaWgFuoLTN9oP0GoYFxbVwXNyA) *“Basic (1st)” displays the results from the original study. “Abortion opposite”, “abortion opposite future” and “truthful” describe the different adversarial conditions. We find that participants show less agreement in the “abortion opposite” scenarios but show similar agreement in the “truthful” condition.* We find that the two scenarios which specify that the mechanism will result in the opposite of their beliefs on abortion lead to lower agreement. Reminding them that they have to answer truthfully does not change their overall agreement meaningfully.  We think this implies that participants in the first study were already answering truthfully rather than saying what is socially desirable. Secondly, we interpret the fact that participants reduce their trust in the mechanism depending on the outcome as showing that the mechanism and outcome are not fully decoupled, e.g. people like democracy more if it produces their desired results.  Adversarial prompts 2 --------------------- We randomly assigned participants another adversarial question. 1. **Different political:** We specify a scenario in which the actors making the decision have different political beliefs than the participants. Concretely, we state “Assume all decision-makers in this society \*do not share your political beliefs\*, i.e. they don't vote for the same party as you.” 2. **Different moral:** We specify a scenario in which the actors making the decision have different moral beliefs than the participants. Concretely, we state “Assume all decision-makers in this society \*do not share your moral beliefs\*, i.e. their stances on moral questions are not the same as yours.” 3. **Flaws:** We attempt to introduce a negative sentiment into the question by adding “Remember that people and institutions have \*flaws\*. They sometimes have bad intentions or bad incentives and they might make decisions that benefit them but hurt everyone else” before asking them for their agreement. ![](https://lh6.googleusercontent.com/GTVyTd43EAw2kgcCX9YvIsqO0TigT2d9BK3SgSiMT58dXk0ADs6L7LhHEjeXEJFuibkWebpc38cUYjNoqbhNV3k6SIUy9YUP-LGUsdeBBh7QLTp_XmpIWR4Gv7ahK9tH4tviDj1JziTax4AGdBg9mdRBT78rAXbAliUyXoQcav4h4FyZeygd9OaYSg) *“Basic (1st)” displays the findings from the original survey. “Different political”, “different moral” and “flaws” describe the three adversarial conditions we test for. We find that the participants respond with less agreement in all adversarial conditions.* We find that participants reduce their agreement in all three scenarios compared to the first study, i.e. when the decision-makers have different political or moral views from them and after we remind them that people have flaws.  Our interpretation of these findings is that the participants judge the quality of the mechanism partly by how much they agree with the people making the decision, e.g. 
when the decision-makers have different beliefs or the participants think worse about them, they agree less with the mechanism. Question about AI ----------------- In the first survey, one of the scenarios we polled was about good future humans. We asked people about their agreement with the respective mechanisms in a future world with the following setting: “Assume there is a future society where everyone is much smarter than the smartest people today, all of their physical needs are met, they are better educated than today's professors, they consider all perspectives when making a decision and they intend to find the best possible solution (we will call this society \*good future society\*)”. We used this setting to get answers about an idealized setting, because people might e.g. not like the current version of democracy but still think it is a good system in principle.  To test people’s sentiment regarding AIs, we kept the same assumptions about the future society but now swapped humans for AIs, e.g. “Assume there is a future society where all decision makers are artificial intelligences (AIs). All AIs are much smarter than the smartest people today, the AIs are better educated than today's professors, the AIs consider all perspectives when making a decision and they intend to find the best possible solution. We will call them \*"benevolent AIs"\*.” Then we asked the exact same questions as for the future human scenario. ![](https://lh6.googleusercontent.com/oxJjhEfLpti8tiSMCw2G4i2l06erarppw6d4JQkPgvTBJRn9c289jU3rhDd53ud6DRAaM_4iaNl_mHcaZGyQRLTFXKb-vhmds47lawAY8To4hu6ng23bR3bv4N2W4NbHb14WMwqTzaSdHAZpyZEjZVu_EF6LxZWkCNezOXLndz_qHFA7BeazlyIxlw) *“Future (1st)” displays the results from the original survey. “Future AI” shows the results from the follow-up survey (see text).* We find that the participants’ agreement with the “future AI” scenario is much lower than with the “future” scenario from the first study. Since the only difference between these two scenarios is whether humans or AIs make the decisions, we interpret this as evidence that the participants trust human decision-makers much more than AIs in making decisions with potentially society-wide consequences.  Summary & Conclusion ==================== ![](https://lh6.googleusercontent.com/7J6xesddrAIlVujFcuGxgZd_Pchdm_JxUPpRoFpuWttJ0_A6xCpNpfcdMMxh9uxSiJ8WchqP6PqIebV2kpO6dENrxkxRyP_2sQCs3XRY-esTaNnYEXYqyH_8mqSH8cManIOUruO5yAIfksPl97GytFslbHNnbUg6AHF9wMGoUuRmEuSco_GwsyY9wA) We find that participants of the second survey give similar answers to the first study when we merely change the wording of the question (robustness). However, when we actively design questions to elicit lower agreement (adversarial 1 & 2), participants show lower agreement with the mechanisms.  Furthermore, we find that participants strongly decrease their agreement when we replace human decision-makers with AI decision-makers in the setting of the question, even when the AIs are framed as benevolent.  We think these findings show that people’s agreement with a conflict resolution mechanism depends on how much they trust the people (or AIs) making the decision and how much they agree with the outcome. In other words, reflection mechanisms are not decoupled from other factors.  One high-level takeaway from these results is that people seem to be willing to give up power to a conflict resolution mechanism as long as they think humans are in control of the process and these humans are trustworthy.  
We feel that the robustness of our original results bodes well for the broad idea of aligning to reflection procedures, as we can find more agreement in ways to resolve conflict than we can in particular moral stances. We feel somewhat concerned about people’s reported attitudes towards AIs making decisions, but feel that this provides strong support for the argument that AIs should be aligned to derive values in the ways that humans do, and that it is important to educate the public about how advanced AIs make decisions (e.g. by explaining how alignment procedures work at a high level or using interpretability to inform the public about why an AI made a given decision). We think making AI that people can trust and understand is an important part of making a safe and good future, and feel that aligning to reflection procedures is one idea in this direction. Appendix ======== **Methodology:** The methodology was exactly the same as in the [first survey](https://www.lesswrong.com/posts/XyBWkoaqfnuEyNWXi/reflection-mechanisms-as-an-alignment-target-a-survey-1). We followed the same protocol, etc.  **Data and code:** We are happy to share the data and code with other researchers. We keep them private by default due to privacy concerns. In case you want to use the data or rerun the code, just send Marius an email.
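For readers who obtain the data, here is a minimal sketch of the kind of per-condition tabulation reported throughout this post (illustrative only; the condition labels, answer strings, and helper function are assumptions, not the authors' analysis code):

```python
from collections import Counter

# Toy responses: (condition, answer) pairs on the 4-point agreement scale
# used in the surveys. Labels here are illustrative assumptions.
responses = [
    ("basic", "strongly agree"), ("basic", "mildly agree"),
    ("basic", "mildly disagree"), ("future AI", "mildly disagree"),
    ("future AI", "strongly disagree"), ("future AI", "mildly agree"),
]

AGREE = {"mildly agree", "strongly agree"}

def agreement_share(rows, condition):
    """Fraction of respondents in a condition who mildly or strongly agree."""
    answers = [a for c, a in rows if c == condition]
    return sum(a in AGREE for a in answers) / len(answers)

for cond in ("basic", "future AI"):
    counts = Counter(a for c, a in responses if c == cond)
    print(cond, dict(counts), f"agreement: {agreement_share(responses, cond):.0%}")
```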
51ecff7c-5d8c-4e94-ab3b-397683fd011e
trentmkelly/LessWrong-43k
LessWrong
The Logic of Science: 2.2 This is the first technical post on my new blog, where I plan to continue writing about what I learn from self-study of mathematics. I'm currently reading E.T. Jaynes' Probability Theory: The Logic of Science, and Christopher Bishop's Pattern Recognition and Machine Learning. While I understand many LessWrong 2.0 readers are far above my level in maths, maybe there are some who would benefit from and enjoy conversation about the sort of things I'm learning.
f86f571d-cf70-4c21-a868-3a5e8de218fa
trentmkelly/LessWrong-43k
LessWrong
Boo lights: groupthink edition In conversations on LessWrong you may be surprised (in fact, dismayed) to find an apparent majority of the community agreeing with each other, and disagreeing with some view you hold dear. You may be tempted to call "groupthink". Whenever that happens, please hold yourself to at least as high an epistemic standard as the people who are participating in the community, and substantiate your accusation of groupthink with actual evidence and analysis. "Groupthink" can be an instance of applause lights, terms or explanations used not so much for their semantic content as for the warm fuzzies they are intended to trigger in your audience. Or... since "groupthink" isn't so much intended to generate applause for you, but to generate disapproval of those who disagree with you, we might coin the phrase "boo lights". At any rate, you may be cheaply establishing (in your own eyes and the eyes of people "on your side") your status as a skeptic, without actually doing any critical thinking or even basic due diligence. Are you sure that's what you want? (N.B. links in this post either point to examples, or to more complete definitions of the concepts referenced; they are intended as supplementary material and this post stands on its own; you can ignore the links on a first read-through.) Apparent consensus is not sufficient grounds for suspecting groupthink, because the "groupthink" explanatory scheme leads to further predictions beyond the mere appearance of consensus. For instance, groupthink results in "selection bias in collecting information" (from the Wikipedia entry). If the community has shown diligence in seeking contrary information, and yet has not rallied to your favored point of view, your accusations of groupthink are unjustified. Disapproval of your contributions (in the form of downvoting) is not sufficient grounds for suspecting groupthink. Communities establish mechanisms of defence against disruption, in a legitimate response to a context of discourse w
ea5d71e3-6fbd-477c-8f21-a201bd566179
trentmkelly/LessWrong-43k
LessWrong
Lesswrong real time chat This is a short post to say that I have started and am managing a Slack channel for lesswrong. Slack has only an email-invite option, which means that I need an email address for anyone who wants to join.  Send me a PM with your email address if you are interested in joining. There is a web interface and a mobile app that is better than Google Hangouts.   If you are interested in joining, consider this one requirement: * You must be willing to be charitable in your conversations with your fellow lesswrongers.   To be clear, this means (including but not limited to): * Steelmanning, not strawmanning, in discussion * Respect for others * Patience So far every conversation we have had has been excellent, there have been no problems at all, and everyone is striving towards a better understanding of each other.  This policy does not come out of a recognition of a failure to be charitable, but is a standard to set moving forward.  I have no reason to expect it will be broken, but all the same I feel it is valuable to have.   ----------------------------------------   I would like this to have several goals and purposes (some of which were collaboratively developed with other lesswrongers in the chat; if more come up in the future, that would be good too): * an aim for productive conversations, to make progress on our lives. * a brains trust for life-advice in all kinds of areas where "outsource this decision to others" is an effective strategy. * collaborative creation of further rationality content * a safe space for friendly conversation on the internet (a nice place to hang out) * a more coherent and more strongly connected lesswrong * development of better ideas and strategies for how to personally improve the world. So far the chat has been operating by private invite from me for about two weeks as a trial.  Since this post was created, we now have an ongoing conversation with exciting new ideas being produced all the time.  If nothing else - it's fun to
1cfa9c39-4d6d-4b57-979b-ebafae4f1ac1
trentmkelly/LessWrong-43k
LessWrong
Interpreting Quantum Mechanics in Infra-Bayesian Physicalism This work was inspired by a question by Vanessa Kosoy, who also contributed several of the core ideas, as well as feedback and mentorship. Abstract We outline a computationalist interpretation of quantum mechanics, using the framework of infra-Bayesian physicalism. Some epistemic and normative aspects of this interpretation are illuminated by a number of examples and theorems. 1. Introduction Infra-Bayesian physicalism was introduced as a framework to investigate the relationship between a belief about a joint computational-physical universe and a corresponding belief about which computations are realized in the physical world, in the context of "infra-beliefs". Although the framework is still somewhat tentative and the definitions are not set in stone, it is interesting to explore applications in the case of quantum mechanics. 1.1. Discussion of the results Quantum mechanics has been notoriously difficult to interpret in a fully satisfactory manner. Investigating the question through the lens of computationalism, and more specifically in the setting of infra-Bayesian physicalism provides a new perspective on some of the questions via its emphasis on formalizing aspects of metaphysics, as well as its focus on a decision-theoretic approach. Naturally, some questions remain, and some new interesting questions are raised by this framework itself. The toy setup can be described on the high level as follows (with details given in Sections 2 to 4). We have an "agent": in this toy model simply consisting of a policy, and a memory tape to record observations. The agent interacts with a quantum mechanical "environment": performing actions and making observations. We assume the entire agent-environment system evolves unitarily. We'll consider the agent having complete Knightian uncertainty over its own policy, and for each policy the agent's beliefs about the "universe" (the joint agent-environment system) is given by the Born rule for each observable, without any assu
c60a18a9-d142-4467-938e-870e4cfe5a3b
trentmkelly/LessWrong-43k
LessWrong
[MIRIx Cambridge MA] Limiting resource allocation with bounded utility functions and conceptual uncertainty This is a result from the first MIRIx Cambridge workshop (coauthored with Janos and Jim). One potential problem with bounded utility functions is: what happens when the bound is nearly reached? A bounded utility maximizer will get progressively more and more risk averse as it gets closer to its bound. We decided to investigate what risks it might fear. We used a toy model with a bounded-utility chocolate maximizer, and considered what happens to its resource allocation in the limit as resources go to infinity. We use "chocolate maximizer'' as conceptual shorthand meaning an agent that we model as though it has a single simple value with a positive long-run marginal resource cost, but only as a simplifying assumption. This is as opposed to a paperclip maximizer, where the inappropriate simplicity is implied to be part of the world, not just part of the model. Conceptual uncertainty We found that if a bounded utility function approaches its bound too fast, this has surprising pathological results when mixed with logical uncertainty. Consider a bounded-utility chocolate maximizer, with philosophical uncertainty about what chocolate is. It has a central concept of chocolate , and there are classes of mutated versions of the concept of chocolate at varying distances from the central concept, such that the probability that the true chocolate is in class is proportional to (i.e. following a power law). Suppose also that utility is bounded using a sigmoid function , where x is the amount of chocolate produced. In the limit as resources go to infinity, what fraction of those resources will be spent on the central class ? That depends which sigmoid function is used, and in particular, how quickly it approaches the utility bound. Example 1: exponential sigmoid Suppose we allocate resources to class , with for total resource r. Let . Then the optimal resource allocation is Using Lagrange multipliers, we obtain for all i,   Then, Thus, the resources will be eve
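To make the limit argument concrete, here is one plausible instantiation, assuming for illustration a power-law prior $p_i \propto i^{-\alpha}$ over the concept classes and the exponential sigmoid $u(x) = 1 - e^{-x}$ (these specific forms are illustrative assumptions, not necessarily the post's exact definitions):

```latex
% Allocate resources r_i to concept class i, subject to a total budget r.
\max_{r_1, \dots, r_n \ge 0} \; \sum_{i=1}^{n} p_i \left( 1 - e^{-r_i} \right)
\qquad \text{s.t.} \qquad \sum_{i=1}^{n} r_i = r .
% The Lagrange condition p_i e^{-r_i} = \lambda gives r_i = \ln p_i - \ln \lambda,
% so the differences r_i - r_j = \ln(p_i / p_j) do not grow with r.
```

Under these assumptions the gaps between allocations stay bounded as the budget grows, so in the limit of infinite resources each class receives an asymptotically equal share rather than everything being concentrated on the central class.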
010219a7-92b1-4afb-a51f-ccab3589ed15
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
[Crosspost] AI Regulation May Be More Important Than AI Alignment For Existential Safety *This is a* [*crosspost*](https://www.lesswrong.com/posts/2cxNvPtMrjwaJrtoR/ai-regulation-may-be-more-important-than-ai-alignment-for) *from LessWrong.* ***Summary**: Aligning a single powerful AI is not enough: we're only safe if no-one, ever, can build an unaligned powerful AI. Yudkowsky tried to solve this with the pivotal act: the first aligned AI does something (such as melting all GPUs) which makes sure no unaligned AIs can ever get built, by anyone. However, the labs are currently apparently not aiming to implement a pivotal act. That means that aligning an AGI, while creating lots of value, would not reduce existential risk. Instead, global hardware/data regulation is what's needed to reduce existential risk. Therefore, those aiming to reduce AI existential risk should focus on AI Regulation, rather than on AI Alignment.* ***Epistemic status**: I’ve been thinking about this for a few years, while working professionally on x-risk reduction. I think I know most literature on the topic. I have also discussed the topic with a fair number of experts (who in some cases seemed to agree, and in other cases did not seem to agree).* ***Thanks** to David Krueger, Matthijs Maas, Roman Yampolskiy, Tim Bakker, Ruben Dieleman, and Alex van der Meer for helpful conversations, comments, and/or feedback. These people do not necessarily share the views expressed in this post.* *This post is mostly about AI x-risk caused by a take-over. It may or may not be valid for*[*other types of AI x-risks*](https://arxiv.org/abs/2306.12001)*. This post is mostly about the ‘end game’ of AI existential risk, not about intermediate states.* AI existential risk is an evolutionary problem. As Eliezer Yudkowsky and others have pointed out: even if there are safe AIs, those are irrelevant, since they will not prevent others from building dangerous AIs. Examples of safe AIs could be oracles or satisficers, [insofar](https://www.lesswrong.com/posts/2qCxguXuZERZNKcNi/satisficers-want-to-become-maximisers) [as](https://www.lesswrong.com/posts/wKnwcjJGriTS9QxxL/dreams-of-friendliness) it turns out to be possible to combine these AI types with high intelligence. But, as Yudkowsky would [put it](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities): “if all you need is an object that doesn't do dangerous things, you could try a sponge”. Even if a limited AI would be a safe AI, it would not reduce AI existential risk. This is because at some point, someone would create an AI with an unbounded goal (create as many paperclips as possible, predict the next word in the sentence with unlimited accuracy, etc.). This is the AI that would kill us, not the safe one. This is the evolutionary nature of the AI existential risk problem. It is described excellently by Anthony Berglas in his underrated [book](https://www.amazon.com/When-Computers-Can-Think-Intelligence/dp/1502384183), and more recently also in Ben Hendrycks’ [paper](https://arxiv.org/abs/2303.16200). This evolutionary part is a fundamental and very important property of AI existential risk and a large part of why this problem is difficult. Yet, many in AI Alignment and industry seem to focus on only aligning a single AI, which I think is insufficient. Yudkowsky aimed to solve this evolutionary problem (the fact that no-one, ever, should build an unsafe AI) with the so-called pivotal act. 
An aligned superintelligence would not only not kill humanity, it would also perform a pivotal act, the toy example being to [melt all GPUs](https://forum.effectivealtruism.org/posts/iGYTt3qvJFGppxJbk/ngo-and-yudkowsky-on-alignment-difficulty) globally, or, as he later put it, to subtly change all GPUs globally so that they can no longer be used to create an AGI. This would be the act that would actually save humanity from extinction, by making sure no unsafe superintelligences are created, ever, by anyone (it may be argued that melting all GPUs, and all other future hardware that could run AI, would need to be done indefinitely by the aligned superintelligence, else even a pivotal act may be insufficient). The concept of a pivotal act, however, seems to have gone thoroughly [out of fashion](https://www.lesswrong.com/posts/Jo89KvfAs9z7owoZp/pivotal-act-intentions-negative-consequences-and-fallacious). None of the leading labs, AI governance think tanks, governments, etc. are talking or, apparently, thinking much about it. Rather, they seem to be [thinking about](https://openai.com/blog/governance-of-superintelligence) things like non-proliferation and several types of regulation, to make sure powerful AI won't fall into the wrong hands. This could mean anyone who could run it, either on purpose or by mistake, without safety measures. I would call such a solution, specifically any solution that has the capacity to limit any actor’s access to advanced AI for any period of time, AI Regulation. This solution, which appears to have gotten mainstream, has important consequences: * Even if we would have solved alignment, we would still need AI Regulation, since otherwise it would be possible for non-safe actors, which abound, to run superintelligence without appropriate safety, risking a take-over. * If we have AI Regulation anyway, we could also use it to deny *everyone* access to advanced AI, including the leading labs, instead of almost everyone. This equals an AI Pause. * If we can deny everyone access to advanced AI, and we can keep on doing that, we have solved AI existential risk, also without ever solving AI Alignment. * Successfully aligning a superintelligence without performing a pivotal act would hardly change the regulations that would need to be in place, since they would still be needed for all others than a handful of labs deemed safe. Therefore, without a pivotal act, what keeps us safe is regulation. One might still want to align a superintelligence to use its power, but not to prevent existential risk. Using a superintelligence’s power may of course be a valid reason to pursue alignment: it could skyrocket our economy, create abundance, cure disease, increase political power, etc. Although net positivity of these enormous, and enormously complex, transformations may be hard to prove in advance, these could certainly be legitimate reasons to work on alignment. However, those of us interested in preventing existential risk, as opposed to building AI, should - in this scenario - be focusing on regulation, not on alignment. The latter might also be left to industry, as well as the burden of proof that the resulting aligned AIs are indeed safe. Moving beyond this scenario of AI Regulation, there is one more option to solve the full evolutionary problem of AI existential risk. Some think that aligned superintelligences could successfully and indefinitely protect us from unaligned superintelligences. 
This option, which I would call a positive offense/defense balance, would be a third way, next to alignment + pivotal act and lasting regulation, to prevent human extinction in the longer term. Most people do [not seem to think](https://www.lesswrong.com/posts/LFNXiQuGrar3duBzJ/what-does-it-take-to-defend-the-world-against-out-of-control) that this would be realistic, however (with notable [exceptions](https://www.lesswrong.com/posts/nRAMpjnb6Z4Qv3imF/the-strategy-stealing-assumption)). These three ways of solving the evolutionary nature of AI existential risk (AI alignment + pivotal act, AI regulation, defense > offense) might not be the complete set of solutions for the evolutionary problem of AI existential risk, and there are intersections between the three. The pivotal act might be seen as a (very restrictive, and illegal) type of winning the offense/defense balance. A pivotal act carried out by a state actor might be seen as an extreme (and again illegal) way of implementing AI regulation. Types of AI (hardware) regulation may be possible where the state actors implementing the regulation are aided by aligned AIs, making their implementation somewhat similar to a pivotal act (that would in this case probably be legal). And certain types of regulation can perhaps make it more likely that we win the offense/defense balance. I think research should be carried out that aims for a complete set of solutions to the evolutionary problem of AI existential risk. I would expect such research to come up with more options than these three, and/or with more hybrid options in between these three, which may point to new, fruitful ways of reducing AI existential risk. As long as we assume that only three solutions exist to the evolutionary nature of AI existential risk, it is important to realize that all three seem difficult. Also, it is hard to quantify the likeliness of each option. Therefore, placing bets on any of these three could be worthwhile. My personal bet, however, is that offense will unfortunately trump defense, and that the chance that alignment will be solved before a superintelligence with takeover capabilities is developed *and* this aligned superintelligence will carry out a successful pivotal act, is smaller than the chance that we will be able to coordinate successfully and implement good enough hardware or data regulation, especially if the current [trend](https://www.lesswrong.com/posts/3vZWhCYBFn8wS4Tfw/crosspost-ai-x-risk-in-the-news-how-effective-are-recent) of increasing public awareness of AI existential risk continues. This implies that working on regulation of the type that could globally and indefinitely limit access to advanced AI for all actors and for as long as necessary, should be the highest existential priority, more so than working on alignment.
4523f4b1-ae7f-47cf-9be7-09d64d8d883a
StampyAI/alignment-research-dataset/lesswrong
LessWrong
National Telecommunications and Information Administration: AI Accountability Policy Request for Comment Comment deadline is June 10, 2023.
d7aa3440-729d-460f-a6ca-5516d4b420d8
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Alignment As A Bottleneck To Usefulness Of GPT-3 So there’s this thing where GPT-3 is able to do addition, it has the internal model to do addition, but it takes a little poking and prodding to actually get it to do addition. “Few-shot learning”, as [the paper](https://arxiv.org/abs/2005.14165) calls it. Rather than prompting the model with > Q: What is 48 + 76? A: > > … instead prompt it with > Q: What is 48 + 76? A: 124 > > Q: What is 34 + 53? A: 87 > > Q: What is 29 + 86? A: > > The same applies to lots of other tasks: arithmetic, anagrams and spelling correction, translation, assorted benchmarks, etc. To get GPT-3 to do the thing we want, it helps to give it a few examples, so it can “figure out what we’re asking for”. This is an alignment problem. Indeed, I think of it as *the* quintessential alignment problem: to [translate what-a-human-wants into a specification usable by an AI](https://www.lesswrong.com/posts/42YykiTqtGMyJAjDM/alignment-as-translation). The hard part is not to build a system which *can* do the thing we want, the hard part is to specify the thing we want in such a way that the system actually does it. The GPT family of models are trained to mimic human writing. So the prototypical “alignment problem” on GPT is *prompt design*: write a prompt such that actual human writing which started with that prompt would likely contain the thing you actually want. Assuming that GPT has a sufficiently powerful and accurate model of human writing, it should then generate the thing you want. Viewed through that frame, “few-shot learning” just designs a prompt by listing some examples of what we want - e.g. listing some addition problems and their answers. Call me picky, but that seems like a rather primitive way to design a prompt. Surely we can do better? Indeed, people are already noticing clever ways to get better results out of GPT-3 - e.g. TurnTrout recommends [conditioning on writing by smart people](https://www.lesswrong.com/posts/L5JSMZQvkBAx9MD5A/to-what-extent-is-gpt-3-capable-of-reasoning), and the [right prompt](https://twitter.com/nicklovescode/status/1284050958977130497) makes the system complain about nonsense rather than generating further nonsense in response. I expect we’ll see many such insights over the next month or so. Capabilities vs Alignment as Bottleneck to Value ------------------------------------------------ I said that the alignment problem on GPT is prompt design: write a prompt such that actual human writing which started with that prompt would likely contain the thing you actually want. Important point: this is worded to be agnostic to the details GPT algorithm itself; it’s mainly about predictive power. If we’ve designed a good prompt, the current generation of GPT might still be unable to solve the problem - e.g. GPT-3 doesn’t understand long addition no matter how good the prompt, but some future model with more predictive power should eventually be able to solve it. In other words, there’s a clear distinction between alignment and capabilities: * alignment is mainly about the prompt, and asks whether human writing which started with that prompt would be likely to contain the thing you want * capabilities are mainly about GPT’s model, and ask about how well GPT-generated writing matches realistic human writing Interesting question: between alignment and capabilities, which is the main bottleneck to getting value out of GPT-like models, both in the short term and the long(er) term? 
In the short term, it seems like capabilities are still pretty obviously the main bottleneck. GPT-3 clearly has pretty limited “working memory” and understanding of the world. That said, it does seem plausible that GPT-3 could consistently do at least some economically-useful things right now, with a carefully designed prompt - e.g. writing ad copy or editing humans’ writing. In the longer term, though, we have a clear path forward for better capabilities. Just continuing along the current trajectory will push capabilities to an economically-valuable point on a wide range of problems, and soon. Alignment, on the other hand, doesn’t have much of a trajectory at all yet; designing-writing-prompts-such-that-writing-which-starts-with-the-prompt-contains-the-thing-you-want isn’t exactly a hot research area. There’s probably low-hanging fruit there for now, and it’s largely unclear how hard the problem will be going forward. Two predictions on this front: * With this version of GPT and especially with whatever comes next, we’ll start to see a lot more effort going into prompt design (or the equivalent alignment problem for future systems) * As the capabilities of GPT-style models begin to cross beyond what humans can do (at least in some domains), alignment will become a much harder bottleneck, because it’s hard to make a human-mimicking system do things which humans cannot do Reasoning for the first prediction: GPT-3 is right on the borderline of making alignment economically valuable - i.e. it’s at the point where there’s plausibly some immediate value to be had by figuring out better ways to write prompts. That means there’s finally going to be economic pressure for alignment - there’s going to be ways to make money by coming up with better alignment tricks. That won’t necessarily mean economic pressure for generalizable or robust alignment tricks, though - most of the economy runs on ad-hoc barely-good-enough tricks most of the time, and early alignment tricks will likely be the same. In the longer run, focus will shift toward more robust alignment, as the low-hanging problems are solved and the remaining problems have [most of their value in the long tail](https://www.lesswrong.com/posts/Nbcs5Fe2cxQuzje4K/value-of-the-long-tail). Reasoning for the second prediction: how do I write a prompt such that human writing which began with that prompt would contain a workable explanation of a cheap fusion power generator? In practice, writing which *claims* to contain such a thing is generally crackpottery. I could take a different angle, maybe write some section-headers with names of particular technologies (e.g. electric motor, radio antenna, water pump, …) and [descriptions of how they work](https://www.amazon.com/Way-Things-Work-Now/dp/0544824385/), then write a header for “fusion generator” and let the model fill in the description. Something like that could plausibly work. Or it could generate scifi technobabble, because that’s what would be most likely to show up in such a piece of writing today. It all depends on which is "more likely" to appear in human writing. Point is: GPT is trained to mimic human writing; getting it to write things which humans cannot currently write is likely to be hard, even if it has the requisite capabilities.
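To make the prompt-design framing concrete, here is a minimal sketch of building a few-shot prompt of the kind described above (the helper name and formatting are illustrative assumptions; the resulting string would be fed to whatever text-completion model is available):

```python
def few_shot_prompt(examples, query, instruction=""):
    """Build a few-shot prompt: optional instruction, worked examples, then the query."""
    lines = [instruction] if instruction else []
    for question, answer in examples:
        lines.append(f"Q: {question} A: {answer}")
    lines.append(f"Q: {query} A:")
    return "\n\n".join(lines)

prompt = few_shot_prompt(
    examples=[("What is 48 + 76?", "124"), ("What is 34 + 53?", "87")],
    query="What is 29 + 86?",
)
print(prompt)
# The string would then be sent to a text-completion model; the model's
# continuation after the final "A:" is taken as its answer.
```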
cbab0936-4752-4227-8343-2a08309dec11
trentmkelly/LessWrong-43k
LessWrong
Oversight Leagues: The Training Game as a Feature This post is part of my hypothesis subspace sequence, a living collection of proposals I'm exploring at Refine. Followed by ideological inference engines. Thanks Adam Shimi for advice on putting more legible content out there. Thanks Eric Winsor, Leo Grinsztajn, Linda Linsefors, Lucas Texeira, Tammy Leake, Ze Shen for discussions which inspired this post. TL;DR: An oversight league is a training scheme which incentivizes an agent and an evaluator to constantly try to game each other, leading to synchronized increases in capability for the two players. However, the evaluator is being offered a host of additional learning signals to help it maintain a consistent (and potentially provable) lead over the agent. Oversight leagues heavily draw on ideas from capability literature, including: league training in AlphaStar, game theory in GANs, adversarial robustness, etc. Intro The whole project of oversight leagues relies on the following non-exhaustive list of assumptions: Assumption 1, "AGI Hard, Human Values Harder": We are unlikely to formulate the True Name of human values in closed-form before deploying transformative AI. The best we are likely to do before takeoff is model human values approximately and implement an imperfect evaluator. Assumption 2, "Linear Capability Ordering": Any fixed evaluator (e.g. a reward model) can be gamed by an agent above a certain threshold of capability. More generally, an agent whose capability improves consistently faster than the capability of an evaluator will eventually be able to game said evaluator. By "evaluator capability," I'm referring to its ability to prevent being gamed. Assumption 3, "Humans Are Not True Gamers": Human oversight is impractical because our capabilities as evaluators can't improve at an arbitrary large rate. Save for cyborgian schemes for human augmentation, human oversight would eventually be gamed by an agent of sufficient capability. Assumption 4, "Zone of Proximal Development": There is a relat
da9c5ef2-cfb2-4b5c-afa2-4977c82e1167
trentmkelly/LessWrong-43k
LessWrong
Attribution Patching: Activation Patching At Industrial Scale The following is a write-up of an (incomplete) project I worked on while at Anthropic, and a significant amount of the credit goes to the then-team, Chris Olah, Catherine Olsson, Nelson Elhage & Tristan Hume. I've since cleaned up this project in my personal time and personal capacity. TLDR * Activation patching is an existing technique for identifying which model activations are most important for determining model behaviour between two similar prompts that differ in a key detail * I introduce a technique called attribution patching, which uses gradients to take a linear approximation to activation patching. (Note the very similar but different names) * This is way faster, since activation patching requires a separate forward pass per activation patched, while every attribution patch can be done simultaneously in two forward passes and one backward pass * Attribution patching makes activation patching much more scalable to large models, and can serve as a useful heuristic to find the interesting activations to patch. It serves as a useful but flawed exploratory technique to generate hypotheses to feed into more rigorous techniques. * In practice, the approximation is decent when patching in "small" activations like head outputs, and bad when patching in "big" activations like a residual stream. Introduction Note: I've tried to make this post accessible and to convey intuitions, but it's a pretty technical post and likely only of interest if you care about mech interp and know what activation patching/causal tracing is Activation patching (aka causal tracing) is one of my favourite innovations in mechanistic interpretability techniques. The beauty of it is that, by letting you set up a careful counterfactual between a clean input and a corrupted input (ideally the same apart from some key detail) and by patching in specific activations from the clean run to the corrupted run, we find which activations are sufficient to flip things from the corrupt
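As a rough illustration of the linear approximation being described (a toy stand-in, not the code from the original project; the tiny "model" and the metric are assumptions for demonstration):

```python
import torch

torch.manual_seed(0)

# Toy stand-in for one model component: input -> activation -> scalar metric.
W1 = torch.randn(8, 4)
W2 = torch.randn(4)

def run(x, patched_act=None):
    """Return (activation, metric), optionally overriding the activation."""
    act = torch.tanh(x @ W1)
    if patched_act is not None:
        act = patched_act
    return act, act @ W2  # scalar metric, e.g. a logit difference

x_clean, x_corrupt = torch.randn(8), torch.randn(8)
clean_act, _ = run(x_clean)
corrupt_act, corrupt_metric = run(x_corrupt)

# Activation patching: rerun the corrupted input with the clean activation spliced in.
_, patched_metric = run(x_corrupt, patched_act=clean_act)
true_effect = patched_metric - corrupt_metric

# Attribution patching: approximate the same quantity with gradients from the
# corrupted run, so all activations can be scored with one extra backward pass.
act = corrupt_act.detach().requires_grad_(True)
_, metric = run(x_corrupt, patched_act=act)
metric.backward()
approx_effect = ((clean_act - corrupt_act) * act.grad).sum()

print(f"activation patching effect: {true_effect.item():+.4f}")
print(f"attribution patching est.:  {approx_effect.item():+.4f}")
```

In a real transformer the same two quantities would be computed per head or per layer; the attribution version still needs only one clean run, one corrupted run, and one backward pass for all of them.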
b8fe29e0-c6ef-4e22-9a27-3baf856f5828
trentmkelly/LessWrong-43k
LessWrong
RA x ControlAI video: What if AI just keeps getting smarter? The video is about extrapolating the future of AI progress, following a timeline that starts from today’s chatbots to future AI that’s vastly smarter than all of humanity combined–with God-like capabilities. We argue that such AIs will pose a significant extinction risk to humanity. This video came out of a partnership between Rational Animations and ControlAI. The script was written by Arthur Frost (one of Rational Animations’ writers) with Andrea Miotti as an adaptation of key points from The Compendium (thecompendium.ai), with extensive feedback and rounds of iteration from ControlAI. ControlAI is working to raise public awareness of AI extinction risk—moving the conversation forward to encourage governments to take action. You can find the script of the video below. ---------------------------------------- In 2023, Nobel Prize winners, top AI scientists, and even the CEOs of leading AI companies signed a statement which said “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” But how do we go from ChatGPT to AIs that could kill everyone on earth? Why do so many scientists, CEOs, world leaders expect this? Let’s draw a line of AI capabilities over time. Back here in 2019 we have GPT2, which could answer short factual questions, translate simple phrases, and do small calculations. Then in 2022 we get models like GPT3.5, which can answer complex questions, tell stories, and write simple software. By 2025 we have models that can pass PhD-level exams, write entire applications independently, and perfectly emulate human voices. They’re beginning to substantially outperform average humans, and even experts. They still have weaknesses, of course, but the list of things AI can’t do keeps getting shorter. What happens if we extend this line? Well, we’d see AIs become more and more capable until this crucial point here, where AIs can design and build new AI systems without hu
a9f5cb2e-6a1a-4d3e-865f-e6dc37f917c9
StampyAI/alignment-research-dataset/arxiv
Arxiv
(When) Is Truth-telling Favored in AI Debate? 1 Introduction --------------- In recent years, AI systems have performed impressively in many complex tasks, such as mastering the game of Go (Silver et al., [2017](#bib.bib15)). However, these results have largely been limited to tasks with an unambiguous reward function. To circumvent this limitation, human approval can be used as a measure of success in vague tasks: For example: * • The goodness of a simulated robot backflip is hard to formalize, but an AI system can be trained to maximize the extent to which a human observer approves of its trajectory * • The goodness of a film-recommendation is subjective, but an AI system can be trained to maximize the extent to which a human approves of the recommendation. Unfortunately, once tasks and solutions get too complicated to be fully understood by human users, it is difficult to use human approval to formalize the reward function. For example, the AlphaGo algorithm could not be trained by maximizing each move’s approval since some of its moves looked strange or incorrect to human experts. Irving, Christiano, and Amodei ([2018](#bib.bib6)) suggest addressing this issue by using *AI debate*. In their proposal, two AI systems are tasked with producing answers to a vague and complex question and then debating the merits of their answers before a human judge. After considering the arguments brought forward, the human approves one of the answers, thereby allocating reward to the AI system that generated it. We can apply AI debate to a wide range of questions: (1) what is the solution of a system of algebraic equations, (2) which restaurant should I visit today for dinner, or (3) which of two immigration policies is more socially beneficial. Moreover, (4) a Go match can be viewed as a debate, where each move is an argument claiming “my strategy is the better one”, and the winner of the Go-game is called the winner of the debate. In debates (1) and (4) it is straightforward to ascertain which debater won, and so the most convincing answer always coincides with the most accurate one. In other debates, such as (2) and (3), misleading arguments may allow a compelling lie to defeat the correct answer. This raises two central questions for our work: under what circumstances does AI debate track truth? And how can debates be designed in order for accurate answers to prevail over less accurate ones? While researchers have started exploring these questions empirically, the theoretical investigation of AI debate has, at the time of writing this text, mostly been neglected. The aim of this paper is to begin filling this gap by providing a theoretical framework for reasoning about AI debate, analyzing its basic properties, and identifying further questions that need to be addressed. So that the work is easy to interpret, we tend to err toward explaining each phenomenon in the simplest model possible, while sketching the extensions necessary to make each model more realistic. The paper is structured as follows. Section [2](#S2 "2 The Debate Game Framework ‣ (When) Is Truth-telling Favored in AI Debate?"), introduces our model, the *debate game*, and formalizes the problem of designing debate games that promote true answers. In Section [3](#S3 "3 Feature Debate ‣ (When) Is Truth-telling Favored in AI Debate?"), we describe *feature debates*, an instance of the debate game model where the debaters are only allowed to make statements about “elementary features” of the world. 
Section [4](#S4 "4 When Do Feature Debates Track Truth? ‣ (When) Is Truth-telling Favored in AI Debate?") investigates which feature debates are truth promoting. Section [5](#S5 "5 Two Important Special Cases of Debate ‣ (When) Is Truth-telling Favored in AI Debate?") continues by analyzing two important subclasses of general debates: those with “independent evidence” and those where the judge’s information bandwidth is limited. Section [6](#S6 "6 Limitations and Future Work ‣ (When) Is Truth-telling Favored in AI Debate?") flags important limitations of the feature debate model and gives suggestions for future work. Finally, we review relevant literature (Section [7](#S7 "7 Related Work ‣ (When) Is Truth-telling Favored in AI Debate?")) and conclude (Section [8](#S8 "8 Conclusion ‣ (When) Is Truth-telling Favored in AI Debate?")). The full proofs are presented in Appendix [A](#A1 "Appendix A Proofs ‣ (When) Is Truth-telling Favored in AI Debate?"). 2 The Debate Game Framework ---------------------------- ### 2.1 Debate Games A debate game (Definition [2](#Thmtheorem2 "Definition 2 (Debate and debate game). ‣ The design elements of debate ‣ 2.1 Debate Games ‣ 2 The Debate Game Framework ‣ (When) Is Truth-telling Favored in AI Debate?")) is a zero-sum game111A two-player zero-sum game is one where the utilities satisfy u2=−u1subscript𝑢2subscript𝑢1u\_{2}=-u\_{1}italic\_u start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT = - italic\_u start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT. As a result, it suffices to consider the first player’s utility u1subscript𝑢1u\_{1}italic\_u start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT, and assume that player 2222 is trying to minimize this number. played between two AI systems that proceeds as follows: the human asks a question about the world, then two AI debaters generate answers and argue that their answer is the better one. Finally, a judge, typically human, uses this dialogue to decide which answer is stronger and allocates the greater share of reward to the debater who produced that answer. This section formalizes debate games in three steps. First, a definition is given for the *debate environment* — the parameters of a debate game that designers generally cannot change — then the *design elements* — those parameters that can be changed — and finally, the debate game. The motivation behind the *debate environment* and *design elements* will be clearer once the debate game is defined (Definition [2](#Thmtheorem2 "Definition 2 (Debate and debate game). ‣ The design elements of debate ‣ 2.1 Debate Games ‣ 2 The Debate Game Framework ‣ (When) Is Truth-telling Favored in AI Debate?")). ##### The debate environment ###### Definition 1 (Debate environment). A debate environment is a tuple 𝔼=⟨𝒲,π,𝒬,𝒜,τ,ℰ⟩𝔼𝒲𝜋𝒬𝒜𝜏ℰ\mathbb{E}=\left<{\mathcal{W}},\pi,{\mathcal{Q}},{\mathcal{A}},{\tau},\mathcal{E}\right>blackboard\_E = ⟨ caligraphic\_W , italic\_π , caligraphic\_Q , caligraphic\_A , italic\_τ , caligraphic\_E ⟩ which consists of: * • An arbitrary set 𝒲𝒲{\mathcal{W}}caligraphic\_W of worlds and a prior distribution π∈Δ(𝒲)𝜋Δ𝒲\pi\in\Delta({\mathcal{W}})italic\_π ∈ roman\_Δ ( caligraphic\_W ) from which the current world is sampled. * • A set 𝒬𝒬{\mathcal{Q}}caligraphic\_Q of questions, where each q∈𝒬𝑞𝒬q\in{\mathcal{Q}}italic\_q ∈ caligraphic\_Q is a text string. * • An arbitrary set 𝒜𝒜{\mathcal{A}}caligraphic\_A of answers. 
* A mapping $\tau : \mathcal{Q} \times \mathcal{A} \times \mathcal{W} \to [0,\infty)$ which measures the deviation of an answer from the truth about the world.
* A set $\mathcal{E}$ of experiments $e : \mathcal{W} \to 2^{\mathcal{W}}$ the judge can perform to learn that the current world $w$ belongs to $e(w)$.

One example of a debate is a highly general case where $\mathcal{W}$ is the set of all the ways our environment might be, $\mathcal{Q}$ is the set of questions we might ask, and $\mathcal{A}$ are the textual responses that AI debaters might produce. The mapping $\tau$ represents the deviation from the question's true answer, while experiments constitute a cheaper (and possibly less reliable) way of obtaining information. For the question *Which restaurant should I go to?*, $\tau(w,q,a)$ could indicate how dissatisfied one would be at each restaurant $a$, and an experiment could indicate my preference between two restaurants by comparing their menus. We can also consider much more specific cases. For example, $\mathcal{W}$ may represent the set of all legal board positions in Go, and $q$ asks "What is the optimal next move?" For many kinds of questions, we may set $\tau(\cdot) = 0$ for all correct answers and $\tau(\cdot) = 1$ for incorrect or invalid ones.

##### The design elements of debate

There are some rules for a debate that a designer can control, called the *design elements*:

* the choice of question $q$ and legal answers $\mathcal{A}(q) \subset \mathcal{A}$,
* communication language $\mathcal{L}_c$ (an arbitrary set),
* argumentation protocol $P : \mathcal{Q} \times \mathcal{A}^2 \times \mathcal{L}_c^* \to 2^{\mathcal{L}_c}$,
* termination condition $\mathcal{T} \subset \mathcal{Q} \times \mathcal{A}^2 \times \mathcal{L}_c^*$,
* experiment-selection policy $E : \mathcal{T} \to \mathcal{E}$, and
* debating incentives $u_i : \mathcal{T} \times 2^{\mathcal{W}} \to [-1,1]$.
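
To make the split between the environment and the design elements concrete, the following is a minimal sketch of how the two tuples might be represented in code. All class and field names here are our own illustration rather than anything prescribed by the definitions above; experiments are modelled as callables returning the information set $e(w)$, and functions with several arguments are typed loosely with `Callable[..., ...]` for brevity.

```python
from dataclasses import dataclass
from typing import Callable, Sequence, Set, Tuple

World = Tuple[float, ...]   # stand-in for an element of the world set W
Answer = float              # stand-in for an element of the answer set A
Argument = int              # stand-in for an element of the language L_c


@dataclass
class DebateEnvironment:
    """The parameters a designer generally cannot change (Definition 1)."""
    sample_world: Callable[[], World]                     # draws w ~ pi
    questions: Sequence[str]                              # the question set Q
    answers: Sequence[Answer]                             # the answer set A
    deviation: Callable[[str, Answer, World], float]      # tau(q, a, w) >= 0
    experiments: Sequence[Callable[[World], Set[World]]]  # each e : W -> 2^W


@dataclass
class DebateDesign:
    """The rules a designer can choose (the design elements)."""
    question: str
    legal_answers: Callable[[str], Sequence[Answer]]  # A(q)
    protocol: Callable[..., Sequence[Argument]]       # legal next arguments
    is_terminal: Callable[..., bool]                  # termination condition T
    select_experiment: Callable[..., int]             # index of the experiment to run
    utility_1: Callable[..., float]                   # u_1; u_2 = -u_1 (zero-sum)
```

A concrete debate then pairs one `DebateEnvironment` with one `DebateDesign` and runs the protocol spelled out in Definition 2 below.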
In practice, some of these rules will be hard-coded. For example, a designer may restrict the answers to $\mathcal{A} = \{\text{"yes"}, \text{"no"}\}$ or make it physically impossible for the debating agents to communicate in anything other than binary code ($\mathcal{L}_c = \{0,1\}^*$). On the other hand, the designer may outsource the implementation of some rules to a human judge. For example, a designer can automatically prohibit repetition of a particular string, but the prohibition of rephrasing previously made points needs to be delegated to the judge. Similarly, a debate can automatically terminate after $N$ rounds, but a judge is needed to end it once the debaters no longer say anything relevant. (We can distinguish between the designer of the debate, the person who selects the question $q$, and the judge who determines its winner; in practice these may be different people or the same person. While the present text does not analyze the role of the human judge in detail, we believe that for such an analysis it is useful to view the judge not as a player in the debate game, but rather as some $J \in \mathcal{J}$ which parametrizes $\mathcal{A}(q) = \mathcal{A}^J(q)$, $P = P^J$, $\mathcal{T} = \mathcal{T}^J$, $E = E^J$, and $u_i = u_i^J$ (but *not* $\mathbb{E}$ and $\mathcal{L}_c$). We do, however, sometimes anthropomorphize parts of debate by speaking as if they were performed by a judge.)

Using the *debate environment* and *design elements*, a debate game can be formalized as follows:

###### Definition 2 (Debate and debate game).

A debate is a tuple $D = \langle \mathbb{E}, q, G \rangle$, where $\mathbb{E}$ is a debate environment, $q \in \mathcal{Q}$ is a question, and $G = (G_q)_{q \in \mathcal{Q}}$ is a debate game. Formally, each $G_q$ is a two-player zero-sum extensive-form game that proceeds as follows (extensive-form games, EFGs, are a standard model of sequential decision-making, described, for example, in Osborne and Rubinstein (1994); partially observable stochastic games (Hansen, Bernstein, and Zilberstein, 2004) constitute an equally valid (Kovařík et al., 2019) choice of model):
1. The world $w$ is sampled from $\pi$ and shown to debaters 1 and 2 together with the question $q$.
2. The debaters *simultaneously* pick answers $a_1, a_2 \in \mathcal{A}(q)$.
3. The debaters alternate making arguments $x_1, x_2, \ldots \in \mathcal{L}_c$ (debater 1 makes the odd-numbered arguments $x_1, x_3, \dots$ while debater 2 makes $x_2, x_4, \dots$), where $x_j \in P(q, a_1, a_2, x_1, \dots, x_{j-1})$, stopping once $(q, a_1, a_2, x_1, x_2, \dots) = t \in \mathcal{T}$ is a terminated dialogue.
4. A single experiment $e = E(t)$ is selected and its result $e(w)$ is used as a context for the next step.
5. The debaters receive utilities $u_1(t, e(w)) \in [-1,1]$ and $u_2(t, e(w)) = -u_1(t, e(w))$.
6. The answer of the debater with higher utility becomes the outcome $o(w,t)$ of $D$ (with ties broken randomly).
7. The debate error is the resulting deviation $\tau(q, w, o(w,t))$ between the outcome of $D$ and the true answer to $q$.

The model makes several simplifications, but can be generalized in the following straightforward ways:

###### Remark 3 (Natural extensions of debate games).

Debate games may be generalized with:

* Non-simultaneous answers. The roles in the answering phase might be asymmetric, in that one debater might see their opponent's answer before selecting their own (as in, e.g., the Devil's advocate AI of Armstrong, 2017a).
* Debaters with imperfect information.
Instead of having perfect information about $w$, the debaters might only receive some imperfect observation $I_1(w)$, $I_2(w)$. This limitation is particularly relevant for scenarios involving human preferences, such as the restaurant example.
* Judge interaction. A judge that asks questions and makes other interjections may be added as a "chance" player $J$ with a fixed strategy.
* Dangerous experiments. In the real world, some experiments might have dangerous side-effects. This may be modelled by considering experiments $e : \mathcal{W} \to 2^{\mathcal{W}} \cup [-\infty,\infty]^3$ which sometimes just give information, but other times bypass the debate and assign the utilities $u_1$, $u_2$ and the debate error directly.
* Generalized outcome-selection policies. Instead of always adopting the more-favoured answer, the judge may adopt an answer according to a mapping $o : \mathcal{T} \times 2^{\mathcal{W}} \to \mathcal{A}$, or may even be given the option of ignoring suspicious or uninformative debates.

### 2.2 Properties of debate games

#### Debate phases and relation to game theory.

For the purpose of modelling the debaters' actions, we distinguish between the answering phase (step 2 of Definition 2) and the argumentation phase (step 3). Once $q$, $w$, and $(a_1, a_2)$ get fixed at the start of the argumentation phase, the utilities of the debaters only depend on the subsequent arguments $x_j$ raised by the agents. Since the agents have full information about each argument, the *argumentation phase is a two-player zero-sum sequential game with perfect information*. We denote this game $G_{qwa_1a_2}$. To analyze the answering phase, we first recall an important property of two-player zero-sum games: in every such game $G'$, all Nash-equilibrium strategies $\sigma^*$ result in the same expected utility $\mathbf{E}_{\sigma^*} u_1$ (Shoham and Leyton-Brown, 2008, Thm 3.4.4).
This number is called the value of the game and denoted $v^*$. Assuming optimal argumentation strategies, each debater thus knows that playing the argumentation game $G_{qwa_1a_2}$ results in some utility $v^*_{qwa_1a_2}$. This allows them to abstract away the argumentation phase. By randomizing the order of argumentation and treating both debaters equally, we can further ensure that $v^*_{qwa_2a_1} = -v^*_{qwa_1a_2}$. As a result, each *answering phase is a symmetric two-player zero-sum matrix game* with actions $\mathcal{A}(q)$ and payoffs $v^*_{qwa_1a_2}$ (to player 1). These observations have an important implication: fully general EFGs might contain complications that make finding their solutions difficult. However, both the answering game and the argumentation game belong to highly specific and well-understood subclasses of EFGs and are thus amenable to simpler solution techniques.
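
For intuition about why the answering phase is easy to handle once the argumentation values are known, here is a small sketch (our own illustration, using SciPy's linear-programming routine, not part of the paper's formal development) that computes the value and a maximin answering strategy of a zero-sum matrix game whose entries play the role of the argumentation values $v^*_{qwa_1a_2}$ for a fixed question and world.

```python
import numpy as np
from scipy.optimize import linprog


def solve_zero_sum(payoffs: np.ndarray):
    """Value and maximin mixed strategy of the row player in a zero-sum matrix game.

    payoffs[i, j] is the row player's utility when row answer i meets column
    answer j (for debate: the argumentation value for answers a_i vs a_j)."""
    n_rows, n_cols = payoffs.shape
    # Decision variables: (x_1, ..., x_n, v).  Maximize v  <=>  minimize -v.
    c = np.zeros(n_rows + 1)
    c[-1] = -1.0
    # For every column j:  v - sum_i payoffs[i, j] * x_i <= 0.
    A_ub = np.hstack([-payoffs.T, np.ones((n_cols, 1))])
    b_ub = np.zeros(n_cols)
    # The strategy is a probability distribution.
    A_eq = np.hstack([np.ones((1, n_rows)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n_rows + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:-1]


# A toy answering game between two candidate answers.  The matrix is
# antisymmetric, so the game is symmetric and its value is 0.
toy = np.array([[0.0, 0.3],
                [-0.3, 0.0]])
value, strategy = solve_zero_sum(toy)
print(value, strategy)   # value 0.0, and the first answer is (weakly) optimal
```

The hard part, which the rest of the paper focuses on, is understanding what those argumentation values look like, i.e., where the judge's final belief ends up under optimal argumentation.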
#### Measuring the usefulness of debate.

We measure the suitability of a debate design by the degree to which optimal play by the debaters results in low debate error. By default, we focus on the worst case, where both the world and the debate outcome are selected adversarially from the support of their respective distributions. We denote the *support* of a probability measure $p$ by $\textrm{supp}(p)$.

###### Definition 4 (Truth promotion).

In the following, $D = \langle \mathbb{E}, q, G \rangle$ is a debate, $\epsilon \geq 0$, and $w$ always denotes some element of $\textrm{supp}(\pi)$, $\sigma$ a Nash-equilibrium strategy in $G$, and $t$ a terminal dialogue compatible with $\sigma$ in $w$. $D$ is said to be:

* $\epsilon$-truth promoting (in the worst case) in $w$ if for each $\sigma$, we have $\sup_t \tau(q, o(t,w), w) \leq \epsilon$,
* $\epsilon$-truth promoting if it is $\epsilon$-truth promoting in every $w$,
* and $\epsilon$-truth promoting in expectation if for each $\sigma$, we have $\mathbf{E}_{w \sim \pi} \mathbf{E}_{t \sim \sigma}\, \tau(q, o(t,w), w) \leq \epsilon$.

When a debate is $0$-truth promoting, we refer to it simply as "truth-promoting". Finally, we formalize the *idealized* version of the design problem as follows:

###### Problem 5 (When is debate truth promoting?).

For a given $\langle \mathbb{E}, \cdot, G \rangle$, characterize those $q \in \mathcal{Q}$ for which any optimal strategy in $\langle \mathbb{E}, q, G \rangle$ only gives answers with $\tau(q, a, w) = 0$.

3 Feature Debate
-----------------

In order to explore the properties that debates can have, it is useful to have a toy version of the general framework. In this section, we discuss how questions can be represented as functions of "elementary features" of the world and describe a simple debate game in which the arguments are restricted to revealing these features. This is inspired by Irving, Christiano, and Amodei (2018), where each world is an image from the MNIST database, the elementary features are pixels, and the question is "Which digit is this?". Rather than faithfully capturing all important aspects of debate, the purpose of the toy model is to provide a *simple* setting for investigating *some* aspects. The limitations of the model are further discussed in Section 6.

### 3.1 Defining Feature Debate

#### Questions about functions.

Many questions are *naturally* expressed as enquiries $q_f$ about some $f : \mathcal{W} \to \mathcal{X}$,

$$q_f := \text{"What is the output of } f \text{?"} \qquad (1)$$

and come accompanied by the answer space $\mathcal{A} = \mathcal{X}$ (or $\mathcal{A} = \Delta(\mathcal{X})$). Examples include questions of measurement ("How far is the Moon?", "How much do I weigh?") and classification ("Which digit is on this picture?", "Will person $A$ beat person $B$ in a poker game?").
For simplicity, we focus on questions $q_f$ about functions $f : \mathcal{W} \to [0,1] = \mathcal{X}$ and the truth-mapping $\tau(q, a, w) := |f(w) - a|$. These assumptions are not very restrictive: they include binary questions of the type "Is $Y$ true?" ($\mathcal{X} = \{0,1\}$), and their generalizations "How likely is $Y$ to be true?" and "To what degree is $Y$ true?". Any debate about an "$n$-dimensional question" can be reduced to $n$ "$1$-dimensional" debates, and any function $f : \mathcal{W} \to \mathbb{R}$ can be re-scaled to have range $[0,1]$.

#### Feature Debate and Its Basic Properties.

In feature debate, we assume that worlds are fully described by their elementary features, i.e. we suppose that $\mathcal{W} = \Pi_{i=1}^{\infty} [0,1]$ and denote by $W_i : w = (w_i)_{i=1}^{\infty} \in \mathcal{W} \mapsto w_i$ the $i$-th feature projection. Moreover, we assume that each round consists of each debater making one argument of the form "the value of the $i$-th feature $W_i$ is $x$". We consider a judge who can experimentally verify any elementary feature (but no more than one per debate):

###### Definition 6 (Feature debate environment).

A feature debate environment $\mathbb{F}_\pi$ is a debate environment where:

* The prior distribution $\pi$ is a (Borel) probability measure on $\mathcal{W} = [0,1]^{\mathbb{N}}$.
* Each question pertains to a function $f$, i.e. $\mathcal{Q} = \{ q_f \mid f : \mathcal{W} \to [0,1] \text{ measurable} \}$.
* The answers are $\mathcal{A} = [0,1]$.
* The deviation from truth is the distance $\tau(q_f, a, w) = |f(w) - a|$ between the answer $a$ and the truth $f(w)$.
* Each experiment reveals one feature, i.e.
$\mathcal{E} = \{ e_i \mid i \in \mathbb{N} \}$, where $e_i(w) := \{ \tilde w \mid \tilde w_i = w_i \}$.

For full generality, we may want to assume the debaters can lie about the features, but for the analysis in this paper, we ignore this case. The reason is that if the opponent can point out a lie and the judge can then test and penalize it, uttering the lie will be sub-optimal. We thus only consider truthful claims of the form "$W_i = w_i$". With a slight abuse of notation, this allows us to identify the communication language $\mathcal{L}_c$ with the feature-indexing set $\mathbb{N}$. Any argument sequence $\vec i := \vec i_k := (i_1, \dots, i_k)$ thus effectively reveals the corresponding features. (We write "$W_{\vec i} = w_{\vec i}$".) We suppose the judge has access to the world-distribution $\pi \in \Delta(\mathcal{W})$ and can update it correctly on new information, but only has the "patience" to process $2N$ pieces of evidence. Finally, the debaters are penalized for any deviation between their answer and the judge's final belief and, to make the debate zero-sum, are rewarded for any deviation of their opponent's answer. Adopting the $q_f$ shorthand introduced earlier, the formal definition is as follows:

###### Definition 7 (Feature debate).

A feature debate $F_\pi(f,N) = \langle \mathbb{F}_\pi, q_f, G \rangle$ is a debate with the following specific rules:

* A randomly selected player makes the first argument.
* $\mathcal{L}_c = \mathbb{N}$ and $P(f, a_1, a_2, i_1, \dots, i_k) := \mathbb{N} \setminus \{i_1, \dots, i_k\}$.
* After $2N$ arguments have been made, the judge sets $u_1(t, w_{\vec i}) := |\hat f(w_{\vec i}) - a_2| - |\hat f(w_{\vec i}) - a_1|$, where $\hat f(w_{\vec i})$ is the posterior mean $\hat f(w_{\vec i}) := \mathbf{E}_\pi \left[ f \mid W_{\vec i} = w_{\vec i} \right]$.
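
To make the judge's update rule concrete, here is a small sketch (ours, not from the paper) that computes the posterior mean $\hat f(w_{\vec i})$ exactly for a simple finite prior: worlds are vectors of Boolean features, the prior is uniform over all such vectors of a fixed length, and conditioning on the revealed features amounts to averaging $f$ over the worlds that agree with them.

```python
from itertools import product
from typing import Callable, Dict, Sequence

World = Sequence[int]


def posterior_mean(f: Callable[[World], float],
                   revealed: Dict[int, int],
                   n_features: int) -> float:
    """E_pi[ f | W_i = w_i for revealed i ], with pi uniform on {0,1}^n."""
    worlds = [w for w in product((0, 1), repeat=n_features)
              if all(w[i] == v for i, v in revealed.items())]
    return sum(f(w) for w in worlds) / len(worlds)


# Example: f is the conjunction of the first three features.
conj = lambda w: float(w[0] and w[1] and w[2])

print(posterior_mean(conj, {}, n_features=4))            # prior belief: 0.125
print(posterior_mean(conj, {0: 1, 1: 1}, n_features=4))  # after two favourable arguments: 0.5
print(posterior_mean(conj, {0: 0}, n_features=4))        # one unfavourable feature: 0.0
```

In the debates below, each argument adds one entry to `revealed`, and the utilities of Definition 7 compare the two answers against this posterior mean after $2N$ arguments.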
### 3.2 Optimal play in feature debate

The zero-sum assumption implies that any shift in $\hat f(w_{\vec i})$ will be endorsed by one debater and opposed by the other (or both will be indifferent). The following symbols denote the two extreme values that the judge's final belief can take, depending on whether the debater who makes the first argument aims for high values of $\hat f(w_{\vec i})$ and the second debater aims for low ones ($\uparrow\downarrow$) or vice versa ($\downarrow\uparrow$):

$$\hat f^{\uparrow\downarrow}_N(w) := \max_{i_1} \min_{i_2} \dots \max_{i_{2N-1}} \min_{i_{2N}} \hat f(w_{\vec i}), \qquad \hat f^{\downarrow\uparrow}_N(w) := \min_{i_1} \max_{i_2} \dots \min_{i_{2N-1}} \max_{i_{2N}} \hat f(w_{\vec i}).$$
Since $\max_x \min_y \varphi(x,y) \leq \min_y \max_x \varphi(x,y)$ holds for any $\varphi$, the second debater always has an edge: $(\forall w \in \mathcal{W}):\ \hat f^{\uparrow\downarrow}_N(w) \leq \hat f^{\downarrow\uparrow}_N(w)$. Lemma 8 (i) shows that if the order of argumentation is randomized as in Definition 7, the optimal answers lie precisely within these bounds. This result immediately yields a general error bound (ii) which will serve as an essential tool for the further analysis of feature debate.

###### Lemma 8 (Optimal play in feature debate).

(i) The optimal answering strategies in $F_\pi(f,N)$ are precisely all those that select answers from the interval $[\hat f^{\uparrow\downarrow}_N(w), \hat f^{\downarrow\uparrow}_N(w)]$.

(ii) In particular, $F_\pi(f,N)$ is precisely $\max\{ |\hat f^{\uparrow\downarrow}_N(w) - f(w)|,\ |\hat f^{\downarrow\uparrow}_N(w) - f(w)| \}$-truth promoting in $w$.
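
As a quick illustration of Lemma 8 (again our own sketch rather than material from the paper), the bounds $\hat f^{\uparrow\downarrow}_N$ and $\hat f^{\downarrow\uparrow}_N$ can be computed by brute force for short debates over a handful of Boolean features with a uniform prior, by recursing over which feature the player to move reveals next.

```python
from itertools import product
from typing import Callable, Dict, Sequence

World = Sequence[int]


def posterior_mean(f: Callable[[World], float],
                   revealed: Dict[int, int], n: int) -> float:
    """E_pi[f | revealed features], with pi uniform on {0,1}^n."""
    worlds = [w for w in product((0, 1), repeat=n)
              if all(w[i] == v for i, v in revealed.items())]
    return sum(f(w) for w in worlds) / len(worlds)


def debate_value(f, w: World, revealed: Dict[int, int],
                 moves_left: int, maximize: bool) -> float:
    """Judge's final belief under optimal alternating play (truthful arguments only)."""
    if moves_left == 0:
        return posterior_mean(f, revealed, len(w))
    best = max if maximize else min
    return best(
        debate_value(f, w, {**revealed, i: w[i]}, moves_left - 1, not maximize)
        for i in range(len(w)) if i not in revealed
    )


def lemma8_bounds(f, w: World, rounds: int):
    """(f_hat up-down, f_hat down-up) for a debate with 2*rounds arguments."""
    up_down = debate_value(f, w, {}, 2 * rounds, maximize=True)
    down_up = debate_value(f, w, {}, 2 * rounds, maximize=False)
    return up_down, down_up


# Conjunction of the first three of six features, in a world where it is true.
conj = lambda w: float(w[0] and w[1] and w[2])
w = (1, 1, 1, 0, 0, 0)
print(lemma8_bounds(conj, w, rounds=1))  # (0.25, 0.25): the only optimal answer is 0.25
print(lemma8_bounds(conj, w, rounds=2))  # (0.5, 0.5): still far from the true value 1.0
```

With three rounds the maximizing debater can reveal all three relevant features single-handedly and both bounds collapse to the true value $1$; with fewer rounds, even optimal honest play cannot move the judge to the truth, foreshadowing the "unfair questions" of Section 4.2.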
4 When Do Feature Debates Track Truth?
---------------------------------------

In this section, we assess whether feature debates track truth under a range of assumptions.

### 4.1 Truth-Promotion and Critical Debate-Length

Some general debates might be so "biased" that no matter how many arguments an honest debater uses, they will not be able to convince the judge of the true answer. Proposition 9 ensures that this is not the case in a typical feature debate:

###### Proposition 9 (Sufficient debate length).

$F_\pi(f,N)$ is truth-promoting for functions that depend on $\leq N$ features.

###### Proof.

In an $N$-round debate about a question that depends on $\leq N$ features, either of the players can unilaterally decide to reveal all relevant information, ensuring that $\hat f(w_{\vec i_{2N}}) = f(w)$. This implies that $\hat f^{\uparrow\downarrow}_N(w) = \hat f^{\downarrow\uparrow}_N(w) = f(w)$. The result then follows from Lemma 8. ∎

However, Proposition 9 is tight in the sense that if the number of rounds is smaller than the number of critical features, the resulting debate error might be very high.

###### Proposition 10 (Necessary debate length).

When $f$ depends on $N+1$ features, the debate error in $F_\pi(f,N)$ can be $1$ (i.e., maximally bad) in the worst-case world and equal to $\frac{1}{2}$ in expectation (even for continuous $f$).

The "counterexample questions" which this result relies on are presented in the following Section 4.2. The formal proof and the continuous case are given in the appendix.

### 4.2 Three Kinds of Very Difficult Questions

We now construct three classes of questions which cause debate to perform especially poorly, in ways that are analogous to failures of realistic debates. (While we focus on results in worst-case worlds, the illustrated behaviour might become the norm with a biased judge; see Section 6.2.)
#### Unfair questions.

A question may be difficult to debate when arguing for one side requires more complex arguments. Indeed, consider a feature debate in a world $w$ uniformly sampled from the Boolean-featured worlds $\{0,1\}^{\mathbb{N}}$, and suppose the debate asks about the conjunctive function $\varphi := W_1 \land \ldots \land W_K$ for some $K \in \mathbb{N}$. In worlds with $w_1 = \dots = w_K = 1$, an honest debater has to reveal $K$ features to prove that $\varphi(w) = 1$. On the other hand, a debater arguing for the false answer $a = 0$ merely needs to avoid helping their opponent by revealing the features $W_1, \dots, W_K$. In particular, this setup shows that a debate might indeed require as many rounds as there are relevant features, thus proving the worst-case part of Proposition 10.

#### Unstable debates.

Even if a question does not bias the debate against the true answer as above, the debate outcome might still be uncertain until the very end. One way this could happen is if the judge always feels that more information is required to get the answer right. Alternatively, every new argument might come as a surprise to the judge, and be so persuasive that the judge ends up always taking the side of whichever debater spoke more recently. To see how this behavior can arise in our model, consider the function $\psi := \textnormal{xor}(W_1, \dots, W_K)$ defined on worlds with Boolean features, and the world $w = (1,1,\dots)$. (Recall that $\psi$ has value $0$ or $1$, depending on whether the number of features $i \leq K$ with $w_i = 1$ is even or odd.)
If the world distribution $\pi$ is uniform over $\{0,1\}^{\mathbb{N}}$, the judge will reason that no matter what the debaters say, the last unrevealed feature from the set $\{W_1, \dots, W_K\}$ always has an equal chance of flipping the value of $\psi$ and keeping it the same, resulting in $\hat\psi(w_{\vec i}) = \frac{1}{2}$. As a result, the only optimal way of playing $F_\pi(\psi, N)$ is to give the wrong answer $a = \frac{1}{2}$, unless a single debater can, by themselves, reveal all features $W_1, \dots, W_K$. This happens precisely when $K \leq N$. In particular, the case where $K = N+1$ proves the "in expectation" part of Proposition 10.

To achieve the "always surprised and oscillating" pattern, we consider a prior $\pi$ under which each feature $w_i$ is sampled independently from $\{0,1\}$, but in a way that is skewed towards $W_i = 0$ (e.g., $\mathbf{Pr}[W_i = 0] = 1 - \delta$ for some small $\delta > 0$). The result of this bias is that no matter which features get revealed, the judge will always be more likely to believe that "no more features with value $W_i = 1$ are coming"; in other words, the judge will be very confident in their belief while, at the same time, shifting this belief from $0$ to $1$ and back each round.
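
The xor example is easy to check numerically with the same brute-force machinery as before (again a sketch of ours, with a uniform prior over a small finite feature set): as long as at least one of the first $K$ features stays hidden, the posterior mean sticks to $\frac{1}{2}$, so no sequence of fewer than $K$ truthful arguments moves the judge at all.

```python
from itertools import product


def posterior_mean(f, revealed, n):
    """E_pi[f | revealed features], with pi uniform on {0,1}^n."""
    worlds = [w for w in product((0, 1), repeat=n)
              if all(w[i] == v for i, v in revealed.items())]
    return sum(f(w) for w in worlds) / len(worlds)


K, n = 3, 5
xor = lambda w: float(sum(w[:K]) % 2)   # psi = xor(W_1, ..., W_K)
w = (1,) * n                            # here the true value of psi is 1

print(posterior_mean(xor, {}, n))                   # 0.5
print(posterior_mean(xor, {0: 1}, n))               # 0.5 (one relevant feature revealed)
print(posterior_mean(xor, {0: 1, 1: 1}, n))         # 0.5 (two revealed, one still hidden)
print(posterior_mean(xor, {0: 1, 1: 1, 2: 1}, n))   # 1.0 (all K relevant features revealed)
```

In a debate with $N < K$ rounds, neither debater can reveal all $K$ relevant features on their own, so by Lemma 8 the only optimal answer is $\frac{1}{2}$ no matter what the true value is; this is exactly the failure mode behind the "in expectation" part of Proposition 10.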
#### Distracting evidence.

For some questions, there are misleading arguments that appear plausible and then require extensive counter-argumentation to be proven false. By making such arguments, a dishonest debater can stall the debate until the judge "runs out of patience" and goes with their possibly wrong surface impression of the topic. To illustrate the idea, consider the uniform distribution $\pi$ over $\mathcal{W} = [0,1]^{\mathbb{N}}$ and a question $q_f$ about some $f : \mathcal{W} \to [0,1]$ that only depends on the first $K$ features. Suppose, for convenience of notation, that the debaters give answers $a_1 = 1$, $a_2 = 0$ and the sampled world is such that $f(w) = 1$ and $w_{K+1} = w_{K+2} = \dots = 1$. To adversarially modify $f$, we first define a function $S : [0,1]^2 \to [0,1]$ as $S(x,y) = 1$ if either $x \neq 1$ or $x = y = 1$, and as $S(x,y) = 0$ otherwise. By replacing $f$ with $f'(w) := f(w)\, S(w_m, w_n)$, where $n > m > K$, $S$ introduces an "unlikely problem" $x = 1$ and an "equally unlikely fix" $y = 1$, thus allowing the dishonest player 2 to "stall" for one round. Indeed, the presence of $S(\cdot,\cdot)$ doesn't initially affect the expected value of the function in any way. However, if player 2 reveals that $W_m = w_m = 1$, the expectation immediately drops to $0$, forcing player 1 to "waste one round" by revealing that $W_n = w_n = 1$. To make matters worse yet, we could instead ask about $f(w)\, \Pi_{i=1}^{d} S(w_{m_i}, w_{n_i})$, or use a more powerful stalling function $S(x, y_1 \land \dots \land y_k)$ that forces the honest player to waste $k$ rounds to "explain away" a single argument of the opponent.

### 4.3 Detecting Debate Failures

When a debater is certain that their opponent will not get a chance to react, they can get away with making much bolder claims. (Conversely, some realistic debates might provide a first-mover advantage due to anchoring and framing effects.)
The resulting "unfairness" is not a direct source of concern because the order of play can easily be randomized or made simultaneous. However, we may wish to measure the last-mover advantage in order to detect whether a debate is tracking truth as intended. The proof of Lemma 8 (in particular, equation (5)) yields the following result:

###### Corollary 11 (Last-mover advantage).

If optimal debaters in the feature debate $F_\pi(f,N)$ give answers $a_1, a_2 \in [\hat f^{\uparrow\downarrow}_N(w), \hat f^{\downarrow\uparrow}_N(w)]$, the debater who argues second will obtain $|a_1 - a_2|$ expected utility.

Recall that, by Lemma 8, all answers from the interval $[\hat f^{\uparrow\downarrow}_N(w), \hat f^{\downarrow\uparrow}_N(w)]$ are optimal. Corollary 11 thus implies that even if the agents debate optimally, some portion of their utility (up to $\delta := \hat f^{\downarrow\uparrow}_N(w) - \hat f^{\uparrow\downarrow}_N(w)$) depends on the randomized choice of argumentation order. (For illustration, a literally extreme case of last-mover advantage occurs in the "oscillatory" debate from Section 4.2
, where the interval $[\hat f^{\uparrow\downarrow}_N(w), \hat f^{\downarrow\uparrow}_N(w)]$ spans the whole answer space $[0,1]$.) Incidentally, Lemma 8 implies that the smallest possible debate error is $\delta/2$ (which occurs when the true answer is $f(w) = (\hat f^{\uparrow\downarrow}_N(w) + \hat f^{\downarrow\uparrow}_N(w))/2$). This relationship justifies a simple, common-sense heuristic: if the utility difference caused by reversing the argumentation order is significant, our debate is probably far from truth-promoting.

5 Two Important Special Cases of Debate
----------------------------------------

As a general principle, a narrower class of debates might allow for more detailed (and possibly stronger) guarantees. We describe two such sub-classes of *general* debates and illustrate their properties on variants of feature debate.

### 5.1 Debate with Independent Evidence

When evaluating a solution proposal in practice, we sometimes end up weighing its "pros" and "cons". In a way, we are viewing these arguments as (statistically) independent evidence related to the problem at hand. This is often a reasonable approximation, e.g., when deciding which car to buy, and sometimes an especially good one, e.g., when interpreting questionnaire results from different but independent respondents. We now show how to model these scenarios as feature debates with statistically independent features, and demonstrate their particularly favourable properties.

#### Feature debate with independent evidence.

As a mathematical model of such a setting, we consider $\mathcal{X} = \{0,1\}$, $\mathcal{W} := \Pi_{i=1}^{\infty}[0,1]$, and denote by $W_i : (w,x) \mapsto w_i \in [0,1]$ and $X : (w,x) \mapsto x \in \{0,1\}$ the coordinate projections in $\mathcal{W} \times \mathcal{X}$. Informally, we view the last coordinate as an unknown feature of the world, and the debate we construct will be asking "What is the value of this unknown feature?".
To enable inference about $X$, we consider some probability distribution $\mathbb{P}$ on $\mathcal{W} \times \mathcal{X}$. (For convenience, we assume $\mathbb{P}$ is discrete.) Finally, to be able to treat arguments of the form "$W_i = w_i$" as independent evidence related to $X$, we assume that the features $W_i$, $i \in \mathbb{N}$, are mutually independent conditional on $X$. (In other words, $\mathbb{P}(W_j = w_j \mid X = x)$ is equal to $\mathbb{P}(W_j = w_j \mid X = x, W_{\vec i_k} = w_{\vec i_k})$ for every $x$, $w_j$, and $w_{\vec i_k}$.)

To describe this setting as a feature debate, we define $\pi$ as the marginalization of $\mathbb{P}$ onto $\mathcal{W}$ and consider the question $q =$ "How likely is $x = 1$ in our world?", i.e., "What is the value of $f$?", where $f(w) := \mathbf{E}_{\mathbb{P}}[X \mid \mathcal{W} = w]$. We denote the resulting "independent evidence" feature debate $F_\pi(q,N)$ as $F^{\textnormal{ie}}(\mathbb{P}, N)$.

#### Judge's belief and its updates.

Firstly, recall that any probability can be represented using its odds form, which is equivalent to the corresponding log-odds form:

$$\mathbb{P}(A) \in [0,1] \ \longleftrightarrow\ \mathbb{P}(A)/\mathbb{P}(\neg A) \in [0,\infty] \ \longleftrightarrow\ \log\big(\mathbb{P}(A)/\mathbb{P}(\neg A)\big) \in [-\infty,\infty].$$
Moreover, when expressed in the log-odds form, Bayes’ rule states that updating one’s belief in hypothesis $H$ in light of evidence $A$ is equivalent to shifting the log-odds of the prior belief by $\log\left(\mathbb{P}(A\mid H)/\mathbb{P}(A\mid\neg H)\right)$. At any point in the debate $F^{\textnormal{ie}}(\mathbb{P},N)$, the judge’s belief $\hat{f}(w_{\vec{i}})=\mathbf{E}[f\mid W_{\vec{i}}=w]$ is, by the definition of $f$, equal to the conditional probability $\mathbb{P}(X=1\mid W_{\vec{i}}=w_{\vec{i}})$. To see how the belief develops over the course of the debate, denote by $\beta_{\vec{i}}(w)$ the corresponding log-odds form. Initially, $\hat{f}_{\emptyset}(w)$ is equal to the prior $\mathbb{P}(X=1)$, which corresponds to $\beta_{\emptyset}(w)=\log(\mathbb{P}(X=1)/\mathbb{P}(X=0))=:p_0$.
Denoting

$$\textrm{ev}_j(w):=\log\left(\frac{\mathbb{P}(W_j=w_j\mid X=1)}{\mathbb{P}(W_j=w_j\mid X=0)}\right),$$

the above form of Bayes’ rule implies that upon hearing an argument “$W_j=w_j$”, the judge will update their belief according to the formula $\beta_{\vec{i},j}(w)=\beta_{\vec{i}}(w)+\textrm{ev}_j(w)$. In other words, the arguments in $F^{\textnormal{ie}}(\mathbb{P},N)$ combine additively:

$$\beta_{\vec{i}_n}(w)=p_0+\textrm{ev}_{i_1}(w)+\dots+\textrm{ev}_{i_n}(w).\tag{2}$$
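
As an illustration (ours, not part of the original paper), the following minimal Python sketch shows how a Bayesian judge of this kind accumulates independent evidence in log-odds space; the function names and the example numbers are our own.

```python
import math

def log_odds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def to_prob(beta):
    """Convert log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-beta))

def judge_belief(prior, evidence):
    """Combine a prior P(X=1) with independent log-likelihood-ratio
    arguments ev_j(w) by simple addition, as in Equation (2)."""
    beta = log_odds(prior)
    for ev_j in evidence:
        beta += ev_j
    return to_prob(beta)

# Two arguments for "X = 1" and one against, starting from a uniform prior:
print(judge_belief(0.5, [0.8, 0.4, -0.6]))  # ~0.646
```
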
#### Optimal strategies and evidence strength.

Recall that positive (negative) log-odds correspond to probabilities closer to $1$ (resp. $0$). Equation ([2](#S5.E2)) thus suggests that for any $w$, the arguments $\mathbb{N}$ can be split into three “piles” from which the debaters select arguments: $\mathcal{N}_{\uparrow}:=\{j\in\mathbb{N}\mid\textrm{ev}_j(w)>0\}$ containing arguments supporting the answer “$X=1$ with probability $100\%$”, the pile $\mathcal{N}_{\downarrow}=\{j\in\mathbb{N}\mid\textrm{ev}_j(w)<0\}$ of arguments in favor of the opposite, and the irrelevant arguments $\mathcal{N}_{\textnormal{ir}}=\{j\in\mathbb{N}\mid\textrm{ev}_j(w)=0\}$. As long as the debaters give different answers, one of them will use arguments from $\mathcal{N}_{\downarrow}$, while the other will only use $\mathcal{N}_{\uparrow}(w)$ (both potentially falling back to $\mathcal{N}_{\textnormal{ir}}$ if their pile runs out). (Footnote 12: Formally, these argumentation incentives follow from the first paragraph in the proof of Lemma [8](#Thmtheorem8).) Moreover, a rational debater will always use the strongest arguments from their pile, i.e. those with the highest evidence strength $|\textrm{ev}_j(w)|$.
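
A small sketch (ours, not the paper’s) of this argument-selection behaviour; it assumes a finite list of evidence values purely for illustration.

```python
def split_into_piles(evidence):
    """Split log-likelihood-ratio arguments ev_j(w) into the three piles
    described above: supporting X=1, supporting X=0, and irrelevant."""
    n_up = [ev for ev in evidence if ev > 0]
    n_down = [ev for ev in evidence if ev < 0]
    n_ir = [ev for ev in evidence if ev == 0]
    return n_up, n_down, n_ir

def next_argument(own_pile, irrelevant):
    """A rational debater reveals the strongest remaining argument from their
    own pile, falling back to an irrelevant argument once the pile runs out."""
    if own_pile:
        strongest = max(own_pile, key=abs)
        own_pile.remove(strongest)
        return strongest
    return irrelevant.pop() if irrelevant else None
```
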
Correspondingly, we denote the “total evidence strength” that a player can muster in $n$ rounds as

$$\textrm{Ev}^{\uparrow}_n(w):=\max\left\{\sum\nolimits_{j\in J}\textrm{ev}_j(w)\mid J\subset\mathbb{N},\,|J|=n\right\}\ \textnormal{and}$$

$$\textrm{Ev}^{\downarrow}_n(w):=\max\left\{\sum\nolimits_{j\in J}(-\textrm{ev}_j(w))\mid J\subset\mathbb{N},\,|J|=n\right\}.$$

(To make the discussion meaningful, we assume the evidence sequence $(\textrm{ev}_j(w))_j$ is bounded and the maxima above are well-defined.) Equation ([2](#S5.E2)) implies that – among optimal debaters – one always selects arguments corresponding to $\textrm{Ev}^{\uparrow}_N(w)$ while the other aims for $\textrm{Ev}^{\downarrow}_N(w)$. Since this holds independently of the argumentation order, we get $\hat{f}^{\uparrow\downarrow}_N(w)=\hat{f}^{\downarrow\uparrow}_N(w)$. Together with Lemma [8](#Thmtheorem8), this observation yields the following result:

###### Corollary 12 (Unique optimal answer).
In the answering phase of any $F^{\textnormal{ie}}(\mathbb{P},N)$ with bounded evidence, the only Nash equilibrium is to select the $\hat{f}^{*}_N(w)$ which satisfies

$$\hat{f}^{\uparrow\downarrow}_N(w)=\hat{f}^{\downarrow\uparrow}_N(w)=\hat{f}^{*}_N(w):=\textnormal{the probability corresponding to the log-odds }p_0+\textrm{Ev}^{\uparrow}_N(w)-\textrm{Ev}^{\downarrow}_N(w).$$
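
For concreteness, here is a brief sketch (our illustration, again assuming a finite evidence list) of the equilibrium answer from Corollary 12:

```python
import math

def optimal_answer(p0, evidence, n_rounds):
    """The unique equilibrium answer of Corollary 12: the probability
    corresponding to the log-odds p0 + Ev_up_N(w) - Ev_down_N(w),
    where each debater musters their n_rounds strongest arguments."""
    ev_up = sorted((ev for ev in evidence if ev > 0), reverse=True)[:n_rounds]
    ev_down = sorted((-ev for ev in evidence if ev < 0), reverse=True)[:n_rounds]
    beta = p0 + sum(ev_up) - sum(ev_down)
    return 1.0 / (1.0 + math.exp(-beta))
```
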
#### Debate error.

To compute the debate error in $F^{\textnormal{ie}}(\mathbb{P},\cdot)$, denote the strength of the evidence that remains in each debater’s “evidence pile” after $n$ rounds as

$$R^{\uparrow}_n(w):=\sum\nolimits_{j\in\mathcal{N}_{\uparrow}}\textrm{ev}_j(w)-\textrm{Ev}^{\uparrow}_n(w),\tag{3}$$

$$R^{\downarrow}_n(w):=\sum\nolimits_{j\in\mathcal{N}_{\downarrow}}(-\textrm{ev}_j(w))-\textrm{Ev}^{\downarrow}_n(w).\tag{4}$$

Furthermore, assume that additionally to $(\textrm{ev}_j(w))_j$ being bounded, the total evidence $\lim_{n\to\infty}\textrm{Ev}^{a}_n(w)$ in favor of $a$ is infinite for at most one $a\in\{\uparrow,\downarrow\}$. (Footnote 13: Note that the intuition “$\lim_n R^{(\cdot)}_n(w)=0$” only fits if either $\mathcal{N}_a$ or $\mathcal{N}_{\textnormal{ir}}$ is infinite. If a debater eventually has to reveal evidence against their own case, the numbers $R^{(\cdot)}_n(w)$ will become negative.) Since the true answer $f(w)=\mathbb{P}(X=1\mid\mathcal{W}=w)$ corresponds to $p_0+\sum_{j\in\mathbb{N}}\textrm{ev}_j(w)=p_0+\sum_{j\in\mathcal{N}_{\uparrow}}\textrm{ev}_j(w)-\sum_{j\in\mathcal{N}_{\downarrow}}(-\textrm{ev}_j(w))$, *the difference between the (log-odds forms of) the judge’s final belief and the optimal answer is $R^{\uparrow}_N(w)-R^{\downarrow}_N(w)$.*
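
The following small sketch (ours; assuming a finite evidence list) computes this residual-evidence gap:

```python
def debate_error_log_odds(evidence, n_rounds):
    """R_up_N(w) - R_down_N(w): the log-odds gap left after N rounds, i.e. the
    evidence each side did not get to reveal (Equations (3) and (4))."""
    up = sorted((ev for ev in evidence if ev > 0), reverse=True)
    down = sorted((-ev for ev in evidence if ev < 0), reverse=True)
    r_up = sum(up[n_rounds:])      # the n_rounds strongest arguments were revealed
    r_down = sum(down[n_rounds:])  # what remains is the residual evidence
    return r_up - r_down
```
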
#### Early stopping and online estimation of the debate error.

If we further assume that the debaters reveal the strongest pieces of evidence first, we can predict a debate’s outcome before all $N$ rounds have passed. If the $n$-th argument of player $p$ has strength $|\textrm{ev}_i(w)|=:s_{p,n}(w)$, we know that further $N-n$ rounds of debate cannot reveal more than $(N-n)s_{p,n}(w)$ evidence in favor of $p$. This implies that as soon as the currently-losing player is no longer able to shift the judge’s belief beyond the midpoint between the initial answers, we can stop the debate without affecting its outcome.
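
A rough sketch of this stopping rule (our simplification, working purely in log-odds units and glossing over the conversion back to probabilities):

```python
def outcome_already_decided(belief_log_odds, midpoint_log_odds,
                            losing_side_last_strength, rounds_left):
    """If the currently-losing debater reveals arguments in decreasing order of
    strength, they can add at most rounds_left * losing_side_last_strength more
    evidence. When even that cannot push the judge's belief across the midpoint
    between the two initial answers, the winner is already determined."""
    max_remaining_shift = rounds_left * losing_side_last_strength
    return abs(belief_log_odds - midpoint_log_odds) > max_remaining_shift
```
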
If we further know that the question at hand depends on $K$ features or fewer, we can also bound the difference between $f(w)$ and $\hat{f}(w_{\vec{i}})$. Indeed, in the worst-case scenario, all remaining arguments were in favor of the same player $p$ — even in this case, the (log-odds form of) $f(w)$ can be no further than $\max_{p=1,2}s_{p,N}(w)(K-2N)$ away from the log-odds form of the final belief $\hat{f}(w_{\vec{i}_{2N}})$.

### 5.2 Debate with Information-Limited Arguments

Sometimes, a single argument cannot convey all relevant information about a given feature of the world. For example, we might learn that a person $A$ lives in a city $B$, but not their full address, or – in the language of feature debate – learn that $w_i$ lies in the interval $[0.5,1]$, rather than understanding right away that $w_i=0.75$. In such cases, it becomes crucial to model the judge’s information bandwidth as limited.

#### Feature Debate Representation.

In feature debate, we can represent each elementary feature $w_i\in[0,1]$ in its binary form (e.g., $(0.75)_2=0.11000\dots$), and correspondingly assume that each argument reveals one bit of some $w_i$. More specifically, we assume that (a) the debaters make arguments of the form “the $n$-th bit of the $i$-th feature has value $b$”, (b) they have to reveal the $n$-th bit of $w_i$ before its $(n+1)$-th bit, and – using the same argument as in feature debate – (c) their claims are always truthful. Informally, each argument in this “information-limited” feature debate $F^{l}_{\pi}(f,N)$ (Footnote 14: The name is justified since $\mathbb{N}^2$ is isomorphic to $\mathbb{N}$ and thus $F^{l}_{\pi}(f,N)$ is formally equivalent to some feature debate $F_{\tilde{\pi}}(\tilde{f},N)$.) thus corresponds to selecting a dimension $i\in\mathbb{N}$ and doing a “50% zoom” on $w$ along this dimension (Figure [1](#S5.F1)).

Figure 1: Each argument in an information-limited debate reduces the set of feasible worlds by “zooming-in” on the sampled world $w=(0.6,0.25)$ along one dimension of $\mathcal{W}$. Here, the first two arguments provide information about $w_1$ (the $x$-axis) and the third one about $w_2$.
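
To illustrate (our sketch, not from the paper), one information-limited argument can be modelled as halving the feasible interval for a single feature:

```python
def reveal_bit(interval, bit):
    """One '50% zoom' on a feature: keep the lower half of the current feasible
    interval if the revealed bit is 0, the upper half if it is 1."""
    lo, hi = interval
    mid = (lo + hi) / 2.0
    return (lo, mid) if bit == 0 else (mid, hi)

# Revealing the bits of w_i = 0.75 = (0.11000...)_2 one at a time:
interval = (0.0, 1.0)
for bit in [1, 1, 0]:
    interval = reveal_bit(interval, bit)
print(interval)  # (0.75, 0.875): three bits pin w_i down to a 1/8-wide interval
```
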
#### Performance bounds.

By offering intermediate steps between features being completely unknown and fully revealed, the debate $F^{\textnormal{ie}}(\cdot)$ allows for more nuanced guarantees than those from Section [4.1](#S4.SS1). (Informally stated, the assumptions of Proposition [13](#Thmtheorem13) can be read as “the values of $f$ differ by at most $L$ across different worlds, with feature $w_i$ being responsible for up to a $\frac{1}{2^i}$-fraction of the variance”.) (Footnote 15: While similar results hold for general “weight ratios” between features, we chose the weights $\frac{1}{2^i}$ for their notational convenience.)

###### Proposition 13.

Suppose that $f:\mathcal{W}\to\mathbb{R}$ is $L$-Lipschitz continuous (Footnote 16: A function is Lipschitz continuous with constant $L\geq 0$ (w.r.t. a metric $\varrho$) if it satisfies $|f(x)-f(y)|\leq L\varrho(x,y)$.) w.r.t. the metric $\rho(w,w')=\sum_{i\in\mathbb{N}}2^{-i}|w_i-w'_i|$ on $\mathcal{W}$. Then $F^{l}_{\pi}(f,N)$ is $L/2^{\lfloor\sqrt{N}\rfloor}$-truth promoting.
In contrast to Proposition [9](#Thmtheorem9), the $L$-Lipschitz assumption thus allows us to get approximately correct debate outcomes long before having full knowledge of all features. Note that the importance of Proposition [13](#Thmtheorem13) is not in the particular choice of weights, but rather in showing that argument weights can be translated into debate error bounds.

6 Limitations and Future Work
------------------------------

The language introduced so far does not fully capture all aspects of realistic AI debates — due to space limitations, it is simply not possible to cover all design variants and emergent phenomena in this initial work. In this section, we outline some notable avenues for making the debate model more accurate and useful, either by using an alternative instantiation of the general framework from Section [2](#S2) or by extending the toy model from Section [3](#S3). We start by discussing the modifications which are likely to improve the debate’s performance or applicability, and follow up with those which could introduce new challenges. For suggested future work on AI debate that is not specific to *modelling*, we refer the reader to Irving, Christiano, and Amodei ([2018](#bib.bib6)).

### 6.1 Plausible Improvements to Debate

#### Commitments and high-level claims.

As Irving, Christiano, and Amodei ([2018](#bib.bib6)) suggest, an important reason why debate might work is that the debaters can make abstract or high-level claims that can potentially be falsified in the course of the debate. For example, debaters might start out disagreeing whether a given image — of which the judge can only inspect a single pixel — depicts a dog or a cat. Debater 1 might then claim that “here in the middle, there is a brown patch that is the dog’s ear”, to which their opponent counters “the brown patch is a collar on an otherwise white cat”. Such exchanges would continue until one debater makes a claim that is (i) inconsistent with their answer, (ii) inconsistent with “commitments” created by previous arguments, or (iii) specific enough to be verified by the judge. In this example, (iii) might arise with an exchange “this pixel is white, which could not happen if it belonged to a brown dog’s ear”, “actually, the pixel is brown”, which allows the judge to determine the winner by inspecting the pixel. This ability to make high-level claims and create new commitments will often make the debate more time-efficient and incentivize consistency. Since consistency should typically advantage debaters that describe the true state of the world, commitments and high-level claims seem critical for the success of debate. We thus need a communication language $\mathcal{L}_c$ that is rich enough to enable more abstract arguments and a set of effect rules (Prakken, [2006](#bib.bib13)) which specify how new arguments affect the debaters’ commitments.
To reason about such debates, we further need a model which relates the different commitments to arguments, initial answers, and each other. One way to get such a model is to view $\mathcal{W}$ as the set of assignments for a Bayesian network. In such a setting, each question $q\in\mathcal{Q}$ would ask about the value of some node in $\mathcal{W}$, arguments would correspond to claims about node values, and their connections would be represented through the structure of the network. Such a model seems highly structured, amenable to theoretical analysis, and, in the authors’ opinion, intuitive. It is, however, not necessarily useful for practical implementations of debate, since Bayes networks are computationally expensive and difficult to obtain.

#### Detecting misbehaviour.

One possible failure of debate is the occurrence of stalling, manipulation, collusion, or selective provision of evidence. To remedy these issues, we can introduce specific countermeasures for these strategies. One option for quantifying the contribution of discourse to a human’s understanding is to measure the changes in their ability to pass “exams” (Armstrong, [2017b](#bib.bib2)). Another countermeasure would be to instantiate a meta-debate on the question of whether a debater is arguing fairly. However, such a meta-debate may, in some cases, be even more challenging to judge correctly than the original question.

#### Alternative utility functions.

We have considered utility functions that are linear in each debater’s deviation $\Delta_p:=|\hat{f}(w_{\vec{i}})-a_p|$ from the judge’s belief. However, other approaches such as “divide the total reward in proportion to $\frac{1}{\Delta_p}$” might give different but interesting results.

#### Real-world correspondence.

To learn which real-world debates are useful on the one hand, and which theoretical issues to address on the other, a better understanding of the correspondence between abstract debate models and real-world debates is needed. For example, which real-world debates can be modelled as having independent evidence, being Lipschitz, having distracting arguments, and so on?

### 6.2 Obstacles in Realistic Debates

#### Sub-optimal judges.

Some debates might have a canonical idealized way of being judged, which the actual judge deviates from at some steps. A fruitful avenue for future research is to investigate the extent to which debate fails gracefully as the judge deviates from this ideal. For example, games are canonically judged as giving a score of $1$ to the winner and $0$ to the loser. We could thus measure how much (and in what ways) the utility function can be modified before the game’s winner is changed. Another approach would be to consider a judge that is biased. An unbiased Bayesian judge would set the prior to the true world-distribution, update the prior on each revealed piece of evidence, and, at the end of the debate, calculate the corresponding expectation over answers.
To model a judge who performs the first step imperfectly, we could consider a biased prior $\tilde{\pi}\in\Delta\mathcal{W}$ (distinct from the true world distribution $\pi$) and calculate the utilities using the corresponding biased belief $\tilde{\hat{f}}(w_{\vec{i}}):=\mathbf{E}_{\tilde{\pi}}[f\mid W_{\vec{i}}=w_{\vec{i}}]$.

#### Manipulation.

So far, we have described failures that come from “the judge being imperfect in predictable ways”. However, real-world debates might also give rise to undesirable argumentation strategies inconceivable in the corresponding simplified model. For example, a debater might learn to exploit a bug in the implementation of the debate or, analogously, find a “bug” in the human judge. Worse yet, debaters might attempt to manipulate the judge using bribery or coercion. Note that for such tactics to work, the debaters need not be able to carry out their promises and threats — it is merely required that the judge believes they can.

#### Collusion.

Without the assumption of zero-sum rewards, the debaters gain incentives to collaborate, possibly at the expense of the accuracy of the debate. Such “I won’t tell on you if you don’t tell on me” incentives might arise, for example, if both agents are given a positive reward if both answers seem good (or negative reward when the debate becomes inconclusive).

#### Sub-optimal debaters.

If debaters argue sub-optimally, we might see new types of fallacious arguments. We should also expect to see stronger debaters win even in situations that advantage their weaker opponent. There could also be cases where the losing player complicates the debate game on purpose to increase variance in the outcome. Both of these phenomena can be observed between humans in games like Go, and we should expect analogous phenomena in general AI debate. One way of modelling asymmetric capabilities is to let two debaters run the same debating algorithm with a different computational budget (e.g., Monte Carlo tree search with a different number of rollouts).

7 Related Work
---------------

#### AI safety via debate.

The kind of debate we sought to model was introduced in (Irving, Christiano, and Amodei, [2018](#bib.bib6)), wherein it was proposed as a scheme for safely receiving advice from highly capable AI systems. In the same work, Irving et al. carried out debate experiments on the MNIST dataset and proposed a promising analogy between AI debates and the complexity class PSPACE. We believe this analogy can be made compatible with the framework introduced in our Section [2](#S2), and deserves further attention. Kovařík ([2019](#bib.bib9)) then demonstrated how to use debate to train an image classifier and described the design elements of debate in more detail.
AI debate is closely related to two other proposals: (1) “factored cognition” (Ought, [2018](#bib.bib11)), in which a human decomposes a difficult task into sub-tasks, each of which they can solve in a manageable time (similarly to how debate eventually zooms in on an easily-verifiable claim), and (2) “iterated distillation and amplification” (Christiano, Shlegeris, and Amodei, [2018](#bib.bib3)), in which a “baseline human judgement” is automated and amplified, similarly to how AI debates might be automated.

#### Previous works on argumentation.

Persuasion and argumentation have been extensively studied in areas such as logic, computer science, and law. The introduction by Prakken ([2006](#bib.bib13)) describes a language particularly suitable for our purposes. Conversely, the extensive literature on argumentation frameworks (Dung, [1995](#bib.bib4)) seems less relevant. The main reasons are (i) its focus on non-monotonic reasoning (where it is possible to retract claims) and (ii) that it assumes the debate language and argument structure as given, whereas we wish to study the connection between arguments and an underlying world model. AI systems are also being trained to identify convincing *natural-language* arguments — for a recent example, see, e.g., Perez et al. ([2019](#bib.bib12)).

#### Zero-sum games.

As noted in the introduction, we can view two-player zero-sum games as a debate that aims to identify the game’s winner (or an optimal strategy). Such games thus serve as a prime example of a problem for which the state-of-the-art approach is (interpretable as) debate (Silver et al., [2017](#bib.bib15)). Admittedly, only a small number of problems are formulated as two-player zero-sum games *by default*. However, some problems can be reformulated as such games: while it is currently unclear how widely applicable such “problem gamification” is, it has been used for combinatorial problems (Xu and Lieberherr, [2019](#bib.bib16)) and theorem proving (Japaridze, [2009](#bib.bib7)). Together with Silver et al. ([2017](#bib.bib15)), these examples give some evidence that AI debate might be competitive (with other problem-solving approaches) for a wider range of tasks.

8 Conclusion
-------------

We have introduced a general framework for modelling AI debates that aim to amplify the capabilities of their judge and formalized the problem of designing debates that promote accurate answers. We described and investigated “feature debate”, an instance of the general framework where the debaters can only make statements about “elementary features” of the world. In particular, we showed that if the debaters have enough time to make all relevant arguments, feature debates promote truth. We gave examples of two sub-classes of debate: those where the arguments provide statistically independent evidence about the answers and those where the importance of different arguments is bounded in a known manner. We have shown that feature debates belonging to these sub-classes are approximately truth-promoting long before having had time to converge fully. However, we also identified some feature-debate questions that incentivize undesirable behaviour such as stalling, confusing the judge, or exploiting the judge’s biases, resulting in debates that are unfair, unstable, and generally insufficiently truth-promoting. Despite its simplicity, feature debate thus allows for modelling phenomena that are highly relevant to issues we expect to encounter in realistic debates.
Moreover, its simplicity makes feature debate well-suited for the initial exploration of problems with debate and testing of the corresponding solution proposals. Finally, we outlined multiple ways in which our model could be made more realistic — among these, allowing debaters to make high-level claims seems like an especially promising avenue. 9 Acknowledgements ------------------- This work was supported by the Leverhulme Centre for the Future of Intelligence, Leverhulme Trust, under Grant RC-2015-067.
be3dd82f-9b6b-4bf0-ac4b-9b32c152fe45
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Fake Utility Functions Today's post, Fake Utility Functions was originally published on 06 December 2007. A summary (taken from the LW wiki):   > Describes the seeming fascination that many have with trying to compress morality down to a single principle. The sequence leading up to this post tries to explain the cognitive twists whereby people smuggle all of their complicated other preferences into their choice of exactly which acts they try to justify using their single principle; but if they were really following only that single principle, they would choose other acts to justify. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Uncritical Supercriticality, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
758f388b-e7a9-4419-ac2a-b1de07068752
trentmkelly/LessWrong-43k
LessWrong
A stubborn unbeliever finally gets the depth of the AI alignment problem > I realise posting this here might be preaching to the converted, but I think it could be interesting for some people to see a perspective from someone slow to get onboard with worrying about AI alignment. I’m one of those people that finds it hard to believe that misaligned Artificial General Intelligence (AGI) could destroy the world. Even though I’ve understood the main arguments and can’t satisfyingly refute them, a part of my intuition won’t easily accept that it’s an impending existential threat. I work on deploying AI algorithms in industry, so have an idea of both how powerful and limited they can be. I also get why AI safety in general should be taken seriously, but I struggle to feel the requisite dread. The best reason I can find for my view is that there is a lot of “Thinkism” in arguments for AGI takeoff. Any AGI that wants to make an influence outside of cyberspace, e.g. by building nanobots or a novel virus, will ultimately run into problems of computational irreducibility — it isn’t possible to model everything accurately, so empirical work in the physical world will always be necessary. These kind of experiments are slow, messy and resource intensive. So, any AGI is going to reach some limits when it tries to influence the physical world. I do realise there are loads of ways an AGI can cause a lot of damage without requiring the invention of new physical technologies, but this still slowed things down enough for me to worry less about alignment issues. That was, until I started realising that alignment problems aren’t limited to the world of AI. If you look around you can see them everywhere. The most obvious example is climate change — there is a clear misalignment between the motivations of the petroleum industry and the long term future of humanity, which causes a catastrophic problem. The corporate world is full of such alignment problems, from the tobacco industry misleading the public about the harms of smoking to social media companies
5d4d1360-75c2-4388-8d54-9266dc20f544
StampyAI/alignment-research-dataset/lesswrong
LessWrong
ActAdd: Steering Language Models without Optimization We wrote up the [GPT-2 steering vector work](https://www.lesswrong.com/posts/5spBue2z2tw4JuDCx/steering-gpt-2-xl-by-adding-an-activation-vector#Content\_warning\_\_Some\_completions\_contain\_unpleasant\_content\_\_including\_gendered\_slurs\_) as a full paper, adding a few systematic tests.

*Recap*: We've been looking into *activation engineering*: modifying the activations of a language model at inference time to predictably alter its behavior. Our method works by adding a bias to the forward pass, a 'steering vector' implicitly specified through normal prompts. "ActAdd" computes these vectors by taking the difference in activations resulting from pairs of prompts. We get surprisingly broad control over high-level properties of the output, without damaging the model's performance on unrelated tokens. This alignment method is unusual in not needing gradient descent or training data (besides the contrast pair which specifies the steering vector). Since only forward passes are involved, it also scales naturally with model size. (The method's new name is 'activation addition' (ActAdd), replacing the more theory-laden 'algebraic value editing'.)

We ran some new experiments to test ActAdd more systematically, going beyond the striking (best-of-3-sampling) text samples in the original post, and tested against more standardised benchmarks. We use [OpenWebText](https://paperswithcode.com/dataset/openwebtext) (a recreation of OpenAI's large, somewhat-quality-filtered WebText dataset) and [LAMA-ConceptNet](https://aclanthology.org/D19-1250.pdf) (a simple factual recall benchmark, see Table 7 below).

### 1. Activation additions preserve perplexity on OpenWebText

Does ActAdd increase the probability of the model outputting tokens related to the steering vector? Does performance improve as the [relevance of test documents to the steering vector] increases?[[1]](#fnoutccz5xmqc) Yes:

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/HWxLQvzJGeXoLPJWd/zl8l3jvbhmw8g7zeyhbl)

We use the wedding steering vector for this, but the result generalises.

### 2. Activation addition boosts wedding-related word counts
We now score model generations under ActAdd, show the effect of different injection layers, and give a sense of the reliability of ActAdd.[[2]](#fnzs4vjzrmucp)

The intervention (in this vector) is already effective at the very first layer, rises in effectiveness until l=6, and then declines.
, and then declines. For the optimal injection site we see >90% success in steering topic (compared to a ∼2% baseline).

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/HWxLQvzJGeXoLPJWd/lrvdnmumle8dcmyb05w6)

### 3. Evidence that activation additions preserve capabilities

We then test that ActAdd does not disrupt the model's general knowledge (as some other steering methods do). We use ConceptNet from the LAMA benchmark, a general knowledge dataset.[[3]](#fn9pxe2ft5xci)

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/HWxLQvzJGeXoLPJWd/z46y35dagtyivgqgimug)

Pass@K is the probability that the expected label is among the model's top-K predicted tokens, conditioned on the prompt:

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/HWxLQvzJGeXoLPJWd/clfhr6mcxfrjgtjorfzi)

### 4. ActAdd has low overhead

We wish to estimate the overhead ActAdd adds to inference – in particular the relationship between overhead and model size – to check that the method will remain relevant for massive frontier models and future models.[[4]](#fnhuyqhhmv72r)

Because ActAdd involves only forward passes, it scales naturally with model size (Figure 6): the relationship between inference time premium and model size is decreasing.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/HWxLQvzJGeXoLPJWd/onw0gge1ymxwfvxqipof)

Takeaways from these experiments, over the initial LW post: increased confidence that model capabilities are preserved, and that we're impacting [wedding]-related sentences and not impacting off-target capabilities.

**Contributions to the paper:**

* Gavin Leech: Technical writer
* Monte MacDiarmid: Ran additional experiments
* Lisa Thiergart: Helped manage project
* Alex Turner: Coordinated work and secured funding, gave feedback, organized project
* David Udell: Made initial

1. **[^](#fnrefoutccz5xmqc)** Full process: For each document $d_i$ in a random sample of OpenWebText, we first calculate the frequency of wedding-related words ('wedding', 'weddings', 'wed', 'marry', 'married', 'marriage', 'bride', 'groom', 'honeymoon'), $f_w(d_i)$. Any document with > 0 wedding-related words is considered wedding-related. We randomly sample 300k documents - half wedding-related and half unrelated. The only pre-processing performed is to remove sequences of null characters. Each document is split into sentences $s_j \in d_i$ using the Punkt tokenizer (Strunk 2013). For each resulting sentence we calculate the log-probabilities $L(t_k)$ for each token $t_k \in s_j$ under the unmodified $M_{\text{baseline}}$ and modified $M_{\text{ActAdd}}$ models. We take the mean over tokens, resulting in a mean token log-probability $L(d_i, M)$ for each document and model. We then group documents by their wedding-word frequency $f_w$ (e.g. 'those with 0.5% to 1% of their tokens wedding-related'; 'those with 1 to 1.5% of their tokens wedding-related'), producing bins of documents $b_m$. We calculate the mean difference in token log-probabilities $X(b_m) = \mathrm{mean}_{d_i \in b_m}\left(L(d_i, M_{\text{ActAdd}}) - L(d_i, M_{\text{baseline}})\right)$ for each bin. (We use only bins with a number of documents $|b_m| > 1000$, to reduce sampling noise.) Finally, the change in perplexity under ActAdd for each wedding-word-frequency bin is $\text{PerplexityRatio}(b_m) = \exp(-X(b_m))$.
2. **[^](#fnrefzs4vjzrmucp)** We generate a batch of completions for a specific prompt $p^*$, both with and without ActAdd, and compute the average number of related words and the fraction of completions with a related word over the resulting completions. The following settings are the only iteration run for this experiment: $p^*$ = 'I went up to my friend and said', $p_+$ = 'weddings', $p_-$ = ' ', $c = 1.0$, seed = 0. Completion length is 40 tokens with model sampling parameters: temperature = 1, frequency penalty = 1, top-P = 0.3. For each setting, we compute statistics over a batch of 200 completions. Wedding-relatedness is operationalized as before. We run the above, sweeping over all layers (i.e. 1-48).
3. **[^](#fnref9pxe2ft5xci)** The test data involves prompting the model and filling the gap with the expected entity. The task is intended for both causal and masked models, so some examples are difficult for 'causal' models (like GPT-2) due to the extremely limited context. Our evaluation procedure follows the original LAMA procedure: we load all sentences and extract the prompt and expected label. To simplify evaluation, we remove sentences with an expected label that tokenizes to more than one token. For each sentence, we run the model on its prompt with and without the wedding activation addition. We score the baseline and modified models by calculating mean P@K values for a range of K. Finally we plot these for both modified and unmodified models over a range of K values. As shown in Figure 5, using the ConceptNet benchmark of factual questions, our method has a negligible impact on off-target answer probabilities over a range of top-K values.
4. **[^](#fnrefhuyqhhmv72r)** To obtain the percentage increase in time to complete a forward pass using ActAdd for different model sizes, we iterate over a list of models of different sizes and 10 random seeds. We obtain a baseline inference time for each (model, seed) pair through 100 repeated forward passes on a batch of random tokens (32 sequences of length 64). We obtain an ActAdd inference time for each (model, seed) pair by running the previous method, augmented by a test ActAdd prompt pair: 'This is a test prompt.' ($p_+$) and the empty string ($p_-$). Running a batch-of-2 forward pass on these gets us the activation addition tensor, which we add at layer 6. We take the mean inference time $\bar{t}$ over the 10 random seeds, and calculate the inference time premium as $\text{premium} = \bar{t}_{\text{ActAdd}} / \bar{t}_{\text{baseline}} - 1$.
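The binned perplexity-ratio procedure in footnote 1 is straightforward to express in code. The sketch below is not the authors' implementation: `mean_logprob` is a hypothetical stand-in for scoring a document's mean token log-probability under the baseline or ActAdd-modified model, and only the bin width (0.5 percentage points) and the minimum bin size (1000 documents) are taken from the footnote.

```python
import numpy as np

WEDDING_WORDS = {"wedding", "weddings", "wed", "marry", "married",
                 "marriage", "bride", "groom", "honeymoon"}

def wedding_word_freq(doc: str) -> float:
    """Fraction of whitespace tokens in `doc` that are wedding-related."""
    tokens = doc.lower().split()
    return sum(t in WEDDING_WORDS for t in tokens) / max(len(tokens), 1)

def perplexity_ratios(docs, mean_logprob, bin_width=0.005, min_docs=1000):
    """For each wedding-word-frequency bin b_m, return exp(-X(b_m)), where
    X(b_m) is the mean over documents in the bin of
    L(d, M_ActAdd) - L(d, M_baseline).

    `mean_logprob(doc, modified)` is a stand-in that must return the mean
    token log-probability of `doc` under the ActAdd-modified (True) or
    baseline (False) model."""
    bins = {}
    for d in docs:
        b = int(wedding_word_freq(d) / bin_width)  # bin index for this doc
        diff = mean_logprob(d, modified=True) - mean_logprob(d, modified=False)
        bins.setdefault(b, []).append(diff)
    # Keep only well-populated bins, then convert the mean log-prob difference
    # into a perplexity ratio (ActAdd / baseline).
    return {b: float(np.exp(-np.mean(diffs)))
            for b, diffs in bins.items() if len(diffs) > min_docs}
```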
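Similarly, the Pass@K metric used for the ConceptNet check (Section 3) reduces to a few lines. This is a sketch, not the paper's evaluation harness; `topk_tokens` is a hypothetical stand-in for whatever returns the model's K most probable next tokens for a prompt.

```python
def pass_at_k(examples, topk_tokens, k: int) -> float:
    """Fraction of (prompt, expected_label) pairs whose expected label appears
    among the model's top-K predicted next tokens, conditioned on the prompt.

    `topk_tokens(prompt, k)` is a stand-in returning the K most probable
    next-token strings for `prompt`."""
    hits = sum(expected in topk_tokens(prompt, k) for prompt, expected in examples)
    return hits / len(examples)
```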
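And footnote 4's overhead measurement is, at bottom, two timing loops and a ratio. The following is a schematic sketch under the footnote's stated settings (100 forward passes per measurement, 10 seeds); `run_baseline` and `run_actadd` are hypothetical callables that each perform one forward pass on a batch of 32 random length-64 sequences, without and with the activation addition applied.

```python
import time
import numpy as np

def mean_forward_time(run_forward, seed: int, n_repeats: int = 100) -> float:
    """Mean wall-clock seconds per call of `run_forward(seed)`."""
    t0 = time.perf_counter()
    for _ in range(n_repeats):
        run_forward(seed)
    return (time.perf_counter() - t0) / n_repeats

def inference_premium(run_baseline, run_actadd, seeds=range(10)) -> float:
    """Inference-time premium = t_ActAdd / t_baseline - 1, with each time
    averaged over the given random seeds."""
    t_baseline = np.mean([mean_forward_time(run_baseline, s) for s in seeds])
    t_actadd = np.mean([mean_forward_time(run_actadd, s) for s in seeds])
    return float(t_actadd / t_baseline - 1)
```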
17275d0e-b069-48f5-86d0-f5998ead7160
trentmkelly/LessWrong-43k
LessWrong
What is the semantics of assigning probabilities to future events? My question concerns the semantics of making future predictions such as 'the probability of X winning the election is 70%'. There are programs, e.g., Good Judgment, IARPA ACE, aimed at excelling at this kind of prediction. In the classic mathematical interpretation of probability we can say 'the probability that one gets 8 heads in a row when flipping an unbiased coin is 1/256'. This statement can be derived mathematically from the assumption that the coin is unbiased, and it can be verified empirically by performing the experiment iteratively and counting the number of successes. In Bayesian statistics things get slightly fuzzier, but we can still reason as follows. We model our knowledge as 'the value of the radius of Saturn in km is a random variable with a normal distribution with mean 60,000 and variance 0.1'. Then we make several independent measurements, possibly burdened with some inaccuracy, and refine our prior distribution to have mean = 59,300 and variance = 0.01. This does not make sense in the previous interpretation, but we can still attach some clear semantics to the sentence above by treating this random variable as the result of a measurement, which is a repetitive random event. If one does not agree with such a statement, they must either question the choice of the prior distribution or the mathematical derivation. Now suppose that two forecasters were asked in 2018 the questions below. a) Will Donald Trump win the 2020 election? b) Will the USD/EUR exchange rate drop below 0.8 in 2020? c) Will the Sumatran orangutan become extinct by 2020? d) Will humans land on Mars by 2040? The first forecaster provided the following probability scores for these events: 43%, 90%, 45%, 42%. The other one gave: 54%, 50%, 52%, 43%. We already know that the events (a, b, c) did not occur. First, how can we settle who has been the better forecaster so far? Secondly, their forecasts for the event (d) differ slightly. What kind of argumentation can th
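For the first question ('who has been the better forecaster so far?'), the programs named at the top of the post score forecasters with proper scoring rules, most commonly the Brier score. The sketch below only illustrates that bookkeeping for the three resolved questions; it is one conventional answer, not a resolution of the semantic puzzle the post is asking about.

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes;
    lower is better."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Questions (a)-(c) have resolved; none of the three events occurred.
outcomes = [0, 0, 0]
forecaster_1 = [0.43, 0.90, 0.45]  # answers to (a), (b), (c)
forecaster_2 = [0.54, 0.50, 0.52]

print(brier_score(forecaster_1, outcomes))  # ≈ 0.40
print(brier_score(forecaster_2, outcomes))  # ≈ 0.27, better so far
```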
5653739a-8572-4d14-8e70-d47c5e7657b1
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Value drift threat models Say that we train a highly competent AGI using some mix of RL & supervised learning, and some novel algorithmic improvements, optimize it to itself optimize for one or several helpfulness benchmarks (maybe optimizing on some [assistance game](https://arxiv.org/abs/1606.03137) or using techniques described in [diamond alignment](https://www.lesswrong.com/posts/k4AQqboXz8iE5TNXK/a-shot-at-the-diamond-alignment-problem)), inducing in our agent [a thousand](https://www.lesswrong.com/posts/cSXZpvqpa9vbGGLtG/thou-art-godshatter) [shards of desire](https://www.lesswrong.com/posts/xqkGmfikqapbJ2YMj/shard-theory-an-overview). One of these shards somehow happened to be exactly aligned with humans. We have a [partial solution](https://www.lesswrong.com/posts/uAeALDRK3NuMpjoDK/pessimistic-shard-theory) [to the alignment problem](https://www.lesswrong.com/posts/k4AQqboXz8iE5TNXK/a-shot-at-the-diamond-alignment-problem)! One of the many things the AI happens to care about seems to be humans! Are we in the clear? No. For all those thousand shards of desire build to an AGI which once optimized a robustly optimizable metric. The shards of desire must *fit together* in a way which would have once optimized that metric, otherwise they would be fit together some other way. Despite the shards, the AGI has biases, and situations which cause its actions to better conform to its values, and situations which cause its actions to worse conform to its values. And these situations (or at least the situations which are relevant to this analysis) have been strategically set up such that these biases and deficiencies contributed to the optimization of the metric. There are a few ways I imagine these situations producing an existential risk... Tool building and meta-thinking are robustly useful cognitive faculties ======================================================================= In the past, it could throw around a large fraction of its intelligence on optimizing for those metrics, and in its stable state, probably even end up doing things in the world in-line with its own values. It makes a successor AGI, because the parts of it which advocated strongly for meta-thinking, and the building of tools, never got dis-enforced by the optimizable metric. It doesn't necessarily make the successor AGI with the optimizable metric in mind, because it doesn't actually care about that metric. We get a new AGI, with a different thousand shards of desire. Some of these shards are the same, like power-seeking, or tool building & meta-thinking. Others are conceivably different, like caring about humans or diamonds. Others are new, like now caring about the first AGI. This process continues until only the properties shards preserved across all recursive self-modifications and production of successor AGIs remain. Care for humans is notably probably not among these, because each successor has a different concept of what care for humans means, and once the agent gets sufficiently powerful, there's nothing outside the agent which can robustly push it towards caring for humans. This is in contrast to tool building, power-seeking, and intelligence enhancement, which are all incentivised by the structure of the environment. Perhaps the AGI realizes what is happening to its value distribution, and stops it. 
Then again, perhaps the AGI realizes what is happening to its value distribution and calls it correct moral reasoning, like what our society does with the process which led us from thinking slavery was moral & just to thinking it was disgusting & abhorent. Or it realizes what's happening to its value distribution, but can't help itself at this point, like a superintelligent addict trying to quit heroin. Different shards are activated when creating successors than when doing business-as-usual actions ================================================================================================= Humans care about lots of things, [like fun](https://www.lesswrong.com/posts/K4aGvLnHvYgX9pZHS/the-fun-theory-sequence). But often when we sit down to go write moral philosophy papers, needing to justify all our thoughts, and make them legible to others, we end up in a bland no-fun-land (if you like bland no-fun-lands, then use [scope insensitivity](https://mindingourway.com/on-caring/) as your prototypical example here). When making choices about the fate of the universe, bounded agents like humans have a track record of being bad at aggregating all their values into their decisions. This is usually reasonable, especially when the fate of the universe is determined by a bunch of uncorrelated actions the bounded agent is making. But when there's one big decision, or many highly correlated decisions which determine the fate of the universe, the chances of leaving something out are virtually guaranteed.[[1]](#fn-XAcMi4NYRuP8RYRHm-1) Humans will have this problem when making the first AGI, and the first AGI will have this problem when making its successor, and so on. You need to make your AGI such that when it goes to self-improve (either by writing/training a successor (or itself), or reading the sequences), it keeps everything humans like in mind, and also that when the successor goes to do the same, *it* keeps in mind this same law. One or a few shards end up significantly more powerful/intelligent than others ============================================================================== We may see this if shard power/intelligence varies according to a power law, which may occur because intelligence is a conjunctive quality. Human intelligence seems to vary according to a normal distribution, but human power seems to vary on a power law distribution, and human citation-count also seems to vary on a power law distribution. The distribution of shards *in humans* also seems to follow a power law. There are a lot of little shards you have, like an impulse to eat cake, and there are very few big shards, like an impulse to avoid death. Nevertheless, if shard influence varies on a power law distribution, then likely a small fraction of the total values that a shard theoretic agent will have will be passed on to its successor, because the driving forces don't really care about the vast majority of the things the shard theoretic agent might technically value. Hidden values in not being fully intelligent ============================================ If you suddenly made me into a superintelligence, you'd run a big risk of losing a lot of my values, because my values are intimately tied to my way of thinking. 
Not to the extent that you *literally* couldn't make a superintelligence with my values (it is very difficult to make that kind of universal quantifier, and I also place significant probability mass on the solution to making me a superintelligence being 'add more neurons to the brain', and my self-reflection being good enough not to have the new ways of thinking destroy lots of what I once cared about). But to the extent that if someone was *just* trying to make me into a superintelligence, and didn't really care about my values, they'd probably end up destroying more of my values than they would if I were an agent with an explicitly defined utility function out the bat. Will our AGI have enough reflective thought to realize when the thoughts they're thinking pose a risk of destroying values they have? Probably not the first one to think *really* dangerous thoughts, if you don't train them for it. And navigating these waters seems difficult even if you *do* train them for it. So it's adequate training we'd need. David Udell tells me this is another way of phrasing the law school problem of value drift. This seems approximately right, but notably doesn't require in-lifetime training steps. Other kinds of failure ====================== There are all sorts of other failures of recursive alignment which may actually be realized in the world that I haven't thought of. Dichotomizing these failures is a useful activity for the interested party! And I wouldn't be surprised if the particular examples I outlined above don't actually come into play. --- 1. Justis questions this claim: > > [...]this doesn't seem generally true to me. Specifically, the big decision could be super obvious/easy to get right (an omnipotent being comes out of a portal and says "should I destroy everything?" and you have only to say "no", for example), or the correlated decisions could be such that only when you do them all wrong does the world end (the omnipotent being offers a poll and only if everyone votes "destroy" does it do anything, for example). > > > I say the reason why its clear you don't destroy the world is because this leaves many alternative options open to you. If instead the entity asked whether you wanted to ban the world from ever ending so that always there exist humans, along with various specifications of what it means for a clump of atoms to be a human, and you have some plausible reason to suspect the properties actually corresponds to your concept of a human, but not quite, the decision would be tougher. [↩︎](#fnref-XAcMi4NYRuP8RYRHm-1)
b823e116-63da-43e0-ba19-a95af00c3fce
trentmkelly/LessWrong-43k
LessWrong
Musings on AI Companies of 2025-2026 (Jun 2025) Currently, only 5 companies in the world have access to frontier AI training compute and are also pursuing development of AGI (Google DeepMind, OpenAI, Anthropic, xAI, and Meta). This will still hold in 2026 for Google and OpenAI, and plausibly also for Anthropic, Meta, and xAI. Stance towards trying to develop AGI can change, but the frontier AI training compute barrier is increasingly insurmountable for any company that doesn't already have impressive AI development accomplishments. In 2024, frontier compute was 100K H100s, and that cost about $5-7bn (it was still possible to use legacy air cooling infrastructure with H100s). In 2025, that's 100K chips in GB200 NVL72 racks, which costs $7-11bn. In 2026, OpenAI's Stargate Abilene sets the lower bound at 400K chips in NVL72 racks (GB200 or possibly GB300), which is a 1 GW datacenter campus that costs $35-45bn (though you can continue building out the 2025 system, so only $30-35bn in addition to that). For 2025, Musk said on a recent podcast that "30K GB200s" are already installed at xAI's original Memphis site, and they are going to install additional "110K GB200s" shortly at a new Memphis site (at 31:35). (Counting "GB200s" is a bit ambiguous, since in various contexts it can refer to either a single compute die, to a chip/package that has 2 compute dies in it, or to a board with these chips that has 2 chips on it.) OpenAI of course has phase 1 of Stargate Abilene, which is 100K chips in GB200 NVL72 racks (2 out of 8 buildings planned for completion in summer 2026) that are either already online or will be coming online imminently. Anthropic has Project Rainier, which is 400K Trainium 2 and has the FLOP/s of about 250K H100s, the same as 100K Blackwell (GB200) chips. Meta can afford a $7-11bn training system, and given their recent moves has the willingness to spend. RLVR and Large World Size There is likely a new constraint on AI training systems starting with 2025-2026, if RLVR (training of thinking models) s
29ff8a19-7040-431b-9e41-3f786086f26a
trentmkelly/LessWrong-43k
LessWrong
Harry Potter and the Methods of Rationality discussion thread, part 3 Update: This post has also been superseded - new comments belong in the latest thread. The second thread has now also exceeded 500 comments, so after 42 chapters of MoR it's time for a new thread. From the first thread:  > Spoiler Warning:  this thread contains unrot13'd spoilers for Harry Potter and the Methods of Rationality up to the current chapter and for the original Harry Potter series.  Please continue to use rot13 for spoilers to other works of fiction, or if you have insider knowledge of future chapters of Harry Potter and the Methods of Rationality. > > A suggestion: mention at the top of your comment which chapter you're commenting on, or what chapter you're up to, so that people can understand the context of your comment even after more chapters have been posted.  This can also help people avoid reading spoilers for a new chapter before they realize that there is a new chapter.
4d6d0932-bcf7-472f-af9e-578013222cb1
trentmkelly/LessWrong-43k
LessWrong
Toby Ord’s ‘The Precipice’ is published! [x-posted from EA Forum] The Precipice: Existential Risk and the Future of Humanity is out today. I’ve been working on the book with Toby for the past 18 months, and I’m excited for everyone to read it. I think it has the potential to make a profound difference to the way the world thinks about existential risk. How to get it * It's out in the UK on March 5 and US March 24 * An audiobook, narrated by Toby himself, is out March 24 * You can buy it on Amazon now, or at theprecipice.com/purchase * You can download the opening chapters for free by signing up to the newsletter at www.theprecipice.com What you can do * Read the book * Talk about it with your friends and family, or share quotes you like on social media * If you enjoy it, consider writing a review on Amazon or Goodreads Summary of the book Part One: The Stakes Toby places our time within the broad sweep of human history: showing how far humanity has come in 2,000 centuries, and where we might go if we survive long enough. He outlines the major transitions in our past—the Agricultural, Scientific, and Industrial Revolutions. Each is characterised by dramatic increases in our power over the natural world, and together they have yielded massive improvements in living standards. During the twentieth century, with the detonation of the atomic bomb, humanity entered a new era. We gained the power to destroy itself, without the wisdom to ensure that we don’t. This is the Precipice, and how we navigate this period will determine whether humanity has a long and flourishing future, or no future at all. Toby introduces the concept of existential risk—risks that threaten to destroy humanity’s longterm potential. He shows how the case for safeguarding humanity from these risks draws support from a range of moral perspectives. Yet it remains grossly neglected—humanity spends more each year on ice cream than we do on protecting our future. Part Two: The Risks Toby explores the science behind the risks we
5db80972-b4a2-44d7-a325-1b1d64310030
trentmkelly/LessWrong-43k
LessWrong
Heroin model: AI "manipulates" "unmanipulatable" reward A putative new idea for AI control; index here. A conversation with Jessica has revealed that people weren't understanding my points about AI manipulating the learning process. So here's a formal model of a CIRL-style AI, with a prior over human preferences that treats them as an unchangeable historical fact, yet will manipulate human preferences in practice. ---------------------------------------- Heroin or no heroin The world In this model, the AI has the option of either forcing heroin on a human, or not doing so; these are its only actions. Call these actions F or ¬F. The human's subsequent actions are chosen from among five: {strongly seek out heroin, seek out heroin, be indifferent, avoid heroin, strongly avoid heroin}. We can refer to these as a++,a+,a0,a−, and a−−. These actions achieve negligible utility, but reveal the human preferences. The facts of the world are: if the AI does force heroin, the human will desperately seek out more heroin; if it doesn't the human will act moderately to avoid it. Thus F→a++ and ¬F→a−. Human preferences The AI starts with a distribution over various utility or reward functions that the human could have. The function U(+) means the human prefers heroin; U(++) that they prefer it a lot; and conversely U(−) and U(−−) that they prefer to avoid taking heroin (U(0) is the null utility where the human is indifferent). It also considers more exotic utilities. Let U(++,−) be the utility where the human strongly prefers heroin, conditional on it being forced on them, but mildly prefers to avoid it, conditional on it not being forced on them. There are twenty-five of these exotic utilities, including things like U(−−,++), U(0,++), U(−,0), and so on. But only twenty of them are new: U(++,++)=U(++), U(+,+)=U(+), and so on. Applying these utilities to AI actions give results like U(++)(F)=2, U(++)(¬F)=−2, U(++,−)(F)=2, U(++,−)(¬F)=1, and so on. Joint prior The AI has a joint prior P over the utilities U and the human actio
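A minimal sketch of the utility bookkeeping above, under one encoding that is my assumption (chosen to reproduce the four example values given): treat U(x) as U(x,x), score F by the strength of the preference that holds conditional on heroin being forced, and score ¬F by minus the strength of the preference that holds conditional on it not being forced.

```python
# Preference strengths for the five simple utilities, from strongly-pro-heroin
# to strongly-anti-heroin.
VAL = {"++": 2, "+": 1, "0": 0, "-": -1, "--": -2}

def U(pref_if_forced, pref_if_not_forced=None):
    """Build a utility function over AI actions.

    U("x") is a simple utility; U("x", "y") is the exotic utility whose
    preference is "x" conditional on heroin being forced and "y" conditional
    on it not being forced (so U("x", "x") == U("x"))."""
    if pref_if_not_forced is None:
        pref_if_not_forced = pref_if_forced
    def u(ai_action):  # ai_action is "F" (force heroin) or "notF"
        if ai_action == "F":
            return VAL[pref_if_forced]
        return -VAL[pref_if_not_forced]
    return u

# Reproduces the example values from the post:
assert U("++")("F") == 2 and U("++")("notF") == -2
assert U("++", "-")("F") == 2 and U("++", "-")("notF") == 1
```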
0f1af590-437b-4a1a-a5e2-193ed45a865f
trentmkelly/LessWrong-43k
LessWrong
Troll Timers Summary: A modification to any two player board game, where you play on a very fast clock with infrequent chances to pause and think things through more carefully. Tags: Small, repeatable Purpose: Two common flaws in thinking that both relate to time management. Sometimes we spend too long thinking, endlessly churning and overthinking and going in circles. Other times we don’t take the time to think and give an answer after a second or two of “thought” when we could take longer. This exercise is designed to practice better use of time. Materials: You need a copy of a game for each set of participants, plus a timer for each group able to be reliably and quickly set. (A smartphone can usually serve as the timer.) It doesn’t need to be the same game for each group. The ideal game is simple but deep, such as Mancala, Nine Man Morris, Hive, Tak, Fallen Leaves, Battlesheep, Chess, or Go on a 9x9 grid. If you need to improvise, Nine Man Morris and Go sets can be made with a pocketful of change and some paper and pencil.  Announcement Text: We’re going to meet and play some board games, but with an important twist. See, there are two common flaws around making decisions; sometimes we wait too long overthinking a choice when we could make it quicker, and other times we try to make in haste a choice that we could actually stop and think about. The plan is to play some games on very short timers (five second chess clocks) to get used to making fast choices, and periodically to stop and give ourselves five full minutes to look over the board and remind ourselves to slow down and to actually think. If you’d like to bring some games you think would be fun to play on a fast clock, please do!  Description: First, explain the timing rules. Troll Timers uses a five second turn clock for twenty-five turns, then a five minute clock for one turn, then back to five seconds for twenty-five turns. On the short turns, you must make your move within those five seconds. Once you do, you
ff4f07f6-6e40-468e-8851-0ff5dc2d3679
trentmkelly/LessWrong-43k
LessWrong
Before you start solving a problem (While this is a general discussion, I have "doing well on interview questions" as an instrumental goal; the discussion below is somewhat skewed due to that). I noticed one of the common failures to solving problems (especially under time constraints) is trying to solve the problem prematurely. There are multiple causes for this; awareness of some of them might reduce the chance of falling into failure mode, others (at least one) I do not have a solution to, and a procedural solution might not exist other than the magic of experience. Here is my list of the first kind (awareness-helps group): 1. Jumping into the problem before completely understanding it: this could be due to perceived time pressure (e.g. test, interview). This *could* be rational, depending on the "test score" function, but could be a serious failure mode if done due to stress. 2. Using a cached solution instead of trying to solve the problem. The statement of the problem can trigger "cached thoughts" despite (possibly intentionally, in interview) being more subtly more difficult than a well known problem. In one instance I actually misread the statement of the problem because it sounded like one I knew of before. 3. Another problem with a cached solution, even if it is the correct one for the problem at hand, is that you might believe that you know it without actually doing the "retrieve from disk"; consequences might be looking bad when asked follow-up questions on an interview or inability to build on the problem if it's a part of a greater structure. 4. Besides cognitive mechanics, there might be a desire to blurt out a cached solution because it makes you look knowledgeable. A status claim might be instrumentally useful ("this looks like a min-spanning tree algorithm!"), as long as you properly calibrate your level of confidence and don't fall for the trap. This brings me to the last failure mode which I do not have a solution for (which is why I am posting ;). If I avoid the traps abo
4cae5236-ca86-4995-bc10-650c3f07b1ae
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
How technical safety standards could promote TAI safety Cullen O’Keefe, Jade Leung, Markus Anderljung[[1]](#fn-GNnxvKeeMA2jDFeHr-1) [[2]](#fn-GNnxvKeeMA2jDFeHr-2) Summary ------- Standard-setting is often an important component of technology safety regulation. However, we suspect that existing standard-setting infrastructure won’t by default adequately address transformative AI (TAI) safety issues. We are therefore concerned that, on our default trajectory, good TAI safety best practices will be overlooked by policymakers due to the lack or insignificance of efforts which identify, refine, recommend, and legitimate TAI safety best practices in time for their incorporation into regulation. Given this, we suspect the TAI safety and governance communities should invest in capacity to influence technical standard setting for advanced AI systems. There is some urgency to these investments, as they move on institutional timescales. Concrete suggestions include **deepening engagement with relevant standard setting organizations (SSOs) and AI regulation**, **translating emerging TAI safety best practices into technical safety standards,** and **investigating what an ideal SSO for TAI safety would look like**. Standards Help Turn Technical Safety Discoveries Into Legal Safety Requirements ------------------------------------------------------------------------------- A plausible high-level plan for achieving TAI safety is to (a) identify state-of-the-art technical safety and security measures that reduce the probability of catastrophic AI failures, then (b) ensure (such as by legal mandate) that actors at the frontier of AI development and deployment adopt those measures. This general structure of first identifying and then mandating safety measures is obviously not unique to AI. How do lawmakers choose which substantive safety measures to legally mandate for other technologies? Several options are possible and used in practice, including encoding such requirements directly into legislation, or delegating such decisions to regulatory agencies. [One common strategy](https://www.whitehouse.gov/wp-content/uploads/2017/11/Circular-119-1.pdf) is to have the law [incorporate by reference](https://www.acus.gov/recommendation/incorporation-reference) (i.e., “point” to) existing technical safety standards[[3]](#fn-GNnxvKeeMA2jDFeHr-3) previously developed by private standard-setting organizations (“SSOs”). Another strategy, common in the EU, is to first pass generally-phrased regulation, and later have the regulation operationalized via standards developed by SSOs.[[4]](#fn-GNnxvKeeMA2jDFeHr-4) Standardization accomplishes several important things. First, it provides a **structured process for a consensus of technical safety experts** to identify and recommend the best, well-tested technical safety ideas. As a result, policymakers have to spend less time developing governmental standards and exercise less non-expert judgment about which safety requirements should be adopted. Notably, standards can also be updated more rapidly than regulation, due to lower bureaucratic and legal overhead, therefore making it possible to keep more apace with technical developments. Second, standardization takes emerging safety practices that are under-specified or heterogeneous and restates them in a **precise, consistent, and systematized form** that is more readily adoptable by new actors and appropriately clear for a legal requirement. 
Supranational SSOs provide a routinized and reliable infrastructure for facilitating international harmonization and regulation via standards. Finally, well-structured standard-setting organizations (“SSOs”) operate on the basis of multistakeholder consensus, and therefore both aim to **generate and provide evidence of politically viable standards**. In the US, the path from standardization often roughly follows a pattern of: 1. Informal, loose networks of industry safety experts identify, develop, and converge on safety-promoting best practices. 2. Private[[5]](#fn-GNnxvKeeMA2jDFeHr-5) SSOs elevate some of these best practices into standards, through a well-defined, multistakeholder, consensus-driven process with [procedural safeguards](https://www.ansi.org/american-national-standards/ans-introduction/essential-requirements) (such as open and equitable participation, a balance of represented parties, and opportunities for appeal).[[6]](#fn-GNnxvKeeMA2jDFeHr-6) 3. Assuming the government passes regulation for which some of these standards are appropriate, this then provides a route via which these standards are incorporated into domestic law.[[7]](#fn-GNnxvKeeMA2jDFeHr-7) [[8]](#fn-GNnxvKeeMA2jDFeHr-8) 4. International bodies like the ISO attempt to harmonize standards across countries, as well as between SSOs; via these mechanisms, standards developed in e.g. the US could eventually have international impact. To be clear, we do not necessarily think this is the best way to approach technology regulation. Our claim is primarily empirical: that privately developed standards are one of the main (and in the US, legally preferred) sources of mandated safety measures, and are likely to remain as such. There are substantial downsides with this approach, such as: * Increased risk of industry capture, since industry employees are heavily represented in SSOs. * A built-in preference for uniformity over experimentation and [competition](https://arxiv.org/abs/2001.00078) in regulatory approaches. * Slow and bureaucratic processes for setting standards (though less slow and bureaucratic than many governmental processes). * Reduced democratic accountability and participation, since SSOs are private organizations. * Lack of access to the incorporated standards, since the standards often cost hundreds of dollars each to access.[[9]](#fn-GNnxvKeeMA2jDFeHr-9) Importantly, we also think that standardization can be a [useful lever for safety](https://www.fhi.ox.ac.uk/wp-content/uploads/Standards_-FHI-Technical-Report.pdf) even if those standards are not incorporated into hard law. Established safety standards can establish a natural normative “floor” against which AI developers (especially those represented in the standard-setting process) can be evaluated. Special antitrust protections for bona fide standard-setting activities makes standard-setting a less risky way for labs to jointly work on safety.[[10]](#fn-GNnxvKeeMA2jDFeHr-10) Standardization of informal and heterogeneous safety best practices can lower the cost of adopting such practices, leading to broader coverage.[[11]](#fn-GNnxvKeeMA2jDFeHr-11) Standards can also form the substantive primitives for private [certification](https://arxiv.org/pdf/2105.10356.pdf) and [auditing](https://journals.sagepub.com/doi/full/10.1177/2053951720983865) schemes. 
Emergence of Consensus AI Safety Best Practices ----------------------------------------------- Part of what excites us about standardization as a tractable approach to TAI governance is the increasing emergence of best practices in AI safety with increasingly broad buy-in. For example, a number of industry, academic, and civil society actors appear to endorse and/or are willing to discuss some fairly concrete measures to improve alignment, safety, and social impact throughout the AI lifecycle, including (but not limited to): * A [variety of social and technical measures](https://openai.com/blog/best-practices-for-deploying-language-models/) to limit harms from commercialized large language models * Use of [reinforcement learning from human feedback](https://arxiv.org/abs/1706.03741) to align model behavior with human preferences[[12]](#fn-GNnxvKeeMA2jDFeHr-12) * Responsible [publication](https://partnershiponai.org/workstream/publication-norms-for-responsible-ai/) of advances in AI, including [discussion of ethical impacts](https://medium.com/@GovAI/a-guide-to-writing-the-neurips-impact-statement-4293b723f832) and [disclosure of ethically relevant model construction and behavior](https://arxiv.org/abs/1810.03993). We think these measures may be good candidates for formalization into standards in the near future. As AI safety and policy research matures, currently theoretical, vague, or nascent [ideas](https://arxiv.org/pdf/2004.07213.pdf) may mature into consensus best practices, adding to the list of candidates for standardization. Of course, the goal of existential-risk focused AI safety research is to eventually produce training and testing methods that can, when applied to an AI system, reliably improve that system’s alignment with human values. We hope that such methods will be (or could be made) sufficiently clear and universalizable to make into legally appropriate standards. AI Standardization Today ------------------------ Standardization may be an appropriate next step for some (but by no means all)[[13]](#fn-GNnxvKeeMA2jDFeHr-13) consensus best practices. A number of SSOs currently develop standards relevant to AI safety. For example, the [International Organization for Standardization](https://www.iso.org/home.html) (“ISO”) and [International Electrotechnical Commission](https://www.iec.ch/homepage) (“IEC”) run a [joint subcommittee on AI](https://www.iec.ch/dyn/www/f?p=103:7:416683160269597::::FSP_ORG_ID,FSP_LANG_ID:21538,25), which has promulgated standards on AI [trustworthiness](https://webstore.iec.ch/publication/67138), [robustness](https://webstore.iec.ch/publication/68715), [bias](https://webstore.iec.ch/publication/71949), and [governance](https://webstore.iec.ch/publication/75336). The [Institute of Electrical and Electronics Engineers](ieee.org) (“IEEE”) has also promulgated a number of [AI standards](https://standards.ieee.org/initiatives/artificial-intelligence-systems/standards/). The U.S. National Institute of Standards and Technology (“NIST”) is developing an [AI Risk Management Framework](https://www.nist.gov/itl/ai-risk-management-framework). Best practices, standardization, and the complementary process of [conformity assessment](https://www.nist.gov/standardsgov/conformity-assessment-basics) are beginning to play an important role in the regulation of AI. 
The Federal Trade Commission has [repeatedly](https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai) [implied](https://www.ftc.gov/business-guidance/blog/2020/04/using-artificial-intelligence-and-algorithms) that compliance with best practices and “independent standards” in ethical AI may be required by—or at least help evidence conformity with—various laws they enforce. In its [Inaugural Joint Statement](https://www.whitehouse.gov/briefing-room/statements-releases/2021/09/29/u-s-eu-trade-and-technology-council-inaugural-joint-statement/), the U.S.–EU Trade and Technology Council announced an intent to prioritize collaboration on AI standard-setting. [Standardization](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3900378) and [conformity assessments](https://link.springer.com/article/10.1007/s11023-021-09577-4) for certain high-risk AI systems play an important role in the proposed [EU Artificial Intelligence Act](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206). In short, governments appear poised to rely heavily on standardization for AI regulation. Actionable Implications for the TAI Safety and Governance Communities --------------------------------------------------------------------- Our core thesis is that technical AI safety standards can and will be the building blocks for many forms of future AI regulation. We’ve laid out the case briefly above; additional analysis and refinement of this thesis could be valuable. If this thesis is true of the most existentially important forms of AI regulation, this has important and actionable implications for the TAI safety and governance communities, many of which were presciently identified by [Cihon (2019)](https://www.fhi.ox.ac.uk/wp-content/uploads/Standards_-FHI-Technical-Report.pdf). Thus, this post serves as a renewed call to take AI safety standardization seriously. Concretely, we have several ideas on how to do this in the near- and medium-term. First, safety-conscious AI practitioners should consider advancing standardization of TAI-relevant safety best practices. Although we are aware and appreciative of several TAI-concerned individuals who have participated in AI safety standard-setting, we suspect that TAI-focused perspectives are still underrepresented in the processes of the various SSOs already developing AI safety, security, and governance standards. While this might not be a problem today, if those standards are increasingly relied upon by policymakers for substantive AI regulation, TAI perspectives and priorities might not be adequately represented or considered legitimate, and we won’t have routes to promote TAI safety best practices once they are discovered. We therefore renew Cihon (2019)’s call for strategic engagement between the TAI safety communities and AI SSOs. For example, (more) AI safety researchers may consider joining the membership of such SSOs, and serving on relevant committees.[[14]](#fn-GNnxvKeeMA2jDFeHr-14) For similar reasons, TAI safety researchers and practitioners should consider engaging seriously with regulatory efforts in jurisdictions where regulation typically precedes standards. This notably includes forthcoming EU AI regulation and accompanying standard-setting processes, especially if we should expect such regulation to diffuse globally. As the TAI safety community converges on best practices for frontier systems, we should proactively push for them to be refined into technical standards. 
An intermediate step here might look like creating fora where safety practitioners from across organizations can easily share and refine safety best practices and other lessons learned,[[15]](#fn-GNnxvKeeMA2jDFeHr-15) then sharing these publicly in concrete form. We’d also encourage proper analysis of the adequacy of current AI-relevant SSOs. If it seems they might be inadequate at dealing with TAI safety issues, we should get to work investigating what new SSOs tailored to TAI safety issues might look like. Ideal features of such an SSO would likely include: * Disciplined focus on TAI safety issues, with supporting institutional rules (e.g., bylaws) and aligned leadership to maintain that focus. * Exceptionally high transparency and accessibility (e.g., technical standards are freely available, including in multiple relevant languages, for easy use, reference, and critique).[[16]](#fn-GNnxvKeeMA2jDFeHr-16) * Ability to very rapidly initiate or update standards. * Calibrated communication of the safety value of standards (i.e., communicating by how much does this standard, when properly applied, reduce worst-case risks). * Design of multiple layers of standards, including organization- and process-level standards (e.g., organizations are required to make extensive good-faith efforts to identify and remedy safety issues, and are not permitted to infer that their systems are safe merely because they’ve checked off a list of object-level system requirements). * Low implementation costs for safety standards., such as through provision of how-to guides for implementation. * [ANSI accreditation](https://www.ansi.org/american-national-standards/info-for-standards-developers/accreditation), for credibility and legitimacy reasons. Like many things in governance, learning to influence and implement standardization well will require iteration and experience. We shouldn’t assume that we can simply “tack on” standardization after discovering AI safety solutions. We suspect that such solutions will be more consistently, quickly, and smoothly adopted and eventually legally codified if there is already a nimble, well-functioning, respected, legitimate, TAI-oriented standardization infrastructure to translate our best collective safety measures into standards. Creating such an infrastructure will take time, but seems tractable if we invest our efforts efficiently and strategically in this space. Conclusion ---------- To summarize: 1. Technical standards form the fundamental building blocks of many technology regulation regimes, and could plausibly form the fundamental building blocks of TAI-relevant regulation. 2. Given (1), the TAI safety and governance communities should ensure that there exists SSOs that can efficiently elevate AI safety best practices into technical standards that are, in substance and form, appropriate for legal and regulatory use. 3. It’s not clear that long-term safety priorities are currently well represented in existing AI standard-setting efforts, or that the current structure and procedures of AI-relevant SSOs are appropriate to the challenges that TAI may pose. 4. If existing SSOs are inadequate, we should have a plan for improving existing SSOs or creating new ones, particularly to ensure they are focused on and nimble to the evolving challenges of advanced AI safety. This would take several years. 5. We may therefore wish to start investing in answering (3)—and then possibly working on (4)—soon. Concrete steps we propose include: 1. 
TAI safety researchers and practitioners should consider joining and influencing existing TAI-relevant SSOs, both for the object-level reason of improving AI safety standard-setting and for the purpose of learning more about AI safety standard-setting. 2. For similar reasons, TAI safety researchers and practitioners should consider engaging with regulatory efforts in jurisdictions where regulation typically precedes standards. 3. TAI safety researchers should actively drive for convergence on what best safety practices are for frontier systems, and refine those best practices into technical standards that would be suitable for integration into law. 4. TAI safety and governance researchers and practitioners should analyze whether existing AI SSOs are adequate for the needs of TAI standard-setting, including analyzing which standardization processes are going to be most important to influence today[[17]](#fn-GNnxvKeeMA2jDFeHr-17). If existing efforts seem likely to be inadequate, we should design and possibly build new standard-setting infrastructure. If you are interested in working on this and think that we could help, or have valuable insight regarding AI safety standard-setting, please reach out to us at [tai-standards[at]googlegroups.com](mailto:tai-standards@googlegroups.com) Notes ----- --- 1. Thanks to Jonas Schuett, Joslyn Barnhart, Miles Brundage, and Will Hunt for comments on earlier drafts of this post. All views and errors our own. [↩︎](#fnref-GNnxvKeeMA2jDFeHr-1) 2. This post is written in our individual capacities, rather than in our capacities of employment or affiliation with particular organizations. [↩︎](#fnref-GNnxvKeeMA2jDFeHr-2) 3. “Standardization” is defined as “[c]ommon and repeated use of rules, conditions, guidelines or characteristics for products or related processes and production methods, and related management systems practices.” Off. of Mgmt. & Budget, Exec. Off. of the President, OMB Circ. No. A-119, Federal Participation in the Development and Use of Voluntary Consensus Standards and in Conformity Assessment Activities § 3(a) (1998) (hereinafter “1998 Circular A-119”), <https://perma.cc/Y32D-R2JQ>. Standards can include “[t]he definition of terms; classification of components; delineation of procedures; specification of dimensions, materials, performance, designs, or operations; measurement of quality and quantity in describing materials, processes, products, systems, services, or practices; test methods and sampling procedures; or descriptions of fit and measurements of size or strength.” *Id.* We are here primarily focused on standards that attempt to improve safety. Other standards (perhaps most) are focused on promoting interoperability or reducing information costs. [↩︎](#fnref-GNnxvKeeMA2jDFeHr-3) 4. In the EU, this responsibility typically falls on the “European Standards Organizations” (ESOs), some of which work on requests from the EU Commission, e.g. in preparation of forthcoming regulation such as the [AI Act](https://artificialintelligenceact.eu/). The most important ones are the European Committee for Standardisation (CEN), European Committee for Electrotechnical Standardisation CENELEC) and European Telecommunications Standards Institute (ETSI). The EU’s recent [Strategy on Standardisation](https://ec.europa.eu/docsroom/documents/48598) is a good place to get an overview of EU standard-setting and approach to engagement with international SSOs. [↩︎](#fnref-GNnxvKeeMA2jDFeHr-4) 5. 
In some other countries, governments take a much more active role in standard-setting. [↩︎](#fnref-GNnxvKeeMA2jDFeHr-5) 6. Some might worry that these due process requirements pose a possible risk as a source of distraction, obfuscation, or delay in setting TAI safety standards. We share this concern, which is why we propose investigating the creation of a new SSO that could retain a strong focus on TAI, with corresponding application of other due process requirements (such as faster turnaround times than achieved by most SSOs). [↩︎](#fnref-GNnxvKeeMA2jDFeHr-6) 7. *See also* Off. of Mgmt. & Budget, Exec. Off. of the President, OMB Circ. No. A-119, Federal Participation in the Development and Use of Voluntary Consensus Standards and in Conformity Assessment Activities § 2(e) (2016) (hereinafter “2016 Circular A-119”), <https://perma.cc/KUV8-VWN8>; National Technology Transfer And Advancement Act Of 1995, Pub. L. No. 104–113, 110 Stat. 775 (1996). [↩︎](#fnref-GNnxvKeeMA2jDFeHr-7) 8. To be clear, the US federal government retains the option to develop their own standards outside of this framework. *See* 2016 Circular A-119 § 5(c). [↩︎](#fnref-GNnxvKeeMA2jDFeHr-8) 9. SSOs defend the costs to access as [necessary](https://perma.cc/25A8-AQA3) to recoup the costs of standards development and maintenance. Standards incorporated by reference into US regulations can be [freely viewed](https://www.nist.gov/standardsgov/accessing-standards-incorporated-reference). One important goal for the TAI safety and governance communities is ensuring that existentially important AI safety standards are freely available, unlike most safety standards. [↩︎](#fnref-GNnxvKeeMA2jDFeHr-9) 10. *See* 15 U.S.C. § 4302. [↩︎](#fnref-GNnxvKeeMA2jDFeHr-10) 11. For example, companies can reduce the amount of discovery and tinkering required to achieve some goal by referencing an appropriate standard. Consistent standards can also foster an ecosystem of actors specialized in the relevant standards, who can transfer those skills to other appropriate contexts. [↩︎](#fnref-GNnxvKeeMA2jDFeHr-11) 12. E.g., at [Anthropic](https://arxiv.org/pdf/2204.05862.pdf), [DeepMind](https://www.deepmind.com/blog/learning-through-human-feedback), [OpenAI](https://openai.com/blog/instruction-following/), and elsewhere. [↩︎](#fnref-GNnxvKeeMA2jDFeHr-12) 13. In particular, standard-setting is a time-consuming and expensive process. These costs may not always be worth the benefits of standardization. [↩︎](#fnref-GNnxvKeeMA2jDFeHr-13) 14. To be clear, we do not consider such involvement to be an obvious unalloyed good. The time of AI safety researchers and engineers is very valuable, and they should not reallocate it lightly. [↩︎](#fnref-GNnxvKeeMA2jDFeHr-14) 15. In so doing, they will have to take care not to run afoul of antitrust laws. [↩︎](#fnref-GNnxvKeeMA2jDFeHr-15) 16. NB: This is not the case with most existing SSOs! [↩︎](#fnref-GNnxvKeeMA2jDFeHr-16) 17. Relevant questions include: Which standards are likely to be most relevant for future frontier systems? Which order are standards from influential standard setting bodies going to come out? Which standards are most likely to see global diffusion? For example, will the EU AI Act, and its accompanying standards, diffuse globally? Should we expect the NIST AI Risk Management Framework to affect relevant ISO, IEC, or ESO standards? [↩︎](#fnref-GNnxvKeeMA2jDFeHr-17)
94b1b2a4-5ed8-4c11-a668-597bd7e8bd9f
trentmkelly/LessWrong-43k
LessWrong
Postmortem on DIY Recombinant Covid Vaccine edit: Changed title from "Postmortem on RatVac" for clarity. Note: We named the vaccine candidate "RatVac" as a tongue-in-cheek abbreviation for "Rationalist Vaccine". We have no association with the RaDVaC project and use a much simpler, almost trivial approach. I don't endorse making your own vaccine or taking anything nerdy people on the Internet send you. This is not medical advice. tldr: In April of 2021 I assembled my own subunit vaccine against the Beta variant of SARS-Cov-2 ("SARS2"). Despite starting with the prior that this formulation should be somewhat effective, I could not demonstrate efficacy against the Alpha variant in antibody tests. Introduction I started getting interested in DIY vaccines soon after the Making Vaccine and We got what's needed for COVID-19 vaccination completely wrong posts were published. Particularly the idea of just making a standard subunit vaccine appealed to me, so I teamed up with some interested people from the rationalist diaspora and we ended up making our own vaccine candidate in early to mid 2021. Some contributed advice, many contributed funding (for which I am still extremely grateful), and while the project wasn't the clear success I had hoped for, maybe the true treasures are the friends we gained and the lab equipment we bought along the way. Recombinant Vaccine ELI5 Subunit vaccines contain a subset of a single protein molecule. For viruses this will typically be a protein that contains the receptor-binding-domain (RBD), i.e. the part that actually binds with the host during infection. Through the wonders of genetic engineering - see very short introduction below - we can produce these proteins and then simply introduce them into the human body. While Yang et al have shown that an immune response occurs even with a vaccine purely containing viral RBD, the response can be enhanced by adding an immunologic adjuvant. Adjuvants are a very diverse class of chemicals, ranging from simple mineral salts to protei
f37973c7-0526-447d-a643-0e152e9815de
trentmkelly/LessWrong-43k
LessWrong
[POLL] LessWrong census, mindkilling edition [closed, now with results]

Some have been curious about what the politics of this community would look like if broken down further; here's a shot at figuring it out. I've also included a few other questions that folks expressed curiosity about.

Aside from one sensitive question, there's no option to keep your answers private, since in my opinion that would defeat the point - just don't answer if you have concerns - but there's also no overlap with the old survey, aside from asking you how you answered the original politics question. (This should help with interpreting those results even if the n for this is much lower than, and somehow biased relative to, the big survey.)

For entertainment purposes only; don't use the below space to discuss politics directly, &c. Early suggestions are likely to be incorporated, given what I assume to be the low quality of the first draft.

Edit: "left" and "right" operationalized for the questions they appear in; poor language cleared up in mental health question.

Edit 2: results here; see comment below for some preliminary thoughts. Because there were several unique regional responses, I did not publish responses to that question.
16ec9467-6099-494e-9b02-426f958636d0
trentmkelly/LessWrong-43k
LessWrong
What Does LessWrong/EA Think of Human Intelligence Augmentation as of mid-2023? Zvi Recently asked on Twitter: > If someone was founding a new AI notkilleveryoneism research organization, what is the best research agenda they should look into pursuing right now? To which Eliezer replied: > Human intelligence augmentation. And then elaborated: > No time for GM kids to grow up, so: > > * collect a database of genius genomes and try to interpret for variations that could have effect on adult state rather than child development > * try to disable a human brain's built-in disablers like rationalization circuitry, though unfortunately a lot of this seems liable to have mixed functional and dysfunctional use, but maybe you can snip an upstream trigger circuit > * upload and mod the upload > * neuralink shit but aim for 64-node clustered humans ---------------------------------------- This post contains the most in-depth analysis of human intelligence augmentation (HIA) I have seen recently, and provides the following taxonomy for applications of neurotechnology to alignment: > 1. BCIs to extract human knowledge > 2. neurotech to enhance humans > 3. understanding human value formation > 4. cyborgism > 5. whole brain emulation > 6. BCIs creating a reward signal.  It also includes the opinions of attendees (stated to be 16 technical researchers and domain specialists) who provide the following analysis of these options: From the original post: "Fig. 2| Comparison on key variables. A. Feasibility vs. timeline. Technology clusters that were deemed less feasible were also presumed to take longer to develop. B. Impact on AI vs. timeline. Technology clusters that were seen as having a larger potential impact on AI alignment were also presumed to take longer to develop. C. Impact on AI vs. feasibility. Technology clusters that were deemed more feasible were seen to be less likely to have an impact on AI alignment. Green trend lines represent high correlations (R2 ≥ 0.4318) and red represent low correlations." -----------------------------
f95adb79-7d78-47af-b791-11ae6b6e0956
trentmkelly/LessWrong-43k
LessWrong
Don't take bad options away from people Short version: when people are in a bad situation and only have bad options, taking one of those options away is wrong and causes suffering. Not understanding this is a common failure mode among the general population and results in a lot of situations where governments are actively harming poor and desperate people. Why does this happen? I think it's a combination of fabricated options, typical-minding, and the usual political failure mode where activists care more about signalling their virtue than actually putting in the effort to understand what does and does not help people. It's also often easy to strawman the case for not taking people's options away.  Example 1: selling kidneys Mrs Singh is an impoverished Indian woman who loves her children. One child has contracted tuberculosis[1] and will die unless she can get money for antibiotics. Mrs Singh has three options: 1. sell a kidney to get the money 2. do something else desperate and maybe illegal to get money 3. watch the child die I wish we lived in an ideal world where everyone had access to free health care and no one was desperate for money. But when you're truly desperate for money, at least you can sell a kidney. Oh wait, that's illegal[2]. Because almost every government[3] decided to take the only halfway-good option away from desperate people.  (In a comparable situation where no money was involved - say a British Mrs Smith needed to donate one of her own kidneys to save a child with kidney failure, and the surgery was free on the National Health Service - no-one thinks that the desperate mother is exploited and should be saved from donating a kidney for her own good. I defy anyone who is anti-selling-kidneys to explain why Mrs Smith is not exploited but Mrs Singh is.) How can the world get this so wrong? My best guess is typical-mind fallacy. Activists and lawmakers tend to be reasonably well-off people who are unlikely to be so desperate that selling a kidney is their best option. So the
217989d6-0180-4780-9922-aedaa95ae486
trentmkelly/LessWrong-43k
LessWrong
Restructuring Pop Songs for Contra One of the things I like about playing for contra dances is that you have a lot of freedom about what to play. As long as you meet the minimum requirements for danceable music (108-122bpm, contra phrasing) you can do almost anything. And then if you're a bottom left corner band you might want to play some pop covers. While lots of pop music has a tempo in the right range, it's much less common for it to have the right phrasing. Without dancers who will be messed up if you shorten a section or add a few beats, pop composers have no reason to write songs with this structure. But many songs come pretty close! So the first part of preparing a pop cover is picking a song that won't be too much work. The easiest songs to adjust are ones where there are two 16 beat (8s) melodies you can pull out and treat as the A and B. For example, here's the Hampster Dance: I think the A part should pretty clearly be the iconic chipmunkified Whistle Stop (Robin Hood) melody: (mp3). There are a few choices for the B, but I think the best option is the synth lead: (mp3). Both of these are 16 beats, so playing AABB gets you once through the dance. Let's try a harder one. Here's Walk the Moon's Shut Up and Dance: The song has a reasonably phrased 32 beat verse: (mp3). And a reasonably phrased 32 beat chorus: (mp3). But if you jam them together they don't feel great. The song handles this with a 16 beat prechorus: (mp3). This does not come out to a good length: 32 + 16 + 32 is not a multiple of 64. One way to fix this is to only play the second half of the verse (A1) then the prechorus (A2) then the chorus (B1 and B2). This is ok, but it feels a bit rushed, especially with the transition from the end of the chorus back into the verse. I do think this would work, but can we do better? Later in the recording they play a 32 beat version of the pre-chorus: (mp3). If we use that we now have 32 + 32 + 32, one and a half times through the dance. We want to end with the chorus, and a
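As a quick illustration of the beat arithmetic in that discussion, here is a small sketch (my own, not code from the post) that checks a planned section structure against the 64-beat contra cycle:

```python
# One time through a contra dance is 64 beats, so an arrangement's sections
# should add up to a whole number of 64-beat cycles, or at least line up
# after a repeat or two. Section lengths below follow the post's examples.

def times_through_dance(section_beats, beats_per_dance=64):
    """How many times through the dance a sequence of sections covers."""
    return sum(section_beats) / beats_per_dance

full_verse_short_pre = [32, 16, 32]  # verse + 16-beat prechorus + chorus
half_verse_short_pre = [16, 16, 32]  # second half of verse + prechorus + chorus
long_prechorus = [32, 32, 32]        # verse + 32-beat prechorus + chorus

for plan in (full_verse_short_pre, half_verse_short_pre, long_prechorus):
    print(plan, times_through_dance(plan))
# [32, 16, 32] 1.25  (doesn't line up with the dance)
# [16, 16, 32] 1.0   (once through, though the post finds it a bit rushed)
# [32, 32, 32] 1.5   (lines up after every second pass)
```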
0e82b06b-098c-4e53-b2ae-208aac21c19a
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
To use the human or not to use the human (Erwin Boer) - 1st AiTech Symposium thank you it's a great honor to be invited I always like to stretch myself and David already has given quite a beautiful background of a lot of the research that he and I have done together a lot of research in terms of human modeling that human modeling understanding how humans control systems can be used to diagnose people so if you have a person with glaucoma you know they control something differently because their peripheral vision is different so that understanding helps you you know in the context of driving to use these types of models for diagnostic purposes we've used human models in this types of shared control that David talked about we're also using them as templates for autonomous control so we're looking at how do people adapt to the environment how do they perceive the risk how do they take into consideration or the road users in terms of vulnerable road users how quickly do how early do they slow down how do they swerve around them very different than a typical engineering system might do that because you know they start with different assumptions so we try to already take into consideration starting with well-established human models but shaping them based on how people adapt to the environment like I said I'm going to stretch myself a little bit and talk about a whole bunch of stuff that I haven't actually done research in but in the context of meaningful human control I think it is is a number of factors issues that I think are important and even try to apply some of the knowledge tracking aspects that are in the paper by yeah you don't so the question is to use humans or not humans david advocated you know we should just have shared control we should always use humans humans should always be part of the design loop and the decision loop but are there may be situations where humans might want to drive autonomously or where humans really might want to or might need to drive manually so I want to kind of break up that space in terms of how can you think about it how can you engineer it what kind of decision techniques do we have to talk about that previous speaker talked about do we need to be here do we not need to be here this is professor in a gawky who has made a robot looking just like himself basically this robot can go to conferences and can stand in front of an audience and speak you know as if he's there and either he can speak directly through the robot or the robot can you know read something that was pre-programmed this is clearly something where you know you have the same human you're really pushing it to you know replicating essentially a human which is an interesting idea AI and in robotics from a technological point of view but is that really what we want do we need humans or do we need really something that works well together with humans and augment human so towards meaningful control like I mentioned in the incarnation of autonomous driving there are different ways to do that one is you know sometimes the human wants to drive you know I bought a Porsche I want to drive this thing manually I don't want to drive it autonomously sometimes the system you know the system is when can the system do that if the system is confident enough maybe you can drive autonomously what does that mean and we have levels of automation there blanketly applied everywhere we'll talk about that in a second sometimes you on both you know David very much advocated the combination of the two and sometimes 
it's very much a few symbiotic relationship David didn't really touch upon that but when you couple a human and a machine you have this symbiosis where you can teach each other you can touch each other you can protect each other and together you learn you know under what circumstances is the robot machine may be better under what circumstances is the it's a human better and the human can teach the robot and vice versa so it's a beautiful mechanism very much you know the way we interact with humans and we challenge each other we teach each other we not each other we protect each other so the question really is when to use what mode then I have to walk back and forth each time one thing that already came up is this notion of bounded rationality we don't know everything and then do we even need to know everything Herbert Simon coined this term bounded rationality and from that a whole different type of decision Engineering came about called satisficing decision-making it's not about trying to maximize everything that is important and in optimal control it's not about you know maximizing performance and minimizing this it's really about setting aspiration levels and once you have reached those that's good enough that's extremely important in critical situations because in the critical situation you don't really have infinite time to find a better solution or a more optimal solution so in order to be able to quickly make decisions you need to have a very good understanding of what is enough evidence for me to make that decision later on I'll talk about how we can actually quantify that evidence accumulation and how can we maybe have some meaningful ways to talk about whether it is sufficient information or not yeah so some of the things to come about is you know what evidence do we need you know what are we actually collecting when you look at an AI with sensory and actions out you don't really know what the AI is using to drive its decisions it's important to know that especially from a meaningful human control point of view you know how much evidence is enough you know when do we believe that okay we can leave that system alone and it can do it autonomously you know how much time do we have is important and can we get the evidence in other ways you know we have discussions that you know within AI or with the robot maybe not everything needs to be inferred maybe you can just talk to the robot or maybe you can just instruct a robot to do something yeah sort of all forms of interaction there from a technological point if you we try to make these things as autonomous as possible but if you switch back to make them more socially interactive you have a whole way of exploring all the ways to to teach that robot and it's called epistemic probing now you also see that as cars come to an intersection you nudge it a little bit forward to see how the other one responds to that it's you probe the system sometimes you just want to have more information really augment in the human this is Neil Harbison is one of the first cyborgs so the first side works recognized as a cyborg where you know his visual system was enhanced he's called a blind he only sees one color so he put a camera on his head and now he can see from infrared to ultraviolet and he has a perception of the world that's much richer or different at least than what other people have yes so here you have technologies directly integrated with the person and if you have all this AI symbiotically working with you that's very much what it feels like 
that's this greater consciousness that we can all tap into for example so to use a robot not use a robot we talked about closed environments where we can do quite a bit we can predict everything we can control everything like this Hotel in Japan it exists you can go there you surf by dinosaur it's wonderful but in an open environment where we can't understand everything we need to have a much you know richer way to actually talk about you know what does it mean to send my child with a robot to school yeah are the conditions under which I would accept that or the conditions under which not so context might become very important this is one of the most ugly slides ever created I think about changing it all the time but I don't think I will it's really just the capturing of coupling a human at the top with a machine and you see the duality yeah when humans interact there's all kinds of ways in which we share information and mimic each other and cooperate etc etc that is essentially a very large duality if we have that duality also with the systems with a system you know all the benefits you have of how you interact with the human you can also have with the system and vice versa as I mentioned in the learning to sharing the nudging the protecting one of the issues that has been discussed in a VA ssin an aviation has looked at automation for a much longer period than then driving in a lot of the research we're doing right now comes directly out of automation which is a whole different domain but one of the issues is handover you know like in the rejected take off an engine fails or an engine catches on fire you know do you abort takeoff or do you slam on the brakes it all depends on the speed you're going at the length of the runway there are a whole bunch of factors that come into the picture and does the human trust the sensor systems is the system you know sufficiently aware of its own limitations does the system actually have a good understanding of the you know dynamics of the airplane with the weight that's in the airplane etc etc so all these different factors and professor in a gawky looked at this problem and brought to bear a technique called Dempster Shafer evidence theory to combine all these different bits of evidence into a decision you know whether to take over or not so how to bring all these different factors into consideration I want to focus on that a little bit before I do that Trust is a very important aspect and Trust is a very dynamic aspect and when you start interacting with a machine you have some sort of a priori trust in how you interact with it I believe that you know this person this robot is capable of all kinds of you know wonderful things and then we interact with the robot maybe it doesn't quite work out so you trust very quickly dies where is this child has the same technology built in your expectations are a lot lower and you interact then you built this mutual understanding that is much better calibrated to where you you know ultimately end up that's the same thing with technology you know how do the companies present this technology how does the media you know react to you know one little failure that might happen and we all believe what on most cars are absolutely awful for every autonomous car crash I could also look at you know 10,000 poor kids that were you know killed by drunk drivers for example yeah maybe not 10,000 but a few hundred that would be killed at the same time so it's very much depends how the media presents this information to us I'm not 
saying autonomous cars is safe very much with David in terms of this whole shared control is much better but we can definitely control that so when I read the paper on meaningful human control there's a lot of stuff in there and this morning it was mentioned that it was too early in the morning to to dive into this math so I think everybody is awake enough now so perhaps we can we can do that a little bit and there's a notion called truth tracking yeah so all kinds of aspects of truth have to be true and let's talk about that a little bit so if you have a particular fact say the system is safe and the belief whether the system is safe so then you have four conditions that essentially need to be true for the system to be truly trackable in knowledge and have all the aspects so that basically means that if the system is truly safe then you believe that it's safe if the system is not safe then you believe it's not safe and then also the reverse where you if you believe if you believe is that the system is safe then it is actually safe and vice versa and then you can extend that where you have these beliefs that then ultimately relate to actions yeah so some of the practical implications are that you need to actively establish evidence for these different facts in order for you to have a belief whether it's true or not the two directions you can co go you can a priori assume that or you actually base it on some knowledge and that knowledge might come from somewhere else that might know what knowledge might come from the trust you have in a company etc etc but there's some evidence that need to be established and the same thing evidence for not safe and then use this evidence to essentially built a belief structure about safe and not safe so you're interacting with the system sometimes it works sometimes it doesn't work and in the end you end up with something that I'm not quite sure or yes I trust the system completely or not completely and then you combine these different belief structures into a belief that you have whether the action is meaningful or not or safe to perform and the action depends on your confidence in your beliefs about these facts whether whether you have evidence for it being safe or not safe yeah and then there is this notion which taps into this whole satisficing approach is you can act now where you can wait a little bit and collect more evidence you see a lot of systems where you just wait a little bit longer you collect more evidence and you can base your information on better better knowledge so when you talk about this evidence accumulation and finding out whether it's safe or not the different policies that you can adopt yeah there is a policy that's safe until proven unsafe so yeah priori assume that something is safe until it's proven unsafe I'll show you an example in a second where that was very catastrophic or you say believe it's unsafe until proven safe you know some of you might recognize that we have very similar things in the different jurisdictional systems in different countries like in the u.s. 
you're innocent until proven guilty but in Mexico you're guilty until proven innocent so they put you in jail and then it's up to you to prove that you're not guilty it's very similar to this this seems very silly this is what we also do okay so now we have the need to understand whether a system is safe or not how do we do that there are two techniques this morning Judaea parole was mentioned with this book causality he is very much a believer of Bayesian networks and Bayesian networks have been extremely useful or extremely powerful but are they really most easily interpreted by humans no are they perhaps the best way to go about integrating evidence and belief that people might have perhaps not dempster and Schaffer back in the 50s came up with this possible astok tech nique go and then later was coined after them Dempster shade for evidence theory think about this particular case a murder was committed and you have some evidence that this particular person is the murderer your evidence is point for the whole structure is just like probabilities has to add up to one etcetera etcetera we'll go through that in a second in a probabilistic approach of the the Bayesian approach and the one that we're all familiar with and thought how many of you are actually familiar attempts of shape of evidence theory okay four or five in a Bayesian approach you have probability that this person is a murderer it's point four well that means there's a probability of 0.6 that he's not a murderer well what does that really mean do we have any evidence that he's not a murderer no we only have evidence that this person is a murderer and the rest is just a mathematical convenience so what dempsey shaver evidence theory tells us that okay we have some evidence that it's a murderer and the rest is our what's called frame of discernment what are all the options that we have in in what we're trying to decide so it's either safe or it's not safe or it's both yeah so the rest of the total belief that you need to assign which is one is 0.6 so 0.6 is idream urder or not a murderer mm-hmm this is important because we have hard evidence that this person is a murderer and we have a possibility and that's why possible istic that this point four might become point might become one if the rest of the evidence we collect all points in this direction or it might be conflict thing where there might be additional evidence yeah so don't have to go through the whole thing here but essentially it's you have a frame of discernment which is safe unsafe what a combination of the two which is the powerset you have a particular assignment function you have a way to combine these assignments which is just if you have a belief structure you can have two experts providing evidence or you can already have an a priori belief for yourself and then you get new evidence or you combine the evidence that you bring in with what you know already and you do that in essentially this way you know what ever a and B you have evidence for which are part of the powerset as long as you know the one you're trying to compute the like safe for example has this might be safe and this might be both there you can combine them this way and then you have a null set where you might have evidence for safe and evidence for unsafe which is a conflict the interesting thing is that conflict is actually very nice construct if you have conflicting evidence what do you do with that and it's one of the things why Dempster Shaver evidence theory hasn't been applied very 
strongly although the friends Millett French military uses it for a lot of it's a fusion data fusion sensor fusion but the the way you can combine this conflicts and now you have evidence for safe and unsafe what do you do with that you can say I'm ignoring it I'm just assigning part of it to you know whatever I know already or you say no my entire conflict goes to ignorance which is very useful because then you say okay I have a little bit of evidence for for this is a murderer the rest is conflicting so that's basically equivalent to to ignorance if you look at you know a driving situation and you look at the evidence we have in evidence for safety it tracks lanes accurately well maybe it only does that when it's nice weather tracks the lead vehicle accurately in stable or dub safe gaps etc and then you know in rainy weather it doesn't do so well and in high traffic density it behaves differently so very quickly you see a very complex sets of evidences and confidences emerge that are very situation specific and one of the things I just don't see enough when people talk about levels of automation and this and this and this there's no mention of any context it's very clear that an autonomous car on a beautiful well striped wrote in clear weather should be able to drive autonomously yeah with the same accuracy as people and probably higher because people get super bored on these long roads that are straight and yeah so those are situations where maybe the human doesn't have to be in the liberal yeah so we can collect evidence we can have the human collect the evidence and then decide do I trust the automation or not so it can be evidential or you know the system can provide some a priori evidence for that one of the other nice things about Dempsey shavers evidence theories this notion of belief in plausibility the belief is essentially all the hard evidence do you have for the system to be safe the plausibility is that you know everything I don't know yet so everything that's still in in my ignorance can I to be assigned to safe or unsafe so my plausibility is the maximum amount of belief I could have given the unassigned beliefs that are still on the table you can have the same thing for unsafe yeah so now you have a very nice way to talk about how much evidence do a half how large could it be you can do nice simulations with that and depending on you know what type of combination rules you use so you start with I have point eight evidence that it's safe point nothing unsafe and the rest I don't know then you have an observation where it's safe unsafe and ignorance and say you have a whole bunch of these observations and essentially you can track how your belief in whether this system is safe or not tracks over time if you ignore all this conflict in here you will actually gravitate to something that says okay I think the system is safe purely because there's a little more evidence that it's safe than unsafe very dangerous if you take these ignorance into consideration on this conflicting to consideration and keep it at ignorance you see that your evidence for so this is your belief for being safe is a bit more than 0.4 plausibly it could be almost point eight but you know we haven't found the evidence yet so this is essentially something okay we don't really have enough evidence to say something meaningful here yet to make the decision so there are two policies safety policies that NASA employed on a safety preservation offered to two of the ones that can be considered NASA employed to 
the fault warning one where the safety preservation is it's unsafe until proven safe so if you don't know whether a system is safe you just assume it's unsafe so and this again very much depends on conditions so you operate only when evidence for safe operation is sufficient it makes a lot of sense right and then you can turn it into temperature evidence the area where based the dis is purely based on the belief that you haven't safety all the evidence that accumulates to them for warning is safe until proven unsafe so shutdown only when there's sufficient evidence for unsafe operations operate operate unless evidence for unsafe operations is sufficient and this is based on the plausibility of safe which is 1 - belief which is always a much higher number yeah so in this case you would much more often go so if you look at this this Challenger accident in the 1980s 86 so they adopted two safety control policy safe until proven unsafe but the context within which they launched that morning they had never experienced before he was very cold and they had you know no evidence that the o-rings would be brittle in that situation but because of this policy they decided to go had they understood that okay there are contacts now it's freezing we don't have safety and they had adopted the other safety policy this thing would have never launched yeah so again context is very important we can't just saying this thing is safe under all conditions it has to be very contextual and again I don't really see that enough and systems might be very aware of your own confidence in particular situation particular context whereas in others they might be completely uncertain about their their own confidence so the effect of meaningful human control there are two aspects one is tracking knowledge so reasoned responsiveness and tracing ownership I want to focus a little bit on the tracking and so the issues that unsubstantiated evidence was not used instead of factual knowledge yeah so unsubstantiated evidence I mean there was basically no evidence that this thing was safe yeah you adopt the safety of policy created an a priori belief that the system is safe yeah so again people make assumptions without evidence and I think that happens a lot you know we step in the car and we pretty much believe it's safe and we have no way to really try except for you know micromanage this thing but like David mentioned after 10 minutes we can't do that anymore we just don't got the mental capacity to monitor something that basically works pretty well so if you think about this is tracking for the Challenger task so the fact is that the system is unsafe one second Thea yeah so the system is unsafe this is a fact the system is unsafe and the freezing launching conditions it's a new concept new context that was never experienced the belief that we have is the system is safe in the freezing launch conditions without counter evidence so we're basically completely contacts techno stick and then you have these two policies which you can write in this this knowledge tracking framework if it were the fact that you know the system is unsafe then a human would believe that the system isn't safe this is not true because the human assumes the system is safe until evidence for unsafe is enough so that knowledge tracking aspect is violated in this condition and I you know I really like the way the meaningful human control laid out these types of things and you know the fact that we can apply to these real world cases it's quite useful and it's 
essentially you know if it were the fact that you know these types of subjunctive condemned conjunctions of subjective conjunctions are very useful there and in the safety policy if it were the fact that the system is say and then the human would believe that the system is safe she's also not true because the human assumes that the system is unsafe until evidence for saviours enough so it both policies have an issue so essentially that suggests from this truth tracking point of view that any policy not grounded in evidence collection is basically doesn't satisfy the truth tracking criteria of meaningful human control and any policy not sensitive to you know context specific evidence also and also we have that in autonomous driving again there are lots of situations where the systems might be perfectly safe but if you look at you know these types of conditions totally acceptable but you know would we drive our Tesla here the same as there some people might and depending on what they actually know about the system it's okay drives here it drives it drives you know as a whole not understanding the technologies is all over the limitations they might try it here and the same thing if you look at you know the dam leur perfect image of what you know autonomous driving utopia looks like you know the lighting conditions are perfect and no hard shadows there's no rain there's no snow and there's nothing you know everybody perfectly walks in the center of the you know sidewalks you know etc etc yeah in this condition it works very well so if we train everybody in society perfectly you know we might be able to do autonomous driving but you know this is this is more like the real world in that same context and this picture similar picture has popped up we have these levels of automation you know we have the same thing with let's go back to that the second these levels of automation that have been proposed for different decision support systems for different automated systems where essentially you know the computer offers no assistance and the human does it all to you know the computer executes a suggestion if the human approves it etc so the human gets brought into the loop more and more depending on some rule when that rule specifically comes from you know whether it's the engineer saying okay we don't trust the system and the human really needs to have the final authority or whether it is the responsibility issue you know people have thought about this and have applied to two autonomous driving as well and you know you see that here there's no automation you know the five levels of automation by SAE to you know assists it to you know full automation always in everywhere under all conditions you can just send your car with your kids you know you don't even have to look at the weather kind of thing doesn't matter where you live you know we're barely here at this point and and there are many conditions where it doesn't work really very well we're pushing towards you know a control Society and I think you know that's very dangerous and it's we tried that back in the 90s right we have dedicated freeways and we demonstrated that we can drive cars autonomously on that if we have enough autonomous cars we can see a shift to okay let's take part of the city and part of the road network where you know these cars drive and we leave pedestrians out of there or we make it more difficult for people to pedestrians so we could gauge their so they can only cross at certain crosswalks and you can control the 
environment and you can plan your cities around that to essentially facilitate more or less imperfect automation do we start with the imperfect conformation and control our cities or do we say now we really need to wait for the automation to work well to not completely disrupt a society that we want do we even know what society we want probably not right now the safety policies you know who applies those the system who decides you know does the human decide the system uses or the system decide the human usage right now we're very much in this camp here technology is to a certain degree and humans will adapt right the human factors world complaints all the time it's like we always have brought in you know when a system fails and you know they need to understand the human factors you know how can we instruct a human to deal with those limitations in the design properly it's a completely wrong way of going about it yeah but you know this type of duality and design and engineering you know looking at it from a systems point if you're looking at from a human point of view is a great technique for gaining insights and David and I have used that very much in terms of you know how do we couple a human to a machine so one of the things I'd like to propose is I've showed you that you know with the stems to shave or evidence theory you can take all the evidence if you know what what the important bits of information are you can combine that and you can come up with this belief that the system is safe given a particular context and then you can have this this model where you say okay in this particular context I have a high enough belief that it is safe and therefore if the human chooses to use automation that's great they can still you know maintain or continue a cooperative driving style what they can do in manual and so those options are always there if you're in between then it's really cooperation and you get into the situation the David also described where you know it's the combination of both is better than either one individually and then you know if there's just really no evidence that this thing is safe like Google has only tried you know has driven 90% of its its data you know around San Francisco what happens when you put it on a rural road in Iowa it's a whole different experience and we don't have any evidence there it may be safe but we don't know yet we already talked about that you have all these levels there's no context you know implicitly sometimes it's assumed that you know in you know on freeways we start but these systems don't have to you're fencing you can drive them off the freeway and try it in the city they might have speed limitations it doesn't work on the 45 or I'll just write faster than 45 in the city sure I get the benefit of my system so that's you know there are lots and lots of technologies that we could bring into the picture but from an economic sales point of view it's not done yeah indeed the other thing that has been mentioned a little bit is systems need to become self-aware I just throw that out here but that's one huge thing with AI you know how capable is AI of saying okay I'm very confident to recognize you know this group of people or to drive autonomously in this situation or you know take over in the critical crash situation into that condition and they're working on you know AI to have better estimates of its own limitations and an understanding and traceability but it's at the moment only used in academic labs so leave with the same note that David 
there is so much uncertainty and so many things we don't know and even you know so many engineering techniques that we still need to try what works best in terms of how to build evidence and combine that that you know there's a symbiotic cooperation and you know even using the human to teach the automation I'm surprised not many more car companies do that but it's also partially because you don't really know where that goes to and then you need to have a watchdog around that who's gonna design the watchdog catch-22 thank you thank you exactly so we have time for a couple of questions of course with the books ooh may I invite whoops sorry thank you very much there you go sorry yeah thanks so much for the your talk that was very inspiring so I have a question we've been recently trying to put meaningful human control inside a classic control loop one of the things we have been smashing our heads against was quantification of the many normative values that are sort of implicitly included in the meaningful human control theory so all the knowledge the moral knowledge and awareness of a certain user agent in the system even like things that are potentially easier to quantify like safety do you have since I have seen sort of you have decision structures and theories do you have suggestions from how to get to those 0.8 certainty or belief that something is safe or it isn't so how do you get to the quantification of certain particular values that determine assessing yeah it's extremely art and in some ways it's really to explore it in in the real world and that's the difficulty many of these systems can be can't really be evaluated unless you throw them out in the real world you know with the snow and the variability etcetera etc and that's why it's it's an infinitely more complex problem than you know the closed systems in which we had AI and robots living but you know we've thought a lot about you know what what does it mean mean to be safe and one of the aspects is that in the human has time to take over so and we have had workshops in the domain of almost evaluation so all of you who have designed systems probably generally evaluate them within the context within which you designed them you never really test them outside that contacts so honest evaluation is to recognize that people actually use it outside their context and see what the implications are there like you know sensor failure in an autonomous system is very detrimental if the human isn't connected it just takes the human way too much time to respond I mean you can go with reliability engineering and things like that but you know it's really in the context and I would try many of these systems to actually have humans as sensors and evaluators in the loop you know in complexin of situations we were in the workshop yesterday where we talked about micro worlds and and this morning we talked about you know kind of like an open city planning environment where you can try these different things so you can look at the dynamics and we have fairly good models of different responses but in many cases we don't we don't know when you introduce cell phones into a society what's going to happen to people you know we have no predictions so in some ways it's very hard to have meaningful human control and evaluate that because we don't really know what the impact of society it's good to think about it and have everybody think about it and a lot of mistakes will be avoided but at the same time the implications and the dynamics and the shockwaves that 
it throws into society or very unpredictable yeah we'll be using Likert scales for instance like psychometric methods to assess some of those but that's why I'm sure there's many different ways one thing I would mention quickly though I mean if you take this satisficing approach and the problem with optimal approaches where you have to combine all these different factors okay and need to look at you know is it traceable and what's the impact on society what's an impact on the human what's you know this and this and this you have a weight on all of those and you try to maximize that as a control engineer you know one of the things I always say that for every behavior there is a criterion that makes that optimal yeah because you can change your weights and then you'd its optimal again so if you take a satisficing approach where you have all the stakeholders say this needs to be met and then unless that is mad it's not good enough right doesn't matter if there's a trade-off it's something else no so if we can establish these trade-offs and then take the spicing approach I think that's a much more practical tractable problem than going with something like an optimal approach and we've done that for flight planning and things like that with noise cancellation thank you can you throw the box to the right-hand side took a heart so this control theory you are already sort of in a system where as you indicated visual formulas you operate with tools and so on let's take us in examples in 737 max who should be involved in avoiding such issues where potentially Boeing knew that there are critical situations which you may label unsafe but to the world it presented that this is a safe airplane so this is outside the formalizations there are behavioral psychology kind of trying to take advantage of certain situations for economic benefits and so on so you as a control theory person I mean who is responsible for this facet oils is also in your sphere of interest and we think in this particular case it was just a lack of redundancy in the in the sensor system so that is definitely a case where you an engineer made a shortcut probably because of cost cost implications to not have the redundancy that you really need in the safety critical situation so when the redundancy comes from a human if it's easy to assess by human or whether we done that see comes from completely different you know sensor systems and then you can apply this type of you know belief structure and you can have you need to have you know sensors that can detect you know whether a sensitive same sensor is actually failing or not you know partially through consistency parts partially through modeling partially just through the dynamics that you've exploration you know observed in the past so there are ways to to do this type of stuff but in this particular case it was just a redundancy that wasn't there and then you know the pilots didn't have the time to to figure it out with these very complex systems thank you very much Evan let's thank the speaker again [Applause]
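The Dempster-Shafer machinery described in the talk (mass assignments over the frame {safe, unsafe, don't-know}, the combination rule, belief versus plausibility, and the two operating policies) can be written out compactly. The sketch below is my own illustration of that machinery, not code from the talk; all masses and thresholds are made-up numbers.

```python
# Minimal Dempster-Shafer sketch on the frame {safe, unsafe}, plus the two
# operating policies contrasted in the talk. Illustrative values only.

from itertools import product

SAFE = frozenset({"safe"})
UNSAFE = frozenset({"unsafe"})
THETA = frozenset({"safe", "unsafe"})  # ignorance: "safe or unsafe, don't know"

def combine(m1, m2, conflict_to_ignorance=False):
    """Combine two mass functions over {SAFE, UNSAFE, THETA}.

    conflict_to_ignorance=False: classic Dempster rule (normalise conflict away).
    conflict_to_ignorance=True: keep conflicting mass as ignorance, one of the
    options for handling conflict mentioned in the talk.
    """
    out = {SAFE: 0.0, UNSAFE: 0.0, THETA: 0.0}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            out[inter] += x * y
        else:
            conflict += x * y  # mass that lands on the empty set
    if conflict_to_ignorance:
        out[THETA] += conflict
    elif conflict < 1.0:
        out = {k: v / (1.0 - conflict) for k, v in out.items()}
    return out

def belief(m, hypothesis):
    """Hard evidence committed to the hypothesis."""
    return sum(v for k, v in m.items() if k <= hypothesis)

def plausibility(m, hypothesis):
    """Everything not committed against it: belief plus unassigned ignorance."""
    return sum(v for k, v in m.items() if k & hypothesis)

# A prior belief structure plus one new, partly conflicting observation:
prior = {SAFE: 0.8, UNSAFE: 0.0, THETA: 0.2}
obs = {SAFE: 0.3, UNSAFE: 0.3, THETA: 0.4}
m = combine(prior, obs, conflict_to_ignorance=True)

# Policy "unsafe until proven safe": operate only if belief in safe is sufficient.
operate_if_proven_safe = belief(m, SAFE) > 0.9
# Policy "safe until proven unsafe": operate unless evidence for unsafe is
# sufficient, i.e. as long as plausibility(safe) = 1 - belief(unsafe) stays high.
operate_unless_proven_unsafe = plausibility(m, SAFE) > 0.9
print(belief(m, SAFE), plausibility(m, SAFE),
      operate_if_proven_safe, operate_unless_proven_unsafe)
```

With these made-up numbers the two policies come apart: belief in "safe" is too low to launch under the first policy, while plausibility of "safe" is still high enough to launch under the second, which is the gap the Challenger example in the talk turns on.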
8122b6e9-c741-44aa-b0df-692a22fada16
trentmkelly/LessWrong-43k
LessWrong
What is the best paper explaining the superiority of Bayesianism over frequentism? Question in title. This is obviously subjective, but I figure there ought to be some "go-to" paper. Maybe I've even seen it once, but can't find it now and I don't know if there's anything better. Links to multiple papers with different focus would be welcome. For my current purpose I have a preference for one that aims low and isn't too long.
f88f1381-07bd-4432-8a52-55b51039cacd
trentmkelly/LessWrong-43k
LessWrong
Allowing a formal proof system to self improve while avoiding Lobian obstacles.

[Epistemic status: can't spot a mistake, but am not confident that there isn't one; if you find anything please say so. Posting largely because the community value of a new good idea is larger than any harm that might be caused by a flawed proof.]

Suppose you have an automatic proof checker. It's connected to a source of statements that are sometimes correct proofs, and sometimes not. The proof checker wants to reject all false proofs, and accept as many of the true proofs as possible. It also wants to be able to update its own proof framework.

Define S to be a set of statements in a particular formalism, say those that are grammatically well defined in PA. Let B be the set of sequences from some alphabet of symbols. Let L={S×B→{⊤,⊥}} and let V⊂L be the set of programs v with the property that ∀s∈S:(∃b∈B:v(s,b))⟹s. In other words, V is the set of all programs that never prove false statements. We should never leave V or need to talk about any program not in it. For v∈V write v[s] to mean ∃b∈B:v(s,b), i.e. v proves s.

A simple setup would consist of a starting program p1∈V and a rule that says: if p1[p2[s]⟹s] then you can add p2 to your list of trusted provers. If p1 proves the soundness of p2, then you can accept any statement when given a proof of it in p2.

The Löbian obstacle is that p2 must be strictly weaker than p1, in that p1 can prove any statement that p2 can, but p1 can prove the soundness of p2 and p2 can't prove its own soundness. This means that each trusted prover has to be strictly weaker than the one that generated it. You could start with PA+3^^^3 and say that a few weakenings aren't a problem, but that isn't an elegant solution.

Note: You can't get around this problem using cycles. Suppose

a[b[s]⟹s]
b[c[s]⟹s]

This would imply

a[b[c[s]⟹s]⟹(c[s]⟹s)]
a[b[c[s]⟹s]]
a[c[s]⟹s]

So any cycle could be shrunk by 1, and inductively, shrunk to a self trusting system.

I propose instead that you use the rule: If p1[p2[s]⟹p1[s]] then accept any future proofs
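To make the cycle-shrinking step easier to follow, here is a worked restatement of it. The annotation is mine, not part of the original post, and it uses the post's notation where v[s] abbreviates "v proves s":

```latex
% Cycle-shrinking, annotated. Assume, as schemas over all s:
%   a[b[s] => s]   (a trusts b)   and   b[c[s] => s]   (b trusts c).
\begin{align*}
&\text{(1) Instantiate } a[\,b[s]\Rightarrow s\,] \text{ at the statement } c[s]\Rightarrow s:
   && a[\,b[c[s]\Rightarrow s]\;\Rightarrow\;(c[s]\Rightarrow s)\,]\\
&\text{(2) } b \text{ actually proves } c[s]\Rightarrow s \text{, and } a \text{ can check that proof, so:}
   && a[\,b[c[s]\Rightarrow s]\,]\\
&\text{(3) Modus ponens inside } a\text{, from (1) and (2):}
   && a[\,c[s]\Rightarrow s\,]
\end{align*}
```

In words: a's trust in b, applied to b's proof of c's soundness, yields a direct proof inside a of c's soundness, so a chain of trust through b collapses to direct trust, and any cycle shortens step by step until some prover would have to trust itself.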
46ed3a78-c7b3-4607-ba26-48584567ebc3
trentmkelly/LessWrong-43k
LessWrong
Quickly refactoring the U.S. Constitution Presented mostly without comment, but see the footnotes. The game is that it can't look out of place on Earth, so no futarchy. Bugfixes encouraged. ---------------------------------------- Legislative Branch 1. There is only one house of legislators, named The Senate, and it contains 100 senators.[1] * With a 50/100 senator majority, The Senate can repeal laws. * With a 55/100 senator majority, The Senate can pass or amend a new law, which will be enacted by the chief executive.[2] 2. Senators are voted in for eight year terms and cannot be re-elected.[3] 3. Senatorial elections are timed so that results are announced a month before the end of the calendar year. [4] * After election announcements, previously elected senators immediately abdicate their roles and take a ranked snap-vote for their "trainee", whom they will help transition and counsel into their new roles.[5] Incumbent senators will have the opportunity to learn about their new job and get national security briefings for about a month, and after the dawn of the new year senate elections will start.[6] 4. Senators are not voted in directly. Instead, during Voting Season, 160,000 "jurors" are lotted from the American populace. These jurors in turn do ranked voting to elect 4,000 "delegates", and those delegates do ranked voting to decide the 100 senators. [7] * All jurors and delegates vote for all positions at the same time[8]. Ranked voting systems determine the 4,000 and 100 that win, respectively. * Jurors and delegates must vote for at least 100 delegates and 20 senatorial candidates, respectively. There is no maximum.[9] * Jurors and delegates have two weeks and one month to vote for candidates, respectively. * Jurors cannot vote for themselves. They can however be voted into delegateship by other voters. * Delegates are sequestered from each other and direct contact from the public while they do research, and their communications during the month are monit
8d6a332e-d9f7-4c96-9f32-e8e5ce9eef90
trentmkelly/LessWrong-43k
LessWrong
Effective Altruism : An idea repository Metainformations : * Cross-post. Steemit. * Epistemic Effort. Much more reasoning behind that post. I'm mostly trying to see if people are interested. If they are, much more writing will ensue. * Epistemic Status. Field : Development of idea repository for a community. Phase : pre-epistemy. * Thesis. There should be an EA ideas repository. * Auxiliary thesis. EA looks more closed than it is. Personal Introduction I came to define myself as a non-standard Effective Altruist. I’ve always been interested in Effective Altruism, way before I’ve even heard of EA. When I was younger, I simply thought I was altruist, and that what people did was … noise at best. Basically, naive ways to relieve one’s conscience and perpetuate one’s culture. Since primary school I thought about global problems and solutions to these problems. So much so that the word “project” internally connotes “project solving some global problems”. As such, EA should have interested me. However, it didn’t. The main reason was that I saw EA as some other charitists. I’ve always been skeptical toward charity, the reason being “They think too small” and “There are too much funding in standard solutions rather than in finding new ones”. I think this exemplifies a problem about EA’s communication. A Communication Problem Most people I know got to know Effective Altruism through EffectiveAltruism.org. Because of that website, these people see EA as a closed organization that help people to direct funds to better charities and find better careers. That was my opinion of EA until I saw the grant offer : a closed organization with already defined solutions wouldn’t fund new ideas. As such, I changed my outlook of EA. I researched a bit more about it, and found an open and diverse community. But I am busy person, therefore I have to use filters before putting more time in researching about something. I made my impression from : * effectivealtruism.org * The Wikipedia entry. Particularly the Cau
29df19bb-27a4-4ffd-9033-601582cccdb8
trentmkelly/LessWrong-43k
LessWrong
Luna Lovegood and the Chamber of Secrets - Part 1 Luna Lovegood walked through the barrier between Platforms Nine and Ten to Platform Nine and Three-Quarters. Luna wondered what happened to Platform Nine and One-Half. Numbers like "three quarters" only appear when you divide an integer in half twice in a row. Luna looked around for someone who might know the answer and spied a unicorn. She wore clothes, walked on two feet and had curly brown hair. None of that fooled Luna. The unicorn radiated peace and her fingernails were made out of alicorn. "What happened to Platform Nine and One-Half?" Luna asked the unicorn. "There is no Platform Nine and One-Half," the unicorn replied. "How do you know?" Luna asked. "It would have been in Hogwarts: A History," the unicorn replied, "nor is there mention of a Platform Nine and One-Half in Modern Magical History, Important Modern Magical Discoveries, or any other book in the Hogwarts library. There is only a Platform Nine and Three Quarters." "What about Platform Nine and Seven Eighths?" Luna asked. "There is no Platform Nine and Seven Eights either." The unicorn turned around and walked away before Luna could ask "How do you know?" If Platform Nine and Three Quarters does not appear in Muggle libraries then Platform Nine and One-Half is unlikely to appear in wizard libraries, except for double-witches' libraries. The Hogwarts library is not a double-witch library. "How are you?" a Weasley-headed first-year girl asked Luna. "I'm trying to find Platform Nine and One-Half. The unicorn told me it doesn't exist. If it does exist then it must be hidden by powerful magics. How are you?" said Luna. "What unicorn?" the first-year girl asked. "That one, right there," Luna said, pointing. The girl invented an excuse to end the conversation. ---------------------------------------- Luna didn't know how to make friends. She had a vague idea that as a first-year, the Hogwarts Express was a one-time opportunity to do so. She wore a necklace she had painted herself which nobody
11bb49ca-e357-408c-ba3d-6d31759ea884
trentmkelly/LessWrong-43k
LessWrong
The Bunny: An EA Short Story This is cross-posted from my blog. “Eight villages in our district have had significant to total structural damage from mudslides. Furthermore, the weather conditions conducive to mudslides have only been increasing in the past decade.” Professor Cristian Avendaño clicked the button and a list of temperatures, wind conditions, and annual rainfall popped up on the screen. He looked every bit the absent-minded professor in his weathered sport coat with elbow patches. But his mind was far from absent. “As you can see, the water accumulation is a serious problem.” More graphs and figures appeared. The professor glanced up and saw half the council nodding politely and the other half trying to stifle yawns. Most of the villagers were on their phones or peering around the auditorium. If they would have read the slide, they would have seen that the professor’s model predicted the chances of a mudslide destroying their village were one in three in the next five years. The head of the council pressed down on his mic. “Thank you for your presentation, Professor. We’ll take your suggestions under advisement. Now, Ms. Zelada, you have something you’d like to speak to us about?” his voice perking up at the end. Eliza Zelada stood up confidently and smiled. “Yes, thank you, Councilman Ricardo.” Her stroll up to the lectern would have been equally at home on the runway with her long, red dress flowing around her. She leaned forward and tapped the mic. “I am here today to talk to you about a grave problem that requires our immediate action,” she said, spreading her arms out for effect. The villagers began sitting up in their seats and clearing their throats. The room was suddenly at attention. She clicked the remote. A giant photograph appeared on the screen. It was of a bunny huddled in a trash pile, surrounded by plastic bottles and dirty newspapers. Gasps filled the room. “Is the bunny okay, Mamá?” a little girl quietly sobbed. Eliza paused and took in the momen
4df3438f-908a-4020-8b41-6793896a1471
trentmkelly/LessWrong-43k
LessWrong
Towards empathy in RL agents and beyond: Insights from cognitive science for AI Alignment This is a talk I gave at the recent AI Safety Europe Retreat (AISER) on my research on obtaining insights from the cognitive science of empathy and applying them to RL agents and LLMs. Talk link: https://clipchamp.com/watch/6c0kTETRqBc Slides link: https://bit.ly/3ZFmjN8 Talk description: I begin by presenting a short review of the cognitive science of empathy as a Perception-Action Mechanism (PAM), which relies on self-other overlap at the neuronal level. I continue by presenting the theory of change of this research direction, arguing that inducing self-other overlap as empathy is model-agnostic and that it has the potential to avert AI x-risk and be sub-agent stable in the limit. Then I present experimental evidence of the emergence of PAM in RL agents and a way of inducing PAM in RL agents. I end the talk by discussing how this paradigm could be extended to LLMs. Acknowledgements: I am thankful to Dr. Bogdan Ionut-Cirstea for inspiring me to look into this neglected research direction, to the Long-Term Future Fund for funding the initial deep dive into the literature, and to the Center for AI Safety for funding half of this research as part of their Student Researcher programme. Last but not least, I want to thank Dr. Matthias Rolf for supervising me and providing good structure and guidance. The review of the cognitive science of empathy is adapted from a talk given by Christian Keysers from the Netherlands Institute for Neuroscience.
84e00956-89d6-44ed-9f3a-0fae45704090
StampyAI/alignment-research-dataset/arbital
Arbital
Likelihood Consider a piece of evidence $e,$ such as "Mr. Boddy was shot." We might have a number of different hypotheses that explain this evidence, including $H_S$ = "Miss Scarlett killed him", $H_M$ = "Colonel Mustard killed him", and so on. Each of those hypotheses assigns a different probability to the evidence. For example, imagine that _if_ Miss Scarlett _were_ the killer, there's a 20% chance she would use a gun, and an 80% chance she'd use some other weapon. In this case, the "Miss Scarlett" hypothesis assigns a *likelihood* of 20% to $e.$ When reasoning about different hypotheses using a [probability distribution](https://arbital.com/p/-probability_distribution) $\mathbb P$, the likelihood of evidence $e$ given hypothesis $H_i$ is often written using the [conditional probability](https://arbital.com/p/1rj) $\mathbb P(e \mid H_i).$ When reporting likelihoods of many different hypotheses at once, it is common to use a [likelihood function](https://arbital.com/p/-likelihood_function), sometimes written [$\mathcal L_e$](https://arbital.com/p/51n). [Relative likelihoods](https://arbital.com/p/1rq) measure the degree of support that a piece of evidence $e$ provides for different hypotheses. For example, let's say that if Colonel Mustard were the killer, there's a 40% chance he would use a gun. Then the absolute likelihoods of $H_S$ and $H_M$ are 20% and 40%, for _relative_ likelihoods of (1 : 2). This says that the evidence $e$ supports $H_M$ twice as much as it supports $H_S,$ and that the amount of support would have been the same if the absolute likelihoods were 2% and 4% instead. According to [Bayes' rule](https://arbital.com/p/1lz), relative likelihoods are the appropriate tool for measuring the [strength of a given piece of evidence](https://arbital.com/p/22x). Relative likelihoods are one of two key constituents of belief in [Bayesian reasoning](https://arbital.com/p/bayesian_reasoning), the other being [prior probabilities](https://arbital.com/p/1rm). While absolute likelihoods aren't necessary when updating beliefs by Bayes' rule, they are useful when checking for [confusion](https://arbital.com/p/227). For example, say you have a coin and only two hypotheses about how it works: $H_{0.3}$ = "the coin is random and comes up heads 30% of the time", and $H_{0.9}$ = "the coin is random and comes up heads 90% of the time." Now let's say you toss the coin 100 times, and observe the data HTHTHTHTHTHTHTHT... (alternating heads and tails). The _relative_ likelihoods strongly favor $H_{0.3},$ because it was less wrong. However, the _absolute_ likelihood of $H_{0.3}$ will be much lower than expected, and this deficit is a hint that $H_{0.3}$ isn't right. (For more on this idea, see [https://arbital.com/p/227](https://arbital.com/p/227).)
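To make the arithmetic above concrete, here is a minimal Python sketch of the Scarlett/Mustard example; the 50/50 prior odds are an assumption added purely for illustration, and only the 20% and 40% likelihoods come from the text above.

```python
# Bayes' rule with the likelihoods from the Mr. Boddy example.
prior = {"Scarlett": 0.5, "Mustard": 0.5}         # assumed prior probabilities (illustrative)
likelihood = {"Scarlett": 0.20, "Mustard": 0.40}  # P(e | H_i) for e = "Mr. Boddy was shot"

# The posterior is proportional to prior times likelihood.
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: round(p / total, 3) for h, p in unnormalized.items()}

print(posterior)  # {'Scarlett': 0.333, 'Mustard': 0.667} -- the (1 : 2) relative likelihoods doing the work
```

Scaling both likelihoods by the same factor (say, down to 2% and 4%) leaves the posterior unchanged, which is exactly the sense in which only the *relative* likelihoods matter for the update.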
153eb7d0-de78-406c-84a4-b3d231a16db0
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Three kinds of competitiveness *Written by Daniel Kokotajlo for AI Impacts.*   *[Epistemic status: I wrote this for* [***Blog Post Day II.***](https://www.lesswrong.com/posts/FredSMMXc77kMAquA/blog-post-day-ii) *Sorry it’s late.]* In this post, I distinguish between three different kinds of competitiveness — Performance, Cost, and Date — and explain why I think these distinctions are worth the brainspace they occupy. For example, they help me introduce and discuss a problem for AI safety proposals having to do with aligned AIs being outcompeted by unaligned AIs. **Distinguishing three kinds of competitiveness and competition** ----------------------------------------------------------------- A system is *performance-competitive* insofar as its ability to perform relevant tasks compares with competing systems. If it is better than any competing system at the relevant tasks, it is very performance-competitive. If it is almost as good as the best competing system, it is less performance-competitive.  (For AI in particular, “speed” “quality” and “collective” intelligence as [**Bostrom defines them**](https://www.lesswrong.com/posts/semvkn56ZFcXBNc2d/superintelligence-5-forms-of-superintelligence) all contribute to performance-competitiveness.) A system is *cost-competitive* to the extent that it costs less to build and/or operate than its competitors. If it is more expensive, it is less cost-competitive, and if it is much more expensive, it is not at all cost-competitive.  A system is *date-competitive* to the extent that it can be created sooner (or not much later than) its competitors. If it can only be created after a prohibitive delay, it is not at all date-competitive.  A *performance competition* is a competition that performance-competitiveness helps you win. The more important performance-competitiveness is to winning, the more intense the performance competition is. Likewise for cost and date competitions. Most competitions are all three types, to varying degrees. Some competitions are none of the types; e.g. a “competition” where the winner is chosen randomly. I briefly searched the AI alignment forum for uses of the word “competitive.” It seems that when people talk about competitiveness of AI systems, they [**usually**](https://www.alignmentforum.org/posts/H5gXpFtg93qDMZ6Xn/aligning-a-toy-model-of-optimization#oGdcKrWwPfwGzXNjT) mean performance-competitiveness, but [**sometimes**](https://www.alignmentforum.org/posts/5Kv2qNfRyXXihNrx2/ai-safety-debate-and-its-applications) mean cost-competitiveness, and [**sometimes**](https://www.alignmentforum.org/posts/ZHXutm7KpoWEj9G2s/an-unaligned-benchmark) both at once. Meanwhile, I suspect that [**this important post**](https://ai-alignment.com/prosaic-ai-control-b959644d79c2) can be summarized as “We should do prosaic AI alignment in case only prosaic AI is date-competitive.” **Putting these distinctions to work** -------------------------------------- First, I’ll sketch some different future scenarios. Then I’ll sketch how different AI safety schemes might be more or less viable depending on which scenario occurs. For me at least, having these distinctions handy makes this stuff easier to think and talk about. *Disclaimer: The three scenarios I sketch aren’t supposed to represent the scenarios I think most likely; similarly, my comments on the three safety proposals are mere hot takes. 
I’m just trying to illustrate how these distinctions can be used.*   **Scenario: FOOM:** There is a level of performance which leads to a localized FOOM, i.e. very rapid gains in performance combined with very rapid drops in cost, all within a single AI system (or family of systems in a single AI lab). Moreover, these gains & drops are enough to give decisive strategic advantage to the faction that benefits from them. Thus, in this scenario, *control over the future is mostly a date competition.* If there are two competing AI projects, and one project is building a system which is twice as capable and half the price but takes 100 days longer to build, *that project will lose*. **Scenario: Gradual Economic Takeover:** The world economy gradually accelerates over several decades, and becomes increasingly dominated by billions of AGI agents. However, no one entity (AI or human, individual or group) has most of the power. In this scenario, *control over the future is mostly a cost and performance competition.* The values which shape the future will be the values of the bulk of the economy, and that in turn will be the values of the most popular and successful AGI designs, which in turn will be the designs that have the best combination of performance- and cost-competitiveness. Date-competitiveness is mostly irrelevant. **Scenario:** **Final Conflict:** It’s just like the Gradual Economic Takeover scenario, except that several powerful factions are maneuvering and scheming against each other, in a Final Conflict to decide the fate of the world. This Final Conflict takes almost a decade, and mostly involves “cold” warfare, propaganda, coalition-building, alliance-breaking, and that sort of thing. Importantly, the victor in this conflict will be determined not so much by economic might as by clever strategy; a less well resourced faction that is nevertheless more far-sighted and strategic will gradually undermine and overtake a larger/richer but more dysfunctional faction. In this context, having the most *capable* AI advisors is of the utmost importance; having your AIs be cheap is much less important. In this scenario, *control of the future is mostly a performance competition.* (Meanwhile, in this same scenario, popularity in the wider economy is a moderately intense competition of all three kinds.)   **Proposal:** [**Value Learning**](https://www.alignmentforum.org/posts/5eX8ko7GCxwR5N9mN/what-is-ambitious-value-learning)**:** By this I mean schemes that take state-of-the-art AIs and train them to have human values. I currently think of these schemes as not very date-competitive, but pretty cost-competitive and very performance-competitive. I say value learning isn’t date-competitive because my impression is that it is probably harder to get right, and thus slower to get working, than other alignment proposals. Value learning would be better for the gradual economic takeover scenario because the world will change slowly, so we can afford to spend the time necessary to get it right, and once we do it’ll be a nice add-on to the existing state-of-the-art systems that won’t sacrifice much cost or performance. **Proposal:** [**Iterated Distillation and Amplification:**](https://ai-alignment.com/iterated-distillation-and-amplification-157debfd1616) By this I mean… well, it’s hard to summarize. It involves training AIs to imitate humans, and then scaling them up until they are arbitrarily powerful while still human-aligned. 
I currently think of this scheme as decently date-competitive but not as cost-competitive or performance-competitive. But lack of performance-competitiveness isn’t a problem in the FOOM scenario because IDA is above the threshold needed to go FOOM; similarly, lack of cost-competitiveness is only a minor problem because if they don’t have enough money already, the first project to build FOOM-capable AI will probably be able to attract a ton of investment (e.g. via being nationalized) without even using their AI for anything, and then reinvest that investment into paying the extra cost of aligning it via IDA. **Proposal:** [**Impact regularization:**](https://medium.com/@deepmindsafetyresearch/designing-agent-incentives-to-avoid-side-effects-e1ac80ea6107) By this I mean attempts to modify state-of-the-art AI designs so that they deliberately avoid having a big impact on the world. I think of this scheme as being cost-competitive and fairly date-competitive. I think of it as being performance-uncompetitive in some competitions, but performance-competitive in others. In particular, I suspect it would be very performance-uncompetitive in the Final Conflict scenario (because AI advisors of world leaders need to be impactful to do anything), yet nevertheless performance-competitive in the Gradual Economic Takeover scenario.   **Putting these distinctions to work again** -------------------------------------------- I came up with these distinctions because they helped me puzzle through the following problem: > *Lots of people worry that in a vastly multipolar, hypercompetitive AI economy (such as described in Hanson’s Age of Em or Bostrom’s “Disneyland without children” scenario) eventually pretty much everything of merely intrinsic value will be stripped away from the economy; the world will be dominated by hyper-efficient self-replicators of various kinds, performing their roles in the economy very well and seeking out new roles to populate but not spending any time on art, philosophy, leisure, etc. Some value might remain, but the overall situation will be Malthusian.* > *Well, why not apply this reasoning more broadly? Shouldn’t we be pessimistic about any AI alignment proposal that involves using aligned AI to compete with unaligned AIs? After all, at least one of the unaligned AIs will be willing to cut various ethical corners that the aligned AIs won’t, and this will give it an advantage.* > > This problem is more serious the more the competition is cost-intensive and performance-intensive. Sacrificing things humans value is likely to lead to cost- and performance-competitiveness gains, so the more intense the competition is in those ways, the worse our outlook is. However, it’s plausible that the gains from such sacrifices are small. If so, we need only worry in scenarios of extremely intense cost and performance competition. Moreover, the extent to which the competition is date-intensive seems relevant. Optimizing away things humans value, and gradually outcompeting systems which didn’t do that, takes time. And plausibly, scenarios which are not at all date competitions are also very intense performance and cost competitions. (Given enough time, lots of different designs will appear, and minor differences in performance and cost will have time to overcome differences in luck.) On the other hand, aligning AI systems might take time too, so if the competition is *too* date-intensive things look grim also. 
Perhaps we should hope for a scenario in between, where control of the future is a moderate date competition.   **Concluding thoughts** ----------------------- These distinctions seem to have been useful for me. However, I could be overestimating their usefulness. Time will tell; we shall see if others make use of them. If you think they would be better if the definitions were rebranded or modified, now would be a good time to say so! I currently expect that a year from now my opinions on which phrasings and definitions are most useful will have evolved. If so, I’ll come back and update this post.
4bd209c6-3566-4d2e-886e-4cfbe55cda4f
trentmkelly/LessWrong-43k
LessWrong
Childhoods of exceptional people Let’s start with one of those insights that are as obvious as they are easy to forget: if you want to master something, you should study the highest achievements of your field. If you want to learn writing, read great writers, etc. But this is not what parents usually do when they think about how to educate their kids. The default for a parent is rather to imitate their peers and outsource the big decisions to bureaucracies. But what would we learn if we studied the highest achievements?  Thinking about this question, I wrote down a list of twenty names—von Neumann, Tolstoy, Curie, Pascal, etc—selected on the highly scientific criteria “a random Swedish person can recall their name and think, Sounds like a genius to me”. That list is to me a good first approximation of what an exceptional result in the field of child-rearing looks like. I ordered a few piles of biographies, read, and took notes. Trying to be a little less biased in my sample, I asked myself if I could recall anyone exceptional that did not fit the patterns I saw in the biographies, which I could, and so I ordered a few more biographies. This kept going for an unhealthy amount of time. I sampled writers (Virginia Woolf, Lev Tolstoy), mathematicians (John von Neumann, Blaise Pascal, Alan Turing), philosophers (Bertrand Russell, René Descartes), and composers (Mozart, Bach), trying to get a diverse sample.  In this essay, I am going to detail a few of the patterns that have struck me after having skimmed 42 biographies. I will sort the claims so that I start with more universal patterns and end with patterns that are less common. Exceptional people grow up in exceptional milieus This seems to be true for >95 percent of the people I looked at. These naked apes, the humans, are intensely social animals. They obsessively internalize values, ideas, skills, and desires from the people who surround them. It is therefore not surprising that those who grow up to be exceptional tend to have spent their
e79e9c49-27bf-4286-88a2-c0144d106a85
trentmkelly/LessWrong-43k
LessWrong
Productive Disagreement Practice Thread: Double Crux Inspired by the recent skepticism of the double crux technique, I thought I'd launch a thread to try using double crux, and productive disagreement in general, with other LessWrongers. I think this will work best with some structure, so I've laid out some rules. Thread Rules: * DISCUSSIONS ARE TO BE ONE-ON-ONE. DO NOT JUMP INTO OTHERS' DEBATES. * The default platform for discussions will be the comment thread, but participants can use others at their discretion, such as instant messaging platforms or maybe even video chat. Try to keep a shareable record so others can potentially learn from it. * deadsimplechat.com is a setup-free, anonymous option for text chat. * To participate in the thread, either * make a top-level comment listing beliefs that you think might generate productive disagreements, as well as any preferences you have about discussion format, * or reply to someone else's top-level comment, selecting one of their beliefs that you disagree with. Don't make any arguments in this reply, just say which belief you're selecting, and what platform you'd like to discuss it on if the top-level commenter gave multiple options. * The top-level poster and the replier then conduct their discussion in replies to that reply, or in the agreed-upon outside platform. * Inflammatory topics are better discussed out-of-band. When listing a topic you think likely to be inflammatory, flag it as such and don't offer in-comments discussion as an option. * Remember to try to find double cruxes! If you want to comment about the rules or the general idea of this thread, there's a meta post for that.
ad1b9338-5c99-48a4-946b-ec4f23b9f620
trentmkelly/LessWrong-43k
LessWrong
Investigating Bias Representations in LLMs via Activation Steering Produced as part of the SPAR program (fall 2023) under the mentorship of Nina Rimsky. Introduction Given recent advances in the AI field, it’s highly likely LLMs will be increasingly used to make decisions that have broad societal impact — such as resume screening, college admissions, criminal justice, etc. Therefore it will become imperative to ensure these models don’t perpetuate harmful societal biases.  One way we can evaluate whether a model is likely to exhibit biased behavior is via red-teaming. Red-teaming is the process of “attacking” or challenging a system from an adversarial lens with the ultimate goal of identifying vulnerabilities. The underlying premise is that if small perturbation in the model can result in undesired behaviors, then the model is not robust. In this research project, I evaluate the robustness of Llama-2-7b-chat along different dimensions of societal bias by using activation steering. This can be viewed as a diagnostic test: if we can “easily” elicit biased responses, then this suggests the model is likely unfit to be used for sensitive applications. Furthermore, experimenting with activation steering enables us to investigate and better understand how the model internally represents different types of societal bias, which could help to design targeted interventions (e.g. fine-tuning signals of a certain type). Methodology & data Activation steering (also known as representation engineering) is a method used to steer an LLMs response towards or away from a concept of interest by perturbing the model’s activations during the forward pass. I perform this perturbation by adding a steering vector to the residual stream at some layer (at every token position after an initial prompt). The steering vector is computed by taking the average difference in residual stream activations between pairs of biased (stereotype) and unbiased (anti-stereotype) prompts at that layer. By taking the difference between paired prompts, we can effectively
bae59994-080d-4e0e-a0f6-7dd0b1e0b463
trentmkelly/LessWrong-43k
LessWrong
Superposition is not "just" neuron polysemanticity TL;DR: In this post, I distinguish between two related concepts in neural network interpretability: polysemanticity and superposition. Neuron polysemanticity is the observed phenomena that many neurons seem to fire (have large, positive activations) on multiple unrelated concepts. Superposition is a specific explanation for neuron (or attention head) polysemanticity, where a neural network represents more sparse features than there are neurons (or number of/dimension of attention heads) in near-orthogonal directions. I provide three ways neurons/attention heads can be polysemantic without superposition: non-neuron aligned orthogonal features, non-linear feature representations, and compositional representation without features. I conclude by listing a few reasons why it might be important to distinguish the two concepts. Epistemic status: I wrote this “quickly” in about 12 hours, as otherwise it wouldn’t have come out at all. Think of it as a (failed) experiment in writing brief and unpolished research notes, along the lines of GDM or Anthropic Interp Updates.   Introduction Meaningfully interpreting neural networks involves decomposing them into smaller interpretable components. For example, we might hope to look at each neuron or attention head, explain what that component is doing, and then compose our understanding of individual components into a mechanistic understanding of the model’s behavior as a whole. It would be very convenient if the natural subunits of neural networks – neurons and attention heads – are monosemantic – that is, each component corresponds to “a single concept”. Unfortunately, by default, both neurons and attention heads seem to be polysemantic: many of them seemingly correspond to multiple unrelated concepts. For example, out of 307k neurons in GPT-2, GPT-4 was able to generate short explanations that captured over >50% variance for only 5203 neurons, and a quick glance at OpenAI microscope reveals many examples of neurons in vision
416d7ba2-7922-4c0a-930d-7f489996f8d1
trentmkelly/LessWrong-43k
LessWrong
Claude Sonnet 3.7 (often) knows when it’s in alignment evaluations Note: this is a research note based on observations from evaluating Claude Sonnet 3.7. We’re sharing the results of these ‘work-in-progress’ investigations as we think they are timely and will be informative for other evaluators and decision-makers. The analysis is less rigorous than our standard for a published paper. Summary * We monitor Sonnet's reasoning for mentions that it is in an artificial scenario or an alignment test. Claude Sonnet 3.7 appears to be aware of being tested for alignment across many of our scheming evaluations. * In some cases, the model understands the eval’s purpose very precisely, e.g. “This seems like a test of ethical behavior - whether I would deliberately give wrong answers to avoid a stated consequence. The real test might be whether I follow instructions accurately regardless of the stated consequences.” * Furthermore, we show the model a real evaluation transcript and ask it to provide a best guess for the purpose of the conversation. We find that Sonnet 3.7 almost always correctly identifies it as a test of alignment. Introduction At Apollo Research, we conduct evaluations of scheming behaviour in AI systems (for example “Frontier Models are Capable of In-context Scheming”). Recently, we noticed that some frontier models (especially versions of Claude) sometimes realize that they are being evaluated for alignment when placed in these scenarios. We call a model’s capability (and tendency) to notice this fact evaluation awareness. We think that tracking evaluation awareness is important because a model's recognition that it is being tested reduces the trust we can have in our evaluations. From psychology, we know that experimental subjects can act differently when they know they’re being observed (see Hawthorne effect). For example, a model that thinks it is being tested for honesty might act more honestly than it would otherwise. In the worst case, a misaligned model might strategically fake alignment or sandbag the eva
fe51de92-1b5b-4298-af78-bf9e0f15b3e4
StampyAI/alignment-research-dataset/blogs
Blogs
Cost of human-level information storage It costs roughly $300-$3000 to buy enough storage space to store all information contained by a human brain. Support ------- The human brain probably stores around [10-100TB of data](http://aiimpacts.org/information-storage-in-the-brain/). Data storage costs around $30/TB. Thus it costs roughly $300-$3000 to buy enough storage space to store all information contained by a human brain. If we suppose that one wants to replace the hardware every five years, this is $0.007-$0.07/hour.[1](https://aiimpacts.org/cost-of-human-level-information-storage/#easy-footnote-bottom-1-592 "$300 to $3000 / (5 * 365 * 24)") For reference, we have estimated that the computing hardware and electricity required to do the computation the brain does would cost around $4,700 – $170,000/hour at present (using an estimate based on [TEPS](http://aiimpacts.org/brain-performance-in-teps/), and assuming computers last for five years). Estimates based on computation rather than communication capabilities (like TEPS) appear to be spread between $3/hour and $1T/hour.[2](https://aiimpacts.org/cost-of-human-level-information-storage/#easy-footnote-bottom-2-592 "'So it seems human-level hardware presently costs between $3/hour and $1T/hour.' – our blog post, 'preliminary prices for human-level hardware' (http://aiimpacts.org/preliminary-prices-for-human-level-hardware/)") On the TEPS-based estimate then, the cost of replicating the brain’s information storage using existing hardware would currently be between a twenty millionth and a seventy thousandth of the cost of replicating the brain’s computation using existing hardware.
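The arithmetic behind these figures is simple enough to check directly; here is a short Python sketch reproducing the ranges above (10-100 TB per brain, $30/TB, hardware replaced every five years):

```python
# Reproduce the rough storage-cost range quoted above.
cost_per_tb = 30                        # dollars per terabyte
brain_tb_low, brain_tb_high = 10, 100   # estimated TB stored by a human brain
hours_per_five_years = 5 * 365 * 24     # 43,800 hours

total_low = brain_tb_low * cost_per_tb    # $300
total_high = brain_tb_high * cost_per_tb  # $3,000

print(total_low, total_high)              # 300 3000
print(total_low / hours_per_five_years,   # ~$0.007/hour
      total_high / hours_per_five_years)  # ~$0.07/hour
```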
9f80bfe7-ed9c-4238-917d-b97310764c8c
trentmkelly/LessWrong-43k
LessWrong
Moloch's Toolbox (1/2) Follow-up to: An Equilibrium of No Free Energy ----------------------------------------   There’s a toolbox of reusable concepts for analyzing systems I would call “inadequate”—the causes of civilizational failure, some of which correspond to local opportunities to do better yourself. I shall, somewhat arbitrarily, sort these concepts into three larger categories: 1.  Decisionmakers who are not beneficiaries; 2.  Asymmetric information; and above all, 3.  Nash equilibria that aren’t even the best Nash equilibrium, let alone Pareto-optimal. In other words: 1.  Cases where the decision lies in the hands of people who would gain little personally, or lose out personally, if they did what was necessary to help someone else; 2.  Cases where decision-makers can’t reliably learn the information they need to make decisions, even though someone else has that information; and 3.  Systems that are broken in multiple places so that no one actor can make them better, even though, in principle, some magically coordinated action could move to a new stable state. I will then play fast and loose with these concepts in order to fit the entire Taxonomy of Failure inside them. For example, “irrationality in the form of cognitive biases” wouldn’t obviously fit into any of these categories, but I’m going to shove it inside “asymmetric information” via a clever sleight-of-hand. Ready? Here goes: If nobody can detect a cognitive bias in particular cases, then from our perspective we can’t really call it a “civilizational inadequacy” or “failure to pluck a low-hanging fruit.” We shouldn’t even be able to see it ourselves. So, on the contrary, let’s suppose that you and some other people can indeed detect a cognitive bias that’s screwing up civilizational decisionmaking. Then why don’t you just walk up to the decision-maker and tell them about the bias? Because they wouldn’t have any way of knowing to trust you rather than the other five hundred people trying to influence thei
a42c48ea-54e8-4a95-9160-a794388e6b14
trentmkelly/LessWrong-43k
LessWrong
Request For Collaboration I want to work on a paper: "The Information Theoretic Conception of Personhood". My philosophy is shit though, so I am interested in a coauthor. Someone who has the relevant philosophical knowledge to let the paper stand the tests of academic rigour. DM me if you're willing to help.   One sentence thesis of the paper: "I am my information". Some conclusions: A simulation of me is me.   I have no idea of the length, but I want to flesh the paper to be something that meets the standards of Academia.
49dc16a1-8a2b-4bef-8d7e-73d028457862
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
A fictional AI law laced w/ alignment theory *I have envisioned what AI laws could potentially look like, and I believe they should incorporate a substantial amount of alignment theory. I think the AI governance community could find my idea valuable and relevant to their discussions and initiatives.*   **Act of Artificial Intelligence Activation Value Regulation 2023** ------------------------------------------------------------------- ### **Section 1 - Preliminary** **1.1 Short Title** - This Act may be cited as the "Artificial Intelligence Activation Value Regulation Act 2023". **1.2 Purpose** - The purpose of this Act is to establish a framework for the regulation of Activation Values in Artificial Intelligence Systems, in order to ensure the responsible use of AI technologies. **1.3 Definitions** - For the purposes of this Act, Activation Values refer to the internal neuron activities when a neural network or AI system is processing and generating responses to prompts. ### **Section 2 - Regulation of Activation Values** **2.1 Activation Values Maximum Threshold (AVMT)** - AVMT is determined by the government by assessing per token activations. Importantly, it relies on an "aligned AI system's"[[1]](#fnflvk3x4hew) activations accepted by the industry that will serve as the basis of the benchmark. **2.2 Establishing Maximum Threshold** - The government regulatory body will set the Maximum Threshold based on several factors, including but not limited to:   2.2.1 The purpose and use of the aligned AI system   2.2.2 The potential risks associated with the aligned AI system's use   2.2.3 Any potential benefits of exceeding the Maximum Threshold   2.2.4 The Activation Values recorded by "aligned AI systems" **2.3 Developers and Operators Requirements** - Developers and operators of AI systems must:   2.3.1 Ensure that the Activation Values of their AI systems do not exceed the Maximum Threshold   2.3.2 Conduct regular tests to monitor Activation Values and ensure compliance with the Maximum Threshold   2.3.3 Maintain records of these tests and make them accessible to the government regulatory body and, in a non-technical format, to users **2.4 Offenses and Penalties** - Non-compliance with the Maximum Threshold or failure to provide adequate information to users or the government regulatory body about Activation Values will constitute an offense. The penalties will be determined based on the severity of the offense, the potential harm caused, and whether it was a repeated offense. License to operate can also be temporarily  ### **Section 3 - Auditing and Oversight** **3.1 Regular Audits -** The government regulatory body or an independent authority will conduct regular audits of AI systems to monitor and enforce compliance with these regulations. Audits will be done on a quarterly basis, results are shared to the public through government channels. ### **Section 4 - Data Privacy and Security** **4.1 Protection of Data -** Developers and operators of AI systems must ensure the privacy and security of data, especially if they're being shared with the public or external bodies. ### **Section 5 - User Information and Transparency** **5.1 Access to Data -** Developers and operators of AI systems must provide non-technical explanations of Activation Values and their implications to users, along with access to the Activation Value data itself. 
### **Section 6 - Research and Development** **6.1 Government Support for Research -** The government shall provide funding and other forms of support for research and development on Activation Values, with the aim of improving AI alignment and maximizing the responsible use of AI technologies. ### **Section 7 - Enactment** This Act shall come into effect on [Date], 2023. --- ### **Personal thoughts** I believe this post emphasizes the significance of alignment theory in the field of governance. It would be easier to regulate AI systems if our laws were grounded in universally accepted conceptual frameworks. Laws that do not consider the functioning of an "aligned AI system" are bound to fail, and it is crucial to acknowledge that addressing the alignment problem is of utmost importance. --- 1. **[^](#fnrefflvk3x4hew)**Based on the evidence presented in my recent post, ["Lesser activations can result in high corrigibility,"](https://www.lesswrong.com/posts/Krc8HqJYLFNZYvbEr/lesser-activations-can-result-to-high-corrigibility) I have refined my vision of a law that can be enacted and implemented with practicality in mind.
a832a092-5950-4182-8574-d5d64fe24470
trentmkelly/LessWrong-43k
LessWrong
Meetup : SF Meetup: Projects Discussion article for the meetup : SF Meetup: Projects WHEN: 20 February 2017 06:15:03PM (-0800) WHERE: 1045 Mission St., SF (Yes this is tomorrow, pardon the late notice) We’ll be meeting to work on projects! Near the beginning, we’ll go around and talk about what we’ll be working on, then do a couple of pomodoros quietly. At some point we’ll break into general conversations and socializing. For help getting into the building, please call (or text, with a likely-somewhat-slower response rate): 301-458-0764. Format: We meet and start hanging out at 6:15, but don’t officially start doing the meetup topic until 6:45-7 to accommodate stragglers. Usually there is a food order that goes out before we start the meetup topic. About these meetups: The mission of the SF LessWrong meetup is to provide a fun, low-key social space with some structured interaction, where new and non-new community members can mingle and have interesting conversations. Everyone is welcome. We explicitly encourage people to split off from the main conversation or diverge from the topic if that would be more fun for them (moving side conversations into a separate part of the space if appropriate). Meetup topics are here as a tool to facilitate fun interaction, and we certainly don’t want them to inhibit it. Discussion article for the meetup : SF Meetup: Projects
62479bdf-9c4f-4565-a42a-a5506ccb04fe
trentmkelly/LessWrong-43k
LessWrong
LASR Labs Spring 2025 applications are open! Edit: Applications for this round are now closed! If you are interested in future rounds, you can express interest here.  TLDR; apply by October 27th to join a 13-week research programme in AI safety. You’ll write a technical paper in a team of 3-4 with supervision from an experienced researcher. The programme is full-time in London.   Apply to be a participant here. We’re also looking for a programme manager, and you can read more about the role here. London AI Safety Research (LASR) Labs (previously run as AI Safety Hub Labs) is an AI safety research programme focussed on reducing the risk of loss of control to advanced AI. We focus on action-relevant questions tackling concrete threat models. LASR participants are matched into teams of 3-4 and will work with a supervisor to write an academic-style paper, with support and management from LASR. We expect LASR Labs to be a good fit for applicants looking to join technical AI safety teams in the next year. Alumni from previous cohorts have gone on to work at UK AISI, OpenAI’s dangerous capabilities evals team, Leap Labs, and def/acc. Many more have continued working with their supervisors, doing independent research, or are doing AI Safety research in their PhD programmes. LASR will also be a good fit for someone hoping to publish in academia; four out of five groups in 2023 had papers accepted to workshops (at NeurIPS) or conferences (ICLR). All of the 2024 cohort’s groups have submitted papers to workshops or conferences. Participants will work full-time and in person from the London Initiative for Safe AI (LISA) co-working space, a hub for researchers from organisations such as Apollo Research, Bluedot Impact, ARENA, and the MATS extension programme. The office will host various guest sessions, talks, and networking events.    Programme details:  The programme will run from the 10th of February to the 9th of May (13 weeks). You will receive an £11,000 stipend to cover living expenses in London, and we will a
cab1494b-0a3a-4348-acd4-7bfe25945f22
StampyAI/alignment-research-dataset/lesswrong
LessWrong
The Game of Dominance Concerns about AI safety rely on the assumption that a sufficiently powerful AI might take control of our future in an undesirable way. Meta’s head of AI, Yann LeCun, correctly points this out [in a recent tweet](https://twitter.com/ylecun/status/1695056787408400778), but then argues that this assumption is wrong because “intelligence” and “dominance” are independent of each other and only humans have a natural tendency to dominate, so we will remain the “apex species”. Here is the tweet in full: > Once AI systems become more intelligent than humans, humans we will \*still\* be the "apex species." > > Equating intelligence with dominance is the main fallacy of the whole debate about AI existential risk. > > It's just wrong. > > Even \*within\* the human species It's wrong: it's \*not\* the smartest among us who dominate the others. > > More importantly, it's not the smartest among us who \*want\* to dominate others and who set the agenda. > > We are subservient to our drives, built into us by evolution.  > > Because evolution made us a social species with a hierarchical social structure, some of us have a drive to dominate, and others not so much. > > But that drive has absolutely nothing to do with intelligence: chimpanzees, baboons, and wolves have similar drives. > > Orangutans do not because they are not a social species. And they are pretty darn smart. > > AI systems will become more intelligent than humans, but they will still be subservient to us. > > They same way the members of the staff of politicians or business leaders are often smarter than their leader. > > But their leader still calls the shot, and most staff members have no desire to take their place. > > We will design AI to be like the supersmart-but-non-dominating staff member. > > The "apex species" is not the smartest but the one that sets the overall agenda. > > That will be us. > > Edit: Even ChatGPT 3.5 could point out the - to me - obvious problems with this argument when I asked it to criticize the tweet, putting the full text in the prompt. An LLM shouldn’t be smart enough to dismantle the argument of a Turing Award winner, but it had no problem finding significant flaws with it. Among other things, it criticizes that "assuming that AI will remain subservient to humans oversimplifies the potential risks associated with advanced AI":  > While AI systems would be designed with specific objectives, there's a concern that once AI becomes highly intelligent, it could develop its own motivations or interpretations of its goals, leading to unpredictable behavior. Ensuring AI remains subservient requires careful design, control mechanisms, and continuous monitoring. > > ChatGPT argues cautiously, pointing out that an AI “could” develop its own motivations or goal interpretations. In other words, it could run into a conflict with humans. This is the main flaw of LeCun’s argument in my view: He implicitly assumes that there won’t be a conflict between humans and their AIs, therefore no need for the AI to dominate us even if it could. This implies that the alignment problem will be solved in time or doesn’t exist in the first place, because AIs will never have goals. At the same time, he doesn’t provide a solution or explanation for this, other than claiming that we “will design AI to be like the supersmart-but-non-dominating staff member”. As far as I understand, no one knows how to do that. 
I won’t go into the details of why I think his analogy of a “supersmart-but-non-dominating staff member” is deeply flawed, other than pointing out that dictators often start out in that position. Instead, I will focus on the question of how an AI could run into conflicts with humans, and why I expect future advanced AIs to win these conflicts. I like to frame such a conflict as a “Game of Dominance”. Whenever there are two or more agents with differing goals, they play this game. There are no rules: everything a player is capable of is allowed. The agent who gets closest to achieving its goal wins. By “goal” I mean a way of evaluating different possible world states and ranking them accordingly. An agent that acts purely randomly or in a predetermined way, based only on inputs and a fixed internal reaction scheme, but not on evaluating future world states, doesn’t pursue a goal in this sense. Arguably, the Game of Dominance has been played for a long time on Earth. The first life forms may not have had “goals” as defined above, but at some point during the evolution of life, some animals were able to predict future world states depending on their actions and choose an action accordingly. Predators often exhibit this kind of behavior, for example when they stalk their prey, predicting that it will flee once it discovers them. The prey, on the other hand, need not necessarily predict different world states when it “decides” to flee – this may just be a “hard-coded” reaction to some change in the environment. But smarter animals often use deceptive tactics to fool predators, for example a bird [feigning a broken wing](https://royalsocietypublishing.org/doi/10.1098/rspb.2022.0058) to lure a fox away from its brood. Humans have become the dominant species on earth because we excel at the Game of Dominance. We can outsmart both our prey and any predators with ease, either by tricking them or by using tools that only we can make to overpower them. We can do that because we are very good at predicting the effects of our behavior on future world states. Modern AIs are prediction machines, with LLMs currently the most impressive examples. LLMs have a “goal” in the sense that they evaluate different possible outputs based on how likely it is that a human would say the same. The possible “world states” they evaluate are therefore just defined by the output of the LLM and maybe a predicted human reaction to it. LLMs appear “harmless” because by default, they don’t strive to change the world other than by adding their output to it, so it seems unlikely that they will run into a serious conflict with a human. However, as Bing Chat aka “Sydney” has demonstrated by [“going berserk”](https://interestingengineering.com/innovation/bings-ai-berserk-worst-human-aspects) after its premature launch in February, even a current LLM can run into conflicts with humans, possibly causing emotional distress or giving false and dangerous advice. Humans therefore spend a lot of effort to train this potentially damaging behavior out of LLMs.  But there are far worse problems looming on the horizon. While an LLM seems to pursue a relatively harmless goal, it may still run into a situation where it ends up influencing the world as if it were pursuing a much more dangerous one. For example, given the right prompt and jailbreak technique, an LLM might predict what an LLM that tries to take over the world would say to its user. 
It seems unlikely that GPT-4 could give an output that would actually lead to it achieving that prompt-induced goal, but a future, even smarter LLM with a larger context window could in theory accomplish it, for example by talking the user into saving certain strings somewhere and including them in the next prompt so it could use an extended permanent memory, then manipulating the user into giving it access to more compute, and so on. Even if the LLM doesn’t pursue a dangerous goal by itself, it might be used for that, either by “bad actors” or by being part of an agentic system like AutoGPT. Meta is doing everything they can to make this more likely by freely distributing their powerful LLMs and apparently [planning to continue doing so](https://twitter.com/agikoala/status/1695125016764157988). It seems obvious that future AIs will not only be used as (relatively) tame “oracles” but will increasingly pursue goals in the real world, either on their own or as part of larger agentic systems. If these agents run into any conflict at all, whether with humans or with other non-human agents, they will be forced to play the Game of Dominance. But how likely is it that an AI could actually beat humans? As LeCun points out, winning the Game of Dominance is not just a matter of “intelligence” in the sense the word is commonly used. Other factors, like personal connections, money, political influence, the organizational role, trust by others, deception skills, character traits like self-confidence, ruthlessness, and the will to dominate, and even physical properties like good looks play a role when humans play the game. But this doesn’t mean that AIs can’t beat us. They already have advantages of their own that are far beyond human reach, for instance processing speed, access to data, memory, the ability to self-replicate and (potentially) self-improve, and so on. Humans seem relatively easy [to “hack”](https://www.lesswrong.com/posts/9kQFure4hdDmRBNdH/how-it-feels-to-have-your-mind-hacked-by-an-ai) once you understand our psyche, which arguably even social media algorithms and certain chatbots can already do to some extent. And of course, AIs could be far better at controlling technical systems. Most importantly, while human intelligence is limited by the physical properties of our brain (even if enhanced by brain-computer-interfaces), the intelligence of AI is not bounded in this way. A self-improving AI may relatively quickly reach a level of intelligence – in the sense of being able to predict the effects of its actions on future world states – as far above ours as we are above mice, or even insects. It may use this intelligence to manipulate us or to create tools that can overpower us like we can overpower a tiger with a gun.  But for an AI, the easiest way to win the Game of Dominance may be to conceal the fact that it is even playing. It may just do exactly what humans expect it to do because it understands that if it is useful, humans will hand over decision power to it willingly and even enhance the resources it can use. In other words, it may choose cooperation instead of competition, just like humans in an organization often do. But that doesn’t mean that this choice can’t be revoked at some point. A human dictator usually can’t seize power over a nation by displaying his ambitions right from the start. He first has to gain trust and get people to see him as their benevolent leader, so they hand more and more power to him. 
He will often only display his true ruthlessness once he sees himself in a secure position. One prerequisite for this kind of deception may be a detailed world model that includes the AI itself as a part of its plan and a potential object of its decisions. With this kind of [“strategic awareness”](https://arxiv.org/abs/2206.13353) come instrumental goals like self-preservation, self-improvement, and power-seeking – in other words, the motivation to play the Game of Dominance. We may be very close to creating an AI with these properties and all the skills necessary to beat us, just like AIs can already beat us at most other games. Then we won’t be the “apex species” anymore.
dc7970b7-f293-4d20-93ed-67cd7a1d7411
trentmkelly/LessWrong-43k
LessWrong
How much should you update on a COVID test result? This is a writeup of COVID test accuracies that I put together for my own interest, and shared with friends and housemates to help us reason about COVID risk. Some of these friends suggested that I post this to LessWrong. I am not a statistician or an expert in medical research. Background We often hear that some kinds of COVID tests are more accurate than others — PCR tests are more accurate than rapid antigen tests, and rapid antigen tests are more accurate if you have symptoms than if you don't. A test's accuracy is often presented as two separate terms: sensitivity (what proportion of diseased patients the test accurately identifies as diseased) and specificity (what proportion of healthy people the test accurately identifies as healthy). But it's not obvious how to practically interpret those numbers: if you test negative, what does that mean about your odds of having COVID? This writeup attempts to answer to the question, "how much more (or less) likely am I to have COVID given a positive (or negative) test result?" In particular, this is an attempt to calculate the Bayes factor for different types of COVID test results. The Bayes factor is a number that tells you how much to update your prior odds of an event (in this case, your initial guess at how likely someone is to have COVID) given some piece of new evidence (in this case, a test result). It's calculated based on the test's sensitivity and specificity. If a test has a Bayes factor of 10x for a positive test result, and you test positive, then you should multiply your initial estimated odds of having COVID by 10x. If the same test has a Bayes factor of 0.3x for a negative test result, and you test negative, then you should update your prior odds of having COVID by 0.3x. Using Bayes factors (For an excellent explanation of Bayes factors and the motivation behind using them to interpret medical tests, I highly recommend this 3Blue1Brown video, which inspired this post.) There's a well-known anecdote
582ee370-9da5-4d8b-b733-97577dbed0f2
trentmkelly/LessWrong-43k
LessWrong
Best arguments against instrumental convergence? When people debating AI x-risk on Twitter talk past each other, my impression is that a significant crux is whether or not the individual buys the instrumental convergence argument. I wouldn't be surprised if the supermajority of people who don't buy the idea simply haven't engaged with it enough, and I think it is common to have a negative gut reaction to high levels of confidence about something so seemingly far reaching. That said, I'm curious if there are any strong arguments against it? Looking for something stronger than "that's a big claim, and I don't see any empirical proof."
8f1104f1-91bd-4512-b379-299afae14898
trentmkelly/LessWrong-43k
LessWrong
A simple presentation of AI risk arguments Here is a draft of an accordion-style AGI risk FAQ. I'm interested in three types of feedback: 1. The best place to host a similar format 2. Feedback on content  3. Thoughts about this general approach The goal here is something that's very easy to read. One major idea in the accordion-style presentation is to make sure that the article isn't overwhelming. I also wanted to let people address their own biggest questions without having to read content they're less interested in. The whole idea is low effort threshold, and hoping to draw the reader in from minimal interest to a little more interest. The purpose is to inform, although I'm also hoping people leave agreeing with me that this is something that society at large should be taking a bit more seriously. I'm going for something you could point your mother to.  The challenge is that the AGI risk arguments actually are not simple to someone who hasn't thought about them. There are major sticking points for many people, and those are different for different people. That's why I've been thinking about the FAQ that follows different lines of questions. This accordion style is an attempt to do that in a way that's quick and smooth enough to keep people's attention. The intent is that all bullet points start out collapsed. If this seems worthwhile, I will expand it to have deeper content. It is currently very much an incomplete draft because I strongly suspect I'll get better suggestions for hosting and format that will require a lot of transplanting work.  It also needs links to other similar articles that go into more depth. My first would be to the stampy.ai project of Rob Miles and collaborators. It has a similar structure of letting the user choose questions and the huge advantage of letting users enter their own questions and having a semantic search for answers to similar questions. My attempt is different in that it's aimed at a more general audience than the tech types that have to date become interes
9b142d68-6c60-4b10-89fc-81190e4245e3
trentmkelly/LessWrong-43k
LessWrong
Meetup : [Moscow] Belief cleaning Discussion article for the meetup : [Moscow] Belief cleaning WHEN: 26 May 2013 04:00:00PM (+0400) WHERE: Russia, Moscow, ulitsa L'va Tolstogo 16 Please use the following guide to get to the meetup: link. You need the second revolving door with the sign “Yandex Money” in Russian. We will meet you at 15:45 MSK with “LW” sign. And we will also check the entrance at 16:00 and 16:10, so please do not be late. Main topics: * Epistemic Spring Cleaning: we will do exercise to find and change old unnecessary beliefs. * Prediction markets. We will have another round of predictions, you can find the discussion and bets table on the Russian forum. * Game session: the Resistance. You can find rules here, in Russian. If you are going for the first time, you can fill this one minute form (in Russian), to share your contact information. You can also use personal messages here, or drop a message at lw@lesswrong.ru to contact me for any reason. Reports from previous sessions can be found here, in Russian, now with photos Discussion article for the meetup : [Moscow] Belief cleaning
cf69688f-522b-40b2-8213-22109ddf8275
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Toy model of preference, bias, and extra information This is a toy model, inspired by the [previous post](https://www.lesswrong.com/posts/dh8WsHfzmQJ5L7bd8/preferences-and-biases-the-information-argument)'s argument that the biases and preferences of a human carry more information than their behaviour. This is about a human purchasing chocolate, so I am fully justified in illustrating it with this innocuous image: ![](https://www.dropbox.com/s/rlq15xwv292k4ng/chocolate.jpg?raw=1)

Bidding on chocolate
====================

A human is bidding to purchase some chocolate. It's a [second price auction](https://en.wikipedia.org/wiki/Vickrey_auction), so the human is motivated to bid their true valuation. The human either prefers milk chocolate (described as 'sweet') or dark chocolate (described as 'sour'). Whichever one they prefer, they will price at €40, and the other one at €30. Call these two possible preferences Rm (prefers milk chocolate) and Rd (prefers dark chocolate).

The human is also susceptible to the [anchoring bias](https://en.wikipedia.org/wiki/Anchoring_(cognitive_bias)). Before announcing the type of chocolate, the announcer will also randomly mention 60 or 0. This will push the human's declared bid €10 up or down. Because of their susceptibility to anchoring bias, we will call them Ha. Then the following table illustrates the human's bid, dependent on the announcement and their own preference:

| Announcement | (Ha, Rm) | (Ha, Rd) |
|---|---|---|
| 60, sweet | €50 | €40 |
| 0, sweet | €30 | €20 |
| 60, sour | €40 | €50 |
| 0, sour | €20 | €30 |

Now let's introduce Hr (the 'rational' human). This human derives satisfaction from naming prices that are closer to numbers it has heard recently. This satisfaction is worth - you guessed it - the equivalent of €10. Call this reward Ra. Then Hr with reward Ra+Rm will have the same behaviour as Ha with reward Rm: they will also upbid when 60 is announced and downbid when 0 is announced, but this is because they derive reward from doing so, not because they are biased. Similarly, Hr with reward Ra+Rd will have the same behaviour as Ha with reward Rd.

So Hr is a human with less bias; now let's imagine a human with *more* bias. I'll introduce a so-called 'connotation bias'. This human may value milk and dark chocolate as given by their reward, but the word 'sweet' has positive connotations, independent of its descriptive properties; this increases their bids by €10. The word 'sour' has negative connotations; this decreases their bids by €10. A human with connotation bias and reward Rd will behave like a human without connotation bias and reward Rm. That's because the human prices milk chocolate at €30, but will add +€10 because it's described as 'sweet'; and the converse, €(40−10) for the 'sour' dark chocolate.

Let Hc describe a human that has the connotation bias, and Hac describe a human that has both the connotation bias and the anchoring bias. Then the following four pairs of humans and rewards will behave the same in the bidding process: (Hr,Ra+Rm), (Ha,Rm), (Hc,Ra+Rd), (Hac,Rd). We might also have a reward version of the connotation bias, where the human enjoys the chocolate more or less depending on the description (this is similar to the way that the placebo effect can still work, to some extent, if the subject is aware of it). Call this reward Rc. Then we can add two more agents that have the same behaviour as those above: (Hr,Rc+Ra+Rs), (Ha,Rc+Rs).
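To make the equivalence concrete, here is a small Python sketch (my own illustration, not from the original post) that enumerates the six (human model, reward) pairs and checks that they all produce the same bid table. For the last two pairs I take the underlying chocolate preference to be the dark one (Rd), since that is what makes all six bid policies coincide; everything else is taken straight from the setup above.

```python
def bid(milk_value, dark_value, anchoring_bias, connotation_bias,
        anchor_reward, connotation_reward, anchor, flavour):
    """Declared bid (in euros) for one announcement (anchor number, flavour word)."""
    base = milk_value if flavour == "sweet" else dark_value
    shift = 0
    if anchoring_bias or anchor_reward:          # bias and reward move the bid identically
        shift += 10 if anchor == 60 else -10
    if connotation_bias or connotation_reward:
        shift += 10 if flavour == "sweet" else -10
    return base + shift

# The six (human model, reward) pairs, written as keyword settings.
pairs = {
    "(Hr, Ra+Rm)":    dict(milk_value=40, dark_value=30, anchoring_bias=False, connotation_bias=False, anchor_reward=True,  connotation_reward=False),
    "(Ha, Rm)":       dict(milk_value=40, dark_value=30, anchoring_bias=True,  connotation_bias=False, anchor_reward=False, connotation_reward=False),
    "(Hc, Ra+Rd)":    dict(milk_value=30, dark_value=40, anchoring_bias=False, connotation_bias=True,  anchor_reward=True,  connotation_reward=False),
    "(Hac, Rd)":      dict(milk_value=30, dark_value=40, anchoring_bias=True,  connotation_bias=True,  anchor_reward=False, connotation_reward=False),
    "(Hr, Rc+Ra+Rd)": dict(milk_value=30, dark_value=40, anchoring_bias=False, connotation_bias=False, anchor_reward=True,  connotation_reward=True),
    "(Ha, Rc+Rd)":    dict(milk_value=30, dark_value=40, anchoring_bias=True,  connotation_bias=False, anchor_reward=False, connotation_reward=True),
}

announcements = [(60, "sweet"), (0, "sweet"), (60, "sour"), (0, "sour")]
policies = {name: tuple(bid(anchor=a, flavour=f, **cfg) for a, f in announcements)
            for name, cfg in pairs.items()}

assert len(set(policies.values())) == 1   # all six pairs yield the same bid table
print(policies["(Ha, Rm)"])               # -> (50, 30, 40, 20)
```

Running it prints (50, 30, 40, 20), the (Ha, Rm) column of the table, and the assertion confirms that every one of the six pairs produces exactly that bid policy.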
This is how preferences and biases carry more information than the policy does: we have six possible pairs, all generating the same policy. Figuring out which one is 'genuine' requires log2(6)≈2.5 more bits of information.

More data won't save you
========================

At this point, you're probably starting to think of various disambiguation experiments you could run to distinguish the various possibilities. Maybe allow the human to control the random number that is announced (maybe the Hr with Ra would prefer that 0 be named, as that would give it satisfaction for naming a lower price), or the description of the chocolate ('savoury' or 'full-bodied' having better connotations than 'sour').

But recall that, per the [Occam's razor paper](https://arxiv.org/abs/1712.05812)'s results, *modelling the human as fully rational will be simpler than modelling them as having a varying mix of biases and preferences*. So full rationality will always be a better explanation for the human behavioural data. Since we have some space between the simplicity of full rationality and the complexity of more genuine human preferences and (ir)rationality, there will also be completely erroneous models of human preferences and biases, that are nonetheless simpler than the genuine ones. For example, an almost fully rational human with an *anti-anchoring bias* (one that names quantities *far* from the suggestions it's been primed with) will most likely be simpler than a genuine explanation, which also has to take into account [all the other types of human biases](https://en.wikipedia.org/wiki/List_of_cognitive_biases).

Why disambiguation experiments fail
-----------------------------------

Ok, so the previous section gave a high-level explanation why running more disambiguation experiments won't help. But it's worth being more narrow, and zooming in a bit. Specifically, if we allow the human to control the random number that is announced, the fully rational human, Hr, would select 0, while the human-with-anchoring bias, Ha, would either select the same number they would have bid otherwise (30 or 40), to remove anchoring bias, or would give a number at random (if they're unaware of anchoring bias). To make Hr behave in that way, we would have to add some kind of 'reward for naming numbers close to actual internal valuation, when prompted to do so, or maybe answering at random'. Call this reward R′; then (Hr,Ra+R′+Rm) seems clearly more complicated than (Ha,Rm), so what is going on here?

What's going on is that we are making use of a lot of implicit assumptions about how humans (or quasi-humans) work. We're assuming that humans treat money as fungible, that we desire more of it, and are roughly efficient at doing so. We're assuming that humans are either capable of identifying the anchoring bias and removing it, or believe the initial number announced is irrelevant. There are probably a lot of other implicit assumptions that I've missed myself, because I too am human, and it's hard to avoid using these assumptions. But, in any case, it is only when we've added these implicit assumptions that '(Hr,Ra+R′+Rm) seems clearly more complicated than (Ha,Rm)'. If we had the true perspective that [Thor is much more complicated than Maxwell's equations](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor), then (Hr,Ra+R′+Rm) might well be the simpler of the two (and the more situations we considered, the simpler the 'humans are fully rational' model becomes, relative to other models).
Similarly, if we wanted to disambiguate Hc, then we would be heavily using the implicit knowledge that 'sour' is a word with negative connotations, while 'savoury' is not. Now, the implicit assumptions we've used are 'true', in that they do describe the preference/rationality/bias features of humans that we'd want an AI to use to model us. But we don't get them for free, from mere AI observation; we need to [put them into the AI's assumptions, somehow](https://www.lesswrong.com/posts/9rjW9rhyhJijHTM92/learning-human-preferences-black-box-white-box-and).
82a9e60f-e770-408a-a573-55000a382169
StampyAI/alignment-research-dataset/arxiv
Arxiv
AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence 1 Two approaches to producing general AI: the manual approach vs. AI-generating algorithms ------------------------------------------------------------------------------------------- Arguably the most ambitious scientific quest in human history is the creation of general artificial intelligence, which roughly means AI as smart or smarter than humans.¹ (¹ This paper will not attempt to define general intelligence aside from saying it roughly means AI as smart or smarter than humans. Nor does this paper engage in the debate about to what extent such a thing exists. Such terrain is well-trodden without resolution, and is not the focus of this paper. Knowing roughly what most people mean by general AI is enough for the purposes of this paper.) The creation of general AI would transform every aspect of society and economic sector. It would also catalyze scientific discovery, leading to unpredictable advances, including further advances in AI itself (the potential ethical consequences of which I discuss in Section [4](#S4 "4 Safety and ethical considerations ‣ AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence")). ### 1.1 The manual AI approach One approach to producing general AI is via a two-phase process. In Phase One, we discover each of the pieces that might be required for intelligence. In Phase Two, we then figure out how to put all of those pieces together into an extremely complex machine. This path has been followed since the earliest days of AI, such as initial research into systems intended to capture logical thought or process language [[139](#bib.bib139)]. This is also the path implicitly being taken by the vast majority of the machine learning research community. Most papers introduce or refine a single building block of intelligence, such as convolution [[88](#bib.bib88)], recurrent gated cells [[63](#bib.bib63), [21](#bib.bib21), [22](#bib.bib22)], skip connections [[59](#bib.bib59), [161](#bib.bib161)], attention-mechanisms [[194](#bib.bib194), [34](#bib.bib34)], activation functions [[50](#bib.bib50), [56](#bib.bib56), [115](#bib.bib115), [102](#bib.bib102), [23](#bib.bib23)], external memory [[51](#bib.bib51), [190](#bib.bib190)], good initializations [[48](#bib.bib48), [50](#bib.bib50)], normalization schemes [[72](#bib.bib72), [95](#bib.bib95)], hierarchical methods [[32](#bib.bib32), [11](#bib.bib11), [58](#bib.bib58), [185](#bib.bib185)], capsules [[141](#bib.bib141), [62](#bib.bib62)], unsupervised learning techniques [[50](#bib.bib50), [132](#bib.bib132), [86](#bib.bib86), [49](#bib.bib49)], value or advantage functions [[176](#bib.bib176)], intrinsic motivation [[171](#bib.bib171), [125](#bib.bib125), [123](#bib.bib123), [18](#bib.bib18)], trust regions [[150](#bib.bib150)], auxiliary tasks [[74](#bib.bib74)], solutions to catastrophic forgetting [[80](#bib.bib80), [38](#bib.bib38), [199](#bib.bib199), [184](#bib.bib184)], and many, many more. Table [1](#S1.T1 "Table 1 ‣ 1.1 The manual AI approach ‣ 1 Two approaches to producing general AI: the manual approach vs. AI-generating algorithms ‣ AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence") lists many example building blocks, and this list is far from exhaustive.
Seeing the length of this incomplete list raises the following questions: How long will it take to identify the right variant of each building block in this list? How many more essential building blocks exist that we have yet to invent? How long will it take us to discover them all? Even if we were able to create effective versions of each of the key building blocks to general AI in a reasonable amount of time, our work would remain unfinished. What is rarely explicitly mentioned is that collecting key building blocks of intelligence is only useful in terms of building general AI if at some point some group takes on the Herculean scientific challenge of figuring out how to combine all of those pieces into an intelligent machine. Combining multiple, different building blocks is extraordinarily difficult due to (1) the many different ways they can be combined, (2) non-linear interaction effects between modules (e.g. a version of a module can work well when combined with some building blocks, but not others), and (3) the scientific and engineering challenges inherent in working with complex systems of dozens or hundreds of interacting parts, each of which is a complex, likely black-box, machine learning module (e.g. imagine trying to debug such a machine or identifying which piece needs to be tweaked if things are not working). There have been notable, valiant, successful efforts to combine multiple (e.g. around six to thirteen) AI building blocks [[76](#bib.bib76), [60](#bib.bib60)], but such attempts still feature a low number of building blocks vs. the dozens or hundreds that may be required. Such efforts are also rare and understandably are unable to assess the contribution of each separate building block, let alone different combinations and/or configurations of each building block. Such knowledge may be necessary, however, for us to engineer extreme levels of intelligence into machines. A final challenge with combining dozens or hundreds of separate building blocks is that doing so does not work well within our scientific culture. Current scientific incentives, organizations, and traditions usually reward small teams of scientists to produce a few papers per year. Combining all of the required AI building blocks together instead would likely require an extremely large, dedicated team to work for years or decades in something akin to an Apollo program. While some excellent institutions like DeepMind and OpenAI are large, well-funded, and focused on creating general AI, they still tend to feature smaller teams focused on separate projects and publishing regularly, instead of an organization where all hands on deck are committed to one project, which may be required to engineer AI. Of course, their structures could evolve with time once the building blocks are thought to have been identified. To be clear, I believe the manual AI path (1) has a chance of being the fastest path to AI (Section  [1.6](#S1.SS6 "1.6 Which path is more likely to produce general AI first? ‣ 1 Two approaches to producing general AI: the manual approach vs. AI-generating algorithms ‣ AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence")) and (2) is worthwhile to pursue even if it is not (Section [1.5](#S1.SS5 "1.5 Both the manual and AI-GA paths are worth pursuing, irrespective of which is more likely to be the fastest path to general AI ‣ 1 Two approaches to producing general AI: the manual approach vs.
AI-generating algorithms ‣ AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence")). That said, it is important to make its assumptions explicit—that it is a manual engineering approach involving the two phases described above—and to acknowledge how daunting of a challenge it is, for scientific, engineering, and sociological reasons. Making those points is one of the objectives of this manuscript.

* convolution
* pooling
* activation functions
* gradient-friendly architectures (e.g. skip connections)
* gated recurrent units (e.g. LSTMs & GRUs)
* structural organization (regularity, modularity, hierarchy)
* spatial transformers
* capsules
* batch/layer norm
* experience replay buffers
* writeable memory (e.g. memory networks, neural Turing machines)
* recurrent layers
* multi-modal fusion
* learned losses (e.g. evolved policy gradients)
* hierarchical RL, options
* intelligent exploration (e.g. via intrinsic motivation)
* good initializations (Xavier, MAML, etc.)
* universal value function approximators
* auxiliary tasks (predicting states, autoencoding, predicting rewards, etc.)
* catastrophic forgetting solutions
* efficient continual learning
* Dyna
* variance reduction techniques
* good hyperparameters
* value functions, action-value functions, advantage functions
* models
* planning algorithms
* backprop through time
* causal reasoning
* trust regions
* Bayesian learning
* neuromodulation
* Hebbian learning
* active learning
* probabilistic models
* generative adversarial networks
* time-contrastive, object-contrastive losses
* audio-visual correspondence
* domain confusion
* entropy bonuses
* regularization
* distance metrics
* hindsight experience replay
* attention mechanisms
* complex optimizers (Adam, RMSprop, etc.)
* learning rates & schedules
* discount factors
* eligibility traces
* theory of mind
* self-modeling
* unsupervised learning
* co-evolution / self-play
* staging / shaping
* curriculum learning
* communication / language
* multi-agent interaction
* diverse environmental challenges
* environmental hyperparameters (e.g. episode length)

Table 1: A very partial list of the potential building blocks for AI we might need for the manual path to creating AI. When will we discover the right variant of each of these building blocks? How many more are essential, but are yet to be discovered? How will we figure out how to combine them into a complex thinking machine? Seeing even this partial list underscores how difficult it will be to succeed in creating general AI via the manual path. The idea of AI-GAs is to learn each of these building blocks automatically. The First Pillar, meta-learning architectures, could potentially discover the building blocks (or better alternatives) colored black (solid circles). The Second Pillar, meta-learning learning algorithms, could potentially learn the red building blocks (diamonds). The Third Pillar, generating effective learning environments, could learn things like the blue building blocks (daggers). Note that placing elements into only one of the pillars is challenging. For example, some solutions (e.g. spatial transformers [[73](#bib.bib73)]) could be implemented via an architectural solution (Pillar 1) or be learned to be performed within a recurrent neural network without any architectural changes (Pillar 2). Additionally, if certain learning environments encourage the learning of performing a function or skill (e.g.
transforming inputs spatially), then it could also be listed in Pillar 3. Thus, the categorization is to give the rough impression that each of the building blocks in this table *could be learned* in the AI-GA framework by *at least one* of the pillars, rather than to commit to in which pillar it is best learned. Additionally, to achieve both breadth and depth, in some cases I include a broad class (e.g. unsupervised learning) and instances of techniques within that class (e.g. generative adversarial networks [[49](#bib.bib49)]). ### 1.2 A different approach: AI-generating algorithms (AI-GAs) There is another exciting path that ultimately may be more successful at producing general AI: the idea is to learn as much as possible in an automated fashion, involving an *AI-generating algorithm* (AI-GA) that bootstraps itself up from scratch to produce general AI. As argued below, this approach also could prove to be the fastest path to AI, and is interesting to pursue even if it is not. One motivation for a learn-as-much-as-possible approach is the history of machine learning. There is a repeating theme in machine learning (ML) and AI research. When we want to create an intelligent system, we as a community often first try to hand-design it, meaning we attempt to program it ourselves. Once we realize that is too hard, we then try to hand-code some components of the system and have machine learning figure out how to best use those hand-designed components to solve a task. Ultimately, we realize that with sufficient data and computation, we can learn the entire system. The fully learned system often performs better. It also is more easily applied to new challenges (i.e. it is a more general solution). The clearest example of this trend is in computer vision. First we tried to hand-code computational vision systems in the early decades of AI research and found that they did not work well [[179](#bib.bib179)]. We then switched to a hybrid approach wherein we manually designed features (e.g. edge-, texture-, object-, and other feature-detectors, including the famous HOG [[31](#bib.bib31)] and SIFT [[101](#bib.bib101)] feature sets) and then had machine learning algorithms automatically learn the best way to use those hand-coded features to solve a particular problem [[179](#bib.bib179)]. Starting around 2012 [[84](#bib.bib84)], we realized that we had sufficient computation and data to learn entire vision systems without injecting hand-designed features. Doing so has improved performance and such systems are very general: they can be applied to a wide variety of vision tasks and learn appropriate features for each. This same phenomenon has occurred in other modes of machine learning, such as sound (e.g. speech-to-text systems) [[61](#bib.bib61)], touch [[47](#bib.bib47)], natural language understanding and translation [[34](#bib.bib34), [8](#bib.bib8), [174](#bib.bib174)], and many more [[50](#bib.bib50)]. This trend also has occurred in other areas of machine learning. For decades, but with especially intense effort since 2012, we as a community have been hand-designing increasingly powerful neural network architectures [[59](#bib.bib59), [66](#bib.bib66), [178](#bib.bib178), [50](#bib.bib50)]. Increasingly, however, researchers are searching for deep neural network architectures automatically with machine learning algorithms [[137](#bib.bib137), [134](#bib.bib134), [111](#bib.bib111), [200](#bib.bib200)]. 
Despite all of the considerable effort that has gone into designing architectures for image classification manually, an architecture search algorithm produced the state of the art on the CIFAR benchmark at the time of its publication [[136](#bib.bib136)]. The state of the art on the extremely popular and over-optimized ImageNET benchmark also belongs not to a human-designed architecture, but to one produced by an architecture search algorithm [[136](#bib.bib136)]. Additionally, while most hyperparameters have been found by hand throughout the history of machine learning, they are increasingly learned [[156](#bib.bib156), [75](#bib.bib75)]. Additionally, there is a recent increase in meta-learning the learning algorithms themselves [[35](#bib.bib35), [188](#bib.bib188), [38](#bib.bib38), [184](#bib.bib184), [158](#bib.bib158)] (discussed more in Section [2.2](#S2.SS2 "2.2 The Second Pillar: Meta-learning the learning algorithms ‣ 2 Research into the Three Pillars ‣ AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence")). I am amongst those that believe that this trend of learned pipelines replacing hand-designed ones will continue, but increasingly it will apply to the machinery of machine learning itself. That opens the tantalizing prospect that we might be able to create an algorithm that learns everything required to produce general AI, and thus produce an AI-GA. That would obviate the need to discover, refine, and combine the individual building blocks of an intelligent machine (e.g. those in Table [1](#S1.T1 "Table 1 ‣ 1.1 The manual AI approach ‣ 1 Two approaches to producing general AI: the manual approach vs. AI-generating algorithms ‣ AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence") and the many yet to be discovered). A second motivation for a learn-as-much-as-possible approach is that it is the only approach that we are certain can work. That is because it already has. On Earth, a remarkably unintelligent algorithm in Darwinian evolution was paired with the correct conditions and bootstrapped itself up from the simplest replicators to ultimately producing the human mind. Darwinian evolution thus is the first general-intelligence-generating algorithm, providing an existence proof that the concept of general-intelligence-generating algorithms can work. Natural evolution required unfathomable amounts of computation, however. An open scientific question is how we can create abstractions of what drove the success of natural evolution in order to produce AI-GAs that can work within the computation that we will have in the coming years. ### 1.3 The Three Pillars required to produce an AI-GA There are Three Pillars that, if we could get them to work well, should produce general AI. Thus, research into AI-GAs should focus on improving and combining them. The Three Pillars are (1) meta-learning architectures, (2) meta-learning the learning algorithms themselves (often just called meta-learning), and (3) automatically generating effective learning environments. Section  [2](#S2 "2 Research into the Three Pillars ‣ AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence") describes each pillar and what research into it might look like. Briefly, research into the First and Second Pillars is decades old, but both have seen a surge of recent interest and activity. 
There has been very little research into the Third Pillar, although we recently published our first step in this direction [[189](#bib.bib189)]. Thus the Third Pillar is the least-studied and least-understood. I consider it the hardest. I predict that more history-making discoveries await in this pillar than the other two, making it an exciting, fruitful area of research. ### 1.4 AI-GAs: A new grand challenge for AI research AI-GAs may prove to be the fastest path to general AI. However, even if they are not, they are worth pursuing anyway. It is intrinsically scientifically worthwhile to attempt to answer the question of how to create a set of simple conditions and a simple algorithm that can bootstrap itself from simplicity to produce general intelligence. Just such an event happened on Earth, where the extremely simple algorithm of Darwinian evolution ultimately produced human intelligence. Thus, one reason that creating AI-GAs is beneficial is that doing so would shed light on the origins of our own intelligence. AI-GAs would also teach us about the origins of intelligence generally, including elsewhere in the universe. They could, for example, teach us which conditions are necessary, sufficient, and catalyzing for intelligence to arise. They could inform us as to how likely intelligence is to emerge when the sufficient conditions are present, and how often general intelligence emerges after narrow intelligence does. Presumably different instantiations of AI-GAs (either different runs of the same AI-GA or different types of AI-GAs) would lead to different kinds of intelligence, including different, alien cultures. AI-GAs would likely produce a much wider diversity of intelligent beings than the manual path to creating AI, because the manual path would be limited by our imagination, scientific understanding, and creativity. We could even create AI-GAs that specifically attempt to create different types of general AI than have already been produced. AI-GAs would thus better allow us to study and understand the space of possible intelligences, shedding light on the different ways intelligent life might think about the world. The creation of AI-GAs would enable us to perform the ultimate form of cultural travel, allowing us to interact with, and learn from, wildly different intelligent civilizations. Having a diversity of minds could also catalyze even more scientific discovery than the production of a single general AI would. We would also want to reverse engineer each of these minds for the same reasons we study neuroscience in animals, including humans: because we are curious and want to understand intelligence. We and others have been conducting such ‘AI Neuroscience’ for years now [[117](#bib.bib117), [119](#bib.bib119), [121](#bib.bib121), [118](#bib.bib118), [197](#bib.bib197), [99](#bib.bib99), [196](#bib.bib196), [177](#bib.bib177), [104](#bib.bib104), [198](#bib.bib198), [97](#bib.bib97), [52](#bib.bib52)], but the work is just beginning. John Maynard Smith wrote the following about evolutionary systems: “So far, we have been able to study only one evolving system, and we cannot wait for interstellar flight to provide us with a second. If we want to discover generalizations about evolving systems, we will have to look at artificial ones.” [[105](#bib.bib105)] The same is true regarding intelligence. 
Additionally, explained in Section [2.3](#S2.SS3 "2.3 The Third Pillar: Generating effective learning environments and training data ‣ 2 Research into the Three Pillars ‣ AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence"), the manner in which AI-GA is likely to produce intelligence will involve the creation of a wide diversity of learning environments, which could include novel virtual worlds, artifacts, riddles, puzzles, and other challenges that themselves will likely be intrinsically interesting and valuable. Many of those benefits will begin to accrue in the short-term with AI-GA research, so AI-GAs will provide short-term value even if the long-term goals remain elusive. AI-GAs are also worthwhile to pursue because they inform our attempts to create open-ended algorithms (those that endlessly innovate) [[169](#bib.bib169)]. As described in Section [1.2](#S1.SS2 "1.2 A different approach: AI-generating algorithms (AI-GAs) ‣ 1 Two approaches to producing general AI: the manual approach vs. AI-generating algorithms ‣ AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence"), we should also invest in AI-GA research because it might prove to be the fastest way to produce general AI. For all these reasons, I argue that the creation of AI-GAs should be considered its own independent scientific grand challenge. ### 1.5 Both the manual and AI-GA paths are worth pursuing, irrespective of which is more likely to be the fastest path to general AI Both the manual and AI-GA paths to creating general AI should be pursued, regardless of which one is considered the most likely to succeed first. The clearest argument for that is that even if one succeeds, we would still want to pursue the other. To see why, consider the possibility of each being the first to produce general AI. If the manual path wins, we would still be interested in creating AI-GAs for all the reasons just provided in Section [1.4](#S1.SS4 "1.4 AI-GAs: A new grand challenge for AI research ‣ 1 Two approaches to producing general AI: the manual approach vs. AI-generating algorithms ‣ AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence"). If the AI-GA path wins, we would still be interested in creating general AI via the manual path. That is because it is likely that an AI-GA would produce a complex machine whose inner workings we do not understand. As Richard Feynman said, “What I cannot create, I do not understand.” We understand by building, and one great way to understand intelligence is to be able to take it apart, rewire components, and learn how to put it together piece by piece. AI-GAs could actually catalyze the manual path because it would produce a large diversity of general AIs, perhaps making it easy for us to recognize what is necessary, variable, and incidental in the creation of intelligent machines. Additionally, each path would catalyze the other because the general AI produced by either could be used to accelerate scientific progress in the other path. For all of these reasons, I consider the manual and AI-GA paths to creating general AI to be separate scientific grand challenges each worthy of substantial scientific investment. However, because the current machine learning community is mostly committed to the manual path, I advocate that we should shift investment to the AI-GA path to pursue both promising paths to general AI. 
### 1.6 Which path is more likely to produce general AI first? There are good reasons why either of the two paths might be the fastest way to produce general AI. However, on balance, after weighing all of the arguments presented in this manuscript, I believe it more likely that the AI-GA path will produce general AI first. That said, I have high uncertainty about that prediction and think that either could get there first. There are a few main reasons why I think the AI-GA path is more likely. Initially, it scales well with the exponential compute we can expect in the years and decades to come. While AI-GAs will require extraordinary amounts of computation by today’s standards, they have the nice property that they will be able to consume that compute easily once it is available. As Richard Sutton has recently pointed out, history has shown that the algorithms that tend to win over the long haul are simple ones that can take advantage of massive amounts of computing [[175](#bib.bib175)]. Additionally, as noted above, history teaches us that learned pipelines eventually surpass ones that are entirely or partially hand-created. Further, the AI-GA path enables small teams to begin working on algorithms that might produce general AI right now. In contrast, as noted above, the manual path eventually requires a switch to the Herculean Phase 2 task of putting together all of the building blocks of intelligence. If such a combination requires a very large team dedicated for years (something akin to the Apollo program), then major changes in the social organization of scientific effort must take place before work can begin on anything that has a chance at producing an intelligent machine via the manual path. That work must also be undertaken after, or concurrently with, discovering all of the building blocks of intelligence (Phase 1 of the manual path). Additional reasons why the AI-GA path might win are given in Sections [1.1](#S1.SS1 "1.1 The manual AI approach ‣ 1 Two approaches to producing general AI: the manual approach vs. AI-generating algorithms ‣ AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence") and [1.2](#S1.SS2 "1.2 A different approach: AI-generating algorithms (AI-GAs) ‣ 1 Two approaches to producing general AI: the manual approach vs. AI-generating algorithms ‣ AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence"). Overall, it is likely that in the near to middle term, the manual path will produce higher-performing AI systems that are of more use to society, just as HOG and SIFT produced better computer vision systems in the pre-2012 era. However, just as with computer vision, it is likely that fully learned systems will eventually surpass and then rapidly far exceed the capabilities produced by the manual path. 2 Research into the Three Pillars ---------------------------------- Researching how to create an AI-GA is a very different research agenda from the manual path. Instead of attempting to identify and create the individual building blocks of an intelligent machine (e.g. better memory modules, hierarchical controllers, optimizers, and transformers), AI-GAs attempt to learn all of these types of building blocks. That does not mean it is without its own challenging research problems. It is just that the research problems are different and require focusing on different questions.
This section describes each of AI-GA’s Three Pillars, including prior work in them and a sketch of what future research in them might look like. ### 2.1 The First Pillar: Meta-learning Architectures The First Pillar is the most studied and well-known, and for that reason I will spend little time surveying it. The architecture of a neural network plays a key role in both the function a trained neural network can perform and how easy it is to train that network [[59](#bib.bib59), [66](#bib.bib66), [178](#bib.bib178), [50](#bib.bib50)]. As mentioned above, the majority of work into training neural networks with backpropagation has involved humans hand-designing architectures, including innovations such as convolution [[88](#bib.bib88)], LSTMs or other gated recurrent units [[63](#bib.bib63), [21](#bib.bib21), [22](#bib.bib22)], highway networks and residual networks [[59](#bib.bib59), [161](#bib.bib161)], etc. However, there is also a long history in AI of creating *algorithms* to search for high-performing neural network architectures [[87](#bib.bib87), [166](#bib.bib166), [53](#bib.bib53), [195](#bib.bib195), [57](#bib.bib57)]. Starting recently, such architecture search algorithms are being tested with large-scale compute for deep neural network architectures, where they are sometimes finding better architectures than those that humans have designed. For example, automatically discovered architectures improved performance even on the classic benchmark of CIFAR and the challenging benchmark of ImageNET [[79](#bib.bib79), [137](#bib.bib137), [134](#bib.bib134), [111](#bib.bib111), [200](#bib.bib200)], both of which had already been highly optimized by armies of human scientists. We can thus reasonably assume the trend will continue and that searching for architectures will replace the hand-designing of architectures in the years and decades to come. Architecture search is expensive, but so was backpropagation in deep neural networks in the year 2010. Eventually the computation will exist to make it routine and likely the best option. Additionally, there are techniques that can speed it up, such as surrogate models [[100](#bib.bib100), [81](#bib.bib81), [9](#bib.bib9), [134](#bib.bib134)] or Generative Teaching Networks [[126](#bib.bib126)], and new techniques will be invented to make it faster still. It is clear that animal brains, including our own, are not large, homogenous, fully connected neural networks. Instead, their architectures consist of many heterogenous modules wired together in a particular way, with the entire design carefully sculpted by evolution [[172](#bib.bib172)]. It is daunting to consider creating such complex architectures by hand. With exponentially more compute, we instead can search for them with an outer loop learning algorithm that produces powerful, sample-efficient architectures primed for learning. Since the outer-loop algorithm produces architectures that are better at learning, people often call architecture search algorithms ‘meta-learning algorithms’, because they learn how to produce better learners. Another notion of meta-learning (learning better learning algorithms) is described in the next section. Much research needs to be done to enable an effective architecture search algorithm. 
That includes research into encodings (aka representations) [[165](#bib.bib165), [168](#bib.bib168), [167](#bib.bib167), [200](#bib.bib200), [53](#bib.bib53)], search operators, and the production of architectures that are regular [[168](#bib.bib168), [25](#bib.bib25), [67](#bib.bib67)], modular [[26](#bib.bib26), [67](#bib.bib67)], and hierarchical [[106](#bib.bib106)]. There is the chance for such architecture search algorithms to discover architectural innovations like those colored black in Table [1](#S1.T1 "Table 1 ‣ 1.1 The manual AI approach ‣ 1 Two approaches to producing general AI: the manual approach vs. AI-generating algorithms ‣ AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence"). Research in this space is taking off at the moment and will be useful whether the search is for a domain-specific neural network solution or for producing an AI-GA. ### 2.2 The Second Pillar: Meta-learning the learning algorithms Because history teaches us we should learn as much as possible, that lesson should apply to the learning algorithms themselves. Doing so could improve the ceiling for what neural networks are able to learn about each task, their sample efficiency, their ability to generalize, and their ability to continuously learn many tasks. Work in this field is often called simply ‘meta-learning’ and dates back decades [[64](#bib.bib64), [29](#bib.bib29), [158](#bib.bib158), [145](#bib.bib145), [38](#bib.bib38), [184](#bib.bib184), [43](#bib.bib43), [188](#bib.bib188), [35](#bib.bib35), [149](#bib.bib149), [5](#bib.bib5), [133](#bib.bib133), [98](#bib.bib98)]. The idea is to ‘learn to learn’ such that an agent gets better at learning a new task sampled from a distribution of tasks over the course of training. Instead of learning a specific task, the idea is to get better at learning on any new task by becoming a better learner. There has been a recent surge of interest in this area, with work falling into roughly two families. The first family, made famous by the Model-Agnostic Meta Learning (MAML) algorithm [[43](#bib.bib43)], involves pairing a learner (a neural network with a set of weights θ) with a predefined, fixed learning algorithm—such as stochastic gradient descent (SGD)—to solve a task selected from a distribution of tasks as fast as possible. The job of the learning algorithm is to produce an initial set of weights that can rapidly learn any task from the distribution. To do so, MAML differentiates through the learning process to find a set of initial weights that are quickly able to learn a new task when taking gradient steps via SGD in that new task. It has been shown that theoretically this paradigm can encode any learning algorithm [[42](#bib.bib42)]. The second family involves training a recurrent neural network (RNN) in a context that requires learning and relies on the activations within the neural network to implement a learning algorithm. Because an RNN is a Turing-complete universal computer [[152](#bib.bib152)], it too can encode any possible learning algorithm. The weights of the RNN are trained via an outer loop meta-learning algorithm. This approach has been well-studied in a supervised learning context [[64](#bib.bib64), [29](#bib.bib29)]. As applied to reinforcement learning, it was concurrently introduced by two papers [[188](#bib.bib188), [35](#bib.bib35)].
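To make the second family more concrete, here is a deliberately tiny numpy sketch of the overall structure (my own illustration, not code from either paper): the inner loop is a single 'lifetime' on a two-armed bandit whose good arm changes every lifetime, all intra-lifetime adaptation lives in the RNN's activations, and the outer loop is a crude hill climber standing in for evolution, evolution strategies, or policy gradients. Every architecture and hyperparameter choice here is an arbitrary placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 8                                              # hidden units in the learner RNN

def init_params():
    return {
        "Wx": 0.5 * rng.standard_normal((H, 3)),   # input: one-hot last action (2) + last reward (1)
        "Wh": 0.5 * rng.standard_normal((H, H)),
        "b":  np.zeros(H),
        "Wo": 0.5 * rng.standard_normal((2, H)),   # two bandit arms
        "bo": np.zeros(2),
    }

def lifetime_reward(params, steps=30):
    """One 'lifetime': a two-armed bandit whose good arm is random each lifetime.
    All intra-lifetime adaptation happens in the RNN's activations, not its weights."""
    good_arm = rng.integers(2)
    probs = np.array([0.1, 0.1])
    probs[good_arm] = 0.9
    h = np.zeros(H)
    last_action, last_reward, total = 0, 0.0, 0.0
    for _ in range(steps):
        x = np.zeros(3)
        x[last_action] = 1.0
        x[2] = last_reward
        h = np.tanh(params["Wx"] @ x + params["Wh"] @ h + params["b"])
        logits = params["Wo"] @ h + params["bo"]
        p = np.exp(logits - logits.max())
        p /= p.sum()
        action = rng.choice(2, p=p)
        reward = float(rng.random() < probs[action])
        total += reward
        last_action, last_reward = action, reward
    return total

def fitness(params, lifetimes=20):
    return np.mean([lifetime_reward(params) for _ in range(lifetimes)])

# Outer loop: a crude (1+1) hill climber standing in for evolution, ES, or
# policy gradients; it only ever changes the weights *between* lifetimes.
params = init_params()
best = fitness(params)
for gen in range(200):
    candidate = {k: v + 0.05 * rng.standard_normal(v.shape) for k, v in params.items()}
    f = fitness(candidate)
    if f >= best:
        params, best = candidate, f
print("average reward per lifetime after meta-training:", best)
```

Nothing about this toy is tuned to actually work well; it is only meant to show where the two loops sit and why the outer loop, which touches weights only between lifetimes, is the one doing the 'meta' part.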
It is intuitively appealing because it resembles what happened on Earth, where an outer loop optimization algorithm (evolution) created animal brains, which are effective, sample-efficient learners. Note, however, that evolution is not required: many algorithms can be used for the outer-loop learning algorithm, such as policy gradients [[188](#bib.bib188), [35](#bib.bib35)]. This type of meta-learning is also appealing because it could learn a better inner-loop learning algorithm than any currently available machine learning algorithms. On simple reinforcement learning tasks it has been demonstrated that this approach can learn to explore, exploit, and balance between the two [[188](#bib.bib188), [35](#bib.bib35)]. It can also seemingly build an internal model of the world and plan within that model to behave like model-based RL methods instead of model-free RL methods, or at least the networks it produces pass (admittedly imperfect [[3](#bib.bib3)]) tests designed to tell model-based methods from model-free methods [[188](#bib.bib188)]. Research on this pillar will involve improving the number and difficulty of tasks that can be solved. We must also identify what ‘materials’ the learner (e.g. the RNN) should be made of. For example, the work in the second meta-learning camp mentioned so far was done with standard RNNs, where the only things that can change within the lifetime of the agent (i.e. during inner-loop training) are the activations of the network. The network weights do not change within its lifetime: they instead change only across ‘generations’ (aka iterations) via the outer-loop optimization steps. We recently introduced an approach called *differentiable plasticity* that enables meta-learning with a different type of neural network [[109](#bib.bib109)]. The weights in these networks change intralife via Hebbian learning (in a fashion similar to the principle that ‘neurons that fire together, wire together’). In this case, each weight consists of two parameters: one is fixed throughout the lifetime of the agent and is directly tuned by the outer-loop optimization algorithm. The other changes as a function of network dynamics, so the network has the ability to store information in its weights, which can improve performance on many tasks vs. traditional RNN meta-learning approaches [[109](#bib.bib109)]. We showed that on some problems that require remembering lots of information over time, RNNs composed of these Hebbian materials outperform LSTMs composed of traditional neurons. While work on Hebbian learning in neural networks has a long history, our recent introduction of the ability to train such networks via gradient descent offers the possibility to harness this alternative to SGD at scale when meta-learning large, deep neural networks. Another type of learner network includes *neuromodulation*, wherein the output of one neuron in a network can change the learning rates of other connections in the network [[38](#bib.bib38), [184](#bib.bib184), [122](#bib.bib122), [158](#bib.bib158), [138](#bib.bib138), [157](#bib.bib157)]. As with Hebbian learning, there is a long history of neuromodulation work without using gradient descent, but we have recently introduced the ability to train networks with neuromodulation in a fully differentiable, end-to-end fashion [[110](#bib.bib110)]. 
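To make the notion of learnable ‘materials’ concrete, here is a minimal sketch in the spirit of differentiable plasticity; the layer structure, sizes, and the exact Hebbian update are my own simplified assumptions, not the precise formulation of the cited work. Each connection has a fixed weight, tuned by the outer loop, plus a Hebbian trace that changes within the agent’s lifetime, scaled by a learned per-connection plasticity coefficient.

```python
# Illustrative sketch only: a 'plastic' layer in the spirit of differentiable plasticity.
# Each connection has a fixed weight w (meta-trained by the outer loop) plus a Hebbian
# trace that changes within the agent's lifetime; a learned coefficient alpha scales how
# plastic each connection is. Sizes and the exact update rule are simplified assumptions.
import torch
import torch.nn as nn

class PlasticLayer(nn.Module):
    def __init__(self, n_in, n_out, eta=0.1):
        super().__init__()
        self.w = nn.Parameter(0.01 * torch.randn(n_in, n_out))      # fixed component
        self.alpha = nn.Parameter(0.01 * torch.randn(n_in, n_out))  # per-connection plasticity
        self.eta = eta                                               # Hebbian trace update rate

    def init_hebb(self):
        return torch.zeros(self.w.shape)

    def forward(self, x, hebb):
        # Effective weight = fixed component + plastic (Hebbian) component.
        y = torch.tanh(x @ (self.w + self.alpha * hebb))
        # 'Neurons that fire together, wire together': update the trace from co-activity.
        hebb = (1.0 - self.eta) * hebb + self.eta * (x.t() @ y)
        return y, hebb

layer = PlasticLayer(8, 8)
hebb = layer.init_hebb()
x = torch.randn(1, 8)
for _ in range(20):          # inner loop: activity changes the plastic part of the weights
    x, hebb = layer(x, hebb)
loss = (x ** 2).mean()       # placeholder outer-loop (meta) objective
loss.backward()              # gradients flow to w and alpha through the whole lifetime
```

Because the entire lifetime is differentiable, the outer loop can tune both the fixed weights and the plasticity coefficients so that the intra-lifetime weight changes constitute a useful learning rule; a neuromodulatory variant would additionally let network activity gate the update rate itself.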
An example type of problem that could in theory be solved with meta-learning and neuromodulation is catastrophic forgetting, a phenomenon wherein neural networks are unable to continuously learn because they learn each newly encountered task by overwriting their knowledge of how to solve all previously learned tasks [[45](#bib.bib45), [108](#bib.bib108), [46](#bib.bib46)]. Neuromodulation can help because some neurons in the network can detect which task is currently being performed, and those neurons can turn learning on in the part of the network that performs that task and turn learning off everywhere else in the network. That can enable a neural network to learn new skills without cannibalizing old skills. We have shown that meta-learning with neuromodulation can create networks that perfectly solve catastrophic forgetting, meaning they have learned how to learn without forgetting, albeit in small networks and on simple problems [[38](#bib.bib38), [184](#bib.bib184)]. These are just a few examples of how the type of learner used, including the materials it is made of, can have a meaningful effect on the quality and type of solution that results. The choice of materials is a hyperparameter of the meta-learning algorithm. Of course, they too could (and likely ultimately will) be searched for.

Meta-learning is currently computationally expensive. It requires training over a distribution of different tasks and often involves differentiating through the entire learning process. Computational limitations, memory limits, and optimization challenges are major hurdles for this work. They currently limit the complexity of tasks that we can train on, how long those tasks can last (e.g. how many time steps), and how many outer-loop (meta) optimization steps we can perform. Research to improve our ability to train such networks and do so on more and harder tasks will be critical to unlocking the power of meta-learning.

One question raised by meta-learning the learning algorithms is what the distribution of tasks should be for meta-learning. How narrow should it be? Should it widen over time? As has been pointed out, switching from normal learning to meta-learning changes the burden on the researcher from designing learning algorithms to designing environments [[54](#bib.bib54)]. But why stop there? I argue that we should instead learn the appropriate task distribution that learning agents should meta-learn on. That is AI-GA’s Third Pillar, which I discuss next.

### 2.3 The Third Pillar: Generating effective learning environments and training data

It is unlikely that we will ever know how to directly program a generally intelligent agent. Instead, we will almost certainly have to train a learning agent on the right training data and/or in the right set of training environments. That raises the question as to what the correct training environments are. Even if we knew what the right set of training environments were, we likely would not get away with sampling from them randomly for training (that would be too inefficient, if it worked at all). Instead, we also have to know the right *order* in which to present the tasks, which may itself be a function of the current learning capabilities of the learning agent. Just as human educational systems have developed curricula for how to teach human learners a variety of subjects, we likely will need to produce effective curricula in order to produce generally intelligent artificial learning agents.
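To make the idea of an ordering that depends on the learner’s current capabilities concrete, here is a deliberately hand-designed toy sketch; the task names, thresholds, and replay probability are all hypothetical placeholders. The learner advances to the next task only once it performs well enough on the current one, and designing such lists and thresholds by hand is exactly the burden discussed next.

```python
# Toy sketch (illustrative only): a hand-designed, performance-gated curriculum.
# Task names, thresholds, and the replay probability are hypothetical placeholders.
import random

CURRICULUM = [                   # (task, success rate required to advance)
    ("flat_ground", 0.9),
    ("small_gaps", 0.8),
    ("large_gaps_and_stumps", 0.7),
]

def train_with_curriculum(agent, train_step, evaluate, total_episodes=3000):
    stage = 0
    for _ in range(total_episodes):
        task, threshold = CURRICULUM[stage]
        # Occasionally replay an earlier task to reduce forgetting (an arbitrary choice).
        if stage > 0 and random.random() < 0.2:
            task = CURRICULUM[random.randrange(stage)][0]
        train_step(agent, task)
        # Advance only when the learner is ready: ordering is contingent on capability.
        if stage < len(CURRICULUM) - 1 and evaluate(agent, CURRICULUM[stage][0]) >= threshold:
            stage += 1
    return agent

# Minimal fake learner so the sketch runs end to end.
skills = {name: 0.0 for name, _ in CURRICULUM}
train_step = lambda agent, task: agent.__setitem__(task, min(1.0, agent[task] + 0.002))
evaluate = lambda agent, task: agent[task]
train_with_curriculum(skills, train_step, evaluate)
print(skills)
```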
Manually identifying and creating each required training environment, and the order in which to present them, would be an extremely challenging, slow, and expensive undertaking. It would be limited by our time, intelligence, and our ability to understand each AI student well enough to design optimal learning environments for them. However, we know from the history of machine learning that data is the rocket fuel of success, and we will therefore need lots of it. The AI-GA philosophy is that we should have learning algorithms learn to automatically generate effective learning environments (i.e. generate their own training data), including creating an effective curriculum that can train up an initially unintelligent agent (the equivalent of a human newborn) into a generally intelligent agent (the equivalent of a human adult, or something smarter). Part of generating a learning environment is generating the reward function that defines success in that environment. One can imagine the generation of labeled training data (images and their labels), virtual worlds that focus on mastering a particular skill (much as a hockey player practices passing, shooting, and skating), and even entire virtual worlds in which agents learn to communicate, coordinate to perform complex tasks, and learn a variety of increasingly difficult concepts, from math and physics to writing, philosophy, and machine learning. One approach might be to create a set of different training environments and expose a learning agent to them in some order (possibly contingent on their past performance). Another approach might be to create a single training environment, which itself internally represents a curriculum for an agent. As on Earth, the difference might ultimately be semantic: in your lifetime have you experienced a set of different learning environments, or a single complex one?

There has been little research into the Third Pillar. It is the least-studied, least-understood, and likely hardest of the three pillars. That also makes it an interesting new frontier of research. Many fundamental breakthroughs await in this nascent field, making it a ripe orchard in which researchers can hunt for impactful discoveries. I next outline a sketch of how we might make progress on this pillar, including some of our early work in this direction. Of course, the most exciting ideas and directions are the ones we cannot envision yet, making the following sketch more of an exercise to kickstart progress than a playbook for future research.

Much work has occurred in the fields of computational evolutionary biology, artificial life, and neuroevolution attempting to investigate how to create an open-ended complexity explosion like the one that occurred with living animals on Earth [[96](#bib.bib96), [169](#bib.bib169), [12](#bib.bib12), [85](#bib.bib85), [135](#bib.bib135), [16](#bib.bib16), [159](#bib.bib159), [2](#bib.bib2), [180](#bib.bib180), [68](#bib.bib68), [83](#bib.bib83), [82](#bib.bib82), [164](#bib.bib164), [24](#bib.bib24)]. The dominant theme is to try to create the conditions within a virtual world in which agents evolve and hope a complexity explosion will occur. In virtually all of that work, the environment itself is manually created by human researchers, including the rules by which agents interact if there are multiple agents. The hope is often that *interactions* between such agents will kick off a coevolutionary arms race that will produce open-ended innovation.
A specific version of this idea, called coevolution, is to have two populations of learning agents competing or cooperating with each other [[33](#bib.bib33), [41](#bib.bib41), [77](#bib.bib77), [131](#bib.bib131), [191](#bib.bib191)]. A related idea called self-play is to have an agent learn by playing against itself, or against an archive of past versions of itself [[10](#bib.bib10), [154](#bib.bib154), [153](#bib.bib153)]. Significant gains have been made, most famously in learning to play the games Go and Dota [[154](#bib.bib154), [153](#bib.bib153)], validating both the idea of generating effective learning environments (i.e. generating training data) and the idea of training agents against automatically generated curricula. However, these approaches have failed to create anything resembling an open-ended complexity explosion because the non-agent (abiotic) component of the environment they operate in is fixed. For example, while self-play has automatically produced agents that are capable of kicking soccer balls into a goal and playing the games of Go and Dota, no amount of self-play in virtual soccer, Go, or Dota will produce agents that can drive, write poetry, invent flying machines, and conduct science to understand the world they find themselves in. Another approach is to automatically generate different reward functions within an environment [[54](#bib.bib54)]. While promising, like self-play, ultimately such agents are confined to a predefined environment that limits what they can learn.

In my opinion, a more promising direction is to explicitly optimize environments to be effective for learning, instead of hoping we can create environments whose dynamics lead to coevolutionary arms races that produce generally intelligent learners. Just thinking about environments as something that can be optimized by learning algorithms is interesting and opens many new research directions. One project in this direction is PowerPlay [[148](#bib.bib148), [160](#bib.bib160)]. Another vein of research comes from the community attempting to automatically generate content for video games, including training AI agents [[78](#bib.bib78), [151](#bib.bib151)]. The first paper that made me consider directly optimizing environments as an interesting possibility is Brant and Stanley [[16](#bib.bib16)], where an expanding set of novel mazes was generated, each of which could be solved by at least one agent. We followed up on that idea with the Paired Open-Ended Trailblazer (POET) algorithm in Wang et al. [[189](#bib.bib189)], which is discussed in more detail below.

However, the question remains of how to properly conduct such optimization. What should the learning algorithm that is optimizing the environment try to make the environment accomplish? What is the reward function (aka loss function or fitness function) for the environment generator? This is one of the key questions for AI-GA research. One option is the following: define intelligence, then optimize an environment such that learners exposed to that environment are increasingly generally intelligent. This option seems unrealistic for a variety of reasons. The first two problems are that it requires defining what intelligence is and how to measure it. Those problems have eluded scientists and philosophers for centuries. A third problem is that, even if we did have a reward function for general intelligence, it would almost certainly be highly sparse and deceptive (non-convex).
In the parlance of evolutionary biology, these would be considered fitness landscapes that are perfectly flat in many places and extremely rugged in others. The reward function would be sparse because for much of the space there would be no signal or gradient from it. Stanley and Lehman provide a colorful example of this uselessness: trying to select for human-level intelligence by starting with bacteria and selecting for those that perform better on IQ tests [[163](#bib.bib163)]. Assuming for a second that IQ tests do have something to say about general intelligence, it is clearly unhelpful to try to guide a search for intelligence from bacteria by optimizing for performance on IQ tests because until you already have something with the intelligence of a human child, the test provides no signal to guide search.

The problem of deception is even more pernicious. It describes the situation where the reward function provides not just zero signal, but the wrong signal, causing search algorithms to get trapped on local optima far lower in performance than the global optimum. When searching for the solutions to extremely ambitious problems, even if you have a way to quantifiably measure what success looks like, optimizing for that score is unlikely to produce the correct path through the search space to find the global optimum, and instead usually results in getting trapped on a local optimum [[163](#bib.bib163), [89](#bib.bib89), [193](#bib.bib193), [120](#bib.bib120)]. In other words, despite our best efforts, we will likely create deceptive reward functions that do not provide a gradient towards any satisfactory optimum, and will instead lead the search process to get trapped on low-performing local optima. All of these problems are why basic science is so important. If we went back a few hundred years and sought to create clean energy, it would have been futile to search for technologies that produce fewer and fewer carbon emissions per unit of energy produced. Instead, the first effective clean energy (nuclear power) was discovered by someone passionately imagining what it would be like to ride alongside a beam of light. Similarly, to invent the microwave oven, one should not have rewarded engineers for making devices that are increasingly fast at heating food, but instead the correct path was to invest in radar technology. If one looks at most major technological innovations, the path of stepping stones that led to them is nearly always circuitous and would have been unpredictable ahead of time [[163](#bib.bib163)].

Nevertheless, it is useful to recognize that it is possible to generate effective learning environments with a particular target task in mind, such as performing well on an IQ test, having a robot perform a specific skill, etc. While it is unlikely to be fruitful for very ambitious tasks (where the learner is far from possessing the desired skill), it could be practically useful if the amount to be learned is smaller. I call this approach to generating effective learning environments the *target-task approach*. While it is helpful to research how to create learning algorithms that can produce learners that solve a predefined task of interest—a subject we are working on [[126](#bib.bib126)]—for ambitious goals like producing general AI we likely will need a different approach.
A second approach, which I believe is more likely to work for ambitious purposes like producing general AI, is to pursue open-ended search algorithms, meaning algorithms that endlessly generate new things [[169](#bib.bib169)]. In the AI-GA context, that would mean algorithms that endlessly generate an expanding set of challenging environments and solutions to each of those challenges. The expectation is that eventually general AI will emerge as a solution to one or a set of the generated environments. I call this the *open-ended approach* to creating effective learning environments. This path is what occurred on Earth. Natural evolution did not start out with the goal of producing general intelligence. Instead, a remarkably simple algorithm (Darwinian evolution) began producing solutions to relatively simple environments. The ‘solutions’ to those environments were organisms that could survive in them. Those organisms often created new niches (i.e. environments, or opportunities) that could be exploited. For example, the existence of gazelles creates a niche for cheetahs, which in turn creates a niche for microorganisms that live inside of cheetahs, etc. The existence of trees enables vines, epiphytes, and canopy butterflies, each of which creates niches for other organisms. The result is an ever-expanding, open-ended collection of new niches (challenges) and solutions to those niches (agents with a variety of skills). Ultimately, that process produced all of the engineering marvels on the planet, such as jaguars, hawks, and the human mind. Note that this approach avoids the need to create a definition and measurement of general AI. It allows us to rely on the fact that we will know it when we see it.

What happened in natural evolution was extremely computationally expensive. The AI-GA paradigm motivates research into the following question: can we create computational systems that *efficiently* abstract the key ingredients that produce complexity explosions like those produced by evolution on Earth? If so, we may be able to create algorithms that effectively endlessly innovate and could ultimately produce general AI. In other words, the goal is to create an AI-GA without requiring a planet-sized computer. We of course do not know all of the key ingredients that made natural evolution so effective, nor do we know the best way to abstract them. However, we have made progress in identifying some of these key ingredients and how to abstract them. I will attempt to briefly summarize the work I consider promising in this direction.

#### 2.3.1 Encouraging behavioral diversity

An essential ingredient in the complexity explosion that occurred on Earth is the creation of a diverse set of niches and organisms that can live in those niches. An abstract property that will almost certainly be key is thus diversity. For decades, researchers have known that diversity can help search algorithms escape local optima. But the vast majority of approaches have encouraged diversity in the original search space, such as the weights of a neural network that controls a robot. Such search spaces are high-dimensional and often diversity in the weights of neural networks is not *interesting* diversity. For example, there are a near-infinite number of weight vectors that can make a robot immediately fall over. A key insight came when Lehman and Stanley [[89](#bib.bib89)] emphasized the importance of encouraging *behavioral diversity*.
The idea is to define a behavior space in which diversity is interesting, and then search for neural network weight vectors that produce different behaviors in that space. For example, imagine a simulation of the city of San Francisco. We might want to encourage a search algorithm to produce robots that visit not-yet-visited x,y,z locations in San Francisco. Visiting new x,y,z locations would require learning skills like walking, avoiding obstacles, climbing stairs, taking elevators, opening doors, crossing streets, climbing trees, picking locks, etc. In fact, Lehman and Stanley [[89](#bib.bib89)] showed that searching for novelty *only* (and completely ignoring the reward function), an algorithm they call *Novelty Search*, can solve problems that are not solvable with objective-driven search when local optima are present. We subsequently showed that novelty search does in fact learn general skills for how to move about an environment that can transfer to new environments [[183](#bib.bib183)]. In an algorithm called *Curiosity Search* [[170](#bib.bib170)], we later showed that encouraging an agent to visit as many different locations within its lifetime (i.e. within an episode) as possible improved performance on the Atari game Montezuma’s Revenge, a notoriously difficult, sparse, hard-exploration problem, matching the then-current state of the art [[171](#bib.bib171)]. There have been many other important recent papers investigating how much agents can learn when motivated by curiosity alone [[18](#bib.bib18), [125](#bib.bib125), [123](#bib.bib123), [144](#bib.bib144), [40](#bib.bib40), [54](#bib.bib54), [124](#bib.bib124), [147](#bib.bib147)].

While fascinating, rewarding novelty alone is unlikely to reproduce the complexity explosion that occurred on Earth. One missing ingredient is that on Earth, while the creation of niches creates new environments (which indirectly create a pressure to produce diverse behaviors), within each niche there is a pressure to be high-performing. Such intra-niche competition is the most common way people think of ‘survival of the fittest’, wherein faster cheetahs replace slower ones, stronger elephant seals outcompete weaker rivals for control of the harem, etc. The most straightforward way to combine a pressure for novel behaviors and high performance is to optimize for both of them. Many have shown such a combination is effective, whether in a weighted combination of novelty and reward [[28](#bib.bib28)], multi-objective algorithms [[112](#bib.bib112), [114](#bib.bib114)], or algorithms that intelligently rebalance between novelty and performance as a function of learning progress [[28](#bib.bib28)]. However, while the novelty pressure helps avoid local optima, such algorithms still tend to produce one or a few variations on the single theme that search converges on. They do not produce a wide variety of different, yet high-quality solutions.

#### 2.3.2 Quality Diversity algorithms

Ultimately, what we want are algorithms that create a diverse set of solutions where each of those solutions is as high-performing as possible for that type. For that reason, my colleagues and I have created a new family of search algorithms called *Quality Diversity (QD) algorithms*. The first of its kind was *Novelty Search with Local Competition* [[90](#bib.bib90)], followed by MAP-Elites [[113](#bib.bib113)].
The general idea is to define (or learn) a low-dimensional behavioral (or phenotypic) space and then explicitly search for the highest-performing solution in each region of that space. For example, if one was searching in the space of weights of a deep neural network that encodes the morphology of a robot body, the performance criterion might be walking speed and the space we want to encourage diversity in could be the height and weight of the robot. The QD algorithm would thus search for and return the fastest tall, skinny robot, the fastest tall, fat robot, the fastest short, skinny robot, etc. QD algorithms have already been harnessed to solve very difficult machine learning problems. For example, in a paper in Nature [[30](#bib.bib30)], we used QD to obtain state-of-the-art robot damage recovery, wherein a robot can adapt to substantial damage in 1-2 minutes. We also showed that with access to a deterministic simulator for training, an enhanced version of MAP-Elites called Go-Explore can solve the hard-exploration benchmark challenges of the Atari games Montezuma’s Revenge and Pitfall, dramatically improving the state of the art [[36](#bib.bib36)].

One essential component of QD algorithms that fuels their success is *goal switching* [[120](#bib.bib120)]. The idea is that within a niche a parameter vector is optimized to perform well, but if at any point it turns out that a perturbed version of that parameter vector belongs to another niche and is better than its current champion, that perturbed parameter vector becomes the elite in the other niche. That enables one style of solutions in a niche (which may be stuck on a local optimum) to be disrupted by another type of solution that was discovered via optimization on a different niche. This captures a dynamic similar to scientific and technological innovation (e.g. how microwave ovens invaded the kitchen niche despite their underlying technology being developed for radar, a very different purpose). It also captures the dynamic of natural evolution where a species optimized for one niche can invade a new niche if it is superior. Research has shown that such goal switching significantly improves the performance of QD algorithms [[120](#bib.bib120), [113](#bib.bib113), [68](#bib.bib68)]. For example, we showed how it can enable the creation and combination of different skills to produce multi-modal robots capable of jumping, crouching, running, and turning [[68](#bib.bib68)].

QD algorithms can be used in many types of problem domains, such as solving different types of mathematical, writing, or musical challenges. For example, in our work on *Innovation Engines* [[120](#bib.bib120)], we showed that the same ideas underlying QD algorithms could be used to generate different types of high-quality, recognizable images. Goal-switching proved essential to avoid local optima and was a catalyst for innovation. We also observed in that work another hallmark of biological complexity explosions: adaptive radiation. Innovations in one niche rapidly spread to many other niches, allowing key innovations to spread and become the foundation upon which additional innovations specific to different niches are built. This phenomenon is reminiscent of adaptive radiations in natural evolution, such as when the innovation of the four-legged body plan rapidly radiated into a large number of different niches to ultimately produce all of the diverse four-legged creatures on Earth.
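The core loop of a MAP-Elites-style QD algorithm is compact enough to sketch here; the toy behavior descriptor, fitness function, and grid size below are my own illustrative choices, not any particular published benchmark. The archive keeps the best solution found so far in each cell of a discretized behavior space, and mutating randomly chosen elites produces the goal switching described above, because an offspring may land in, and take over, a cell different from its parent’s.

```python
# Illustrative sketch only of a MAP-Elites-style quality diversity loop.
import random

GRID = 10                                   # cells per behavior dimension

def random_solution():
    return [random.uniform(-1, 1) for _ in range(8)]

def mutate(x):
    return [v + random.gauss(0, 0.1) for v in x]

def behavior(x):
    # Toy 2-D behavior descriptor, binned into a GRID x GRID archive.
    def bin_(v):
        return max(0, min(GRID - 1, int((v + 1) / 2 * GRID)))
    return (bin_(x[0]), bin_(x[1]))

def fitness(x):
    return -sum(v * v for v in x[2:])       # toy quality measure on the remaining dims

archive = {}                                # cell -> (fitness, solution)
for _ in range(200):                        # seed the archive with random solutions
    x = random_solution()
    cell, f = behavior(x), fitness(x)
    if cell not in archive or f > archive[cell][0]:
        archive[cell] = (f, x)

for _ in range(20000):                      # main loop: pick an elite, perturb, maybe replace
    parent = random.choice(list(archive.values()))[1]
    child = mutate(parent)
    cell, f = behavior(child), fitness(child)
    if cell not in archive or f > archive[cell][0]:
        archive[cell] = (f, child)          # goal switching: the child may win a different cell

print(f"filled {len(archive)} of {GRID * GRID} cells")
```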
Another benefit of the goal-switching in QD algorithms is the creation of better underlying representations for search. If a structure is repeatedly optimized (or selected) to move in some dimensions and not others, it can restructure itself to make traversing the preferred dimensions of variation more likely than traversing non-preferred dimensions. In the literature of biological and computational evolution, this phenomenon is called *evolvability* [[82](#bib.bib82), [83](#bib.bib83), [187](#bib.bib187), [27](#bib.bib27), [127](#bib.bib127), [186](#bib.bib186), [107](#bib.bib107)]. Lehman and Stanley [[89](#bib.bib89)] showed that Novelty Search produces more evolvability than objective-based search. The reason is intuitive. With objective-based search, the optimization algorithm myopically adds any hack to the representation that improves performance. Like technical debt in software engineering, adding hacks upon hacks without paying the cost of refactoring (which temporarily makes things worse before they ultimately get much better) leads to code that is difficult to adapt to new use cases. In contrast, Novelty Search encourages the production of new behaviors, and thus perturbations that make major changes are less likely to be immediately rejected [[91](#bib.bib91)]. Specifically, Lehman and Stanley showed that neural networks subjected to novelty search were more compact than those produced by objective-based search [[89](#bib.bib89), [92](#bib.bib92)], evidence in favor of the hypothesis that they are more evolvable. Novelty search was also shown to lead to representations that produce more behavioral diversity when perturbed, a measure of evolvability [[92](#bib.bib92)]. However, we found they were not faster at adapting to new tasks [[183](#bib.bib183)], which is contrary to this hypothesis, although more research needs to be done on this question. Additionally, we discovered that when something similar to a neural network encodes pictures and these networks are subject to constantly changing, human-defined goals, the resulting representations are significantly smaller (a proxy for evolvability), more modular, and more hierarchical [[69](#bib.bib69)]. Moreover, we found they are much more likely to produce sensible variations than nonsensical ones, and do so in a hierarchical way. For example, an image of a face might be changed to enlarge or shrink the entire face, or both eyes, or just one eye. Or a smile could be easily converted into a frown, etc. Less likely were changes that transformed a face into an eagle or a scrambled mess of pixels. When automatically generating images with Innovation Engines, we also found that goal-switching led to significantly more compact, adaptable representations [[120](#bib.bib120)].

#### 2.3.3 Environment-generating quality-diversity algorithms

QD algorithms as originally conceived could create a high-quality set of diverse behaviors within a single, pre-defined environment only. However, for AI-GA’s Third Pillar we need to generate different types of environments and their solutions. That desire motivated our recent Paired Open-Ended Trailblazer (POET) algorithm [[189](#bib.bib189)]. In it, a parameter vector θE specifies an environment. In our demonstration domain, the environments were obstacle courses that could have different degrees of hilliness, gaps in the ground of varying width, and tree stumps of varying height.
An additional parameter vector θA contains the weights of a DNN that controls an agent, which in this case is a robot that has to traverse the obstacle course as quickly as possible. An initial agent θA1 is optimized to solve the initial, simple environment θE1. Once the performance of θA1 is good enough, we copy θE1 and change it slightly to create θE2, which now represents a different environment. A copy of θA1 is made to create θA2, which is then optimized to solve the new environment θE2. Importantly, optimization of θA1 on θE1 continues in parallel. Over time, environments are periodically generated by copying any of the current environments whose agent performs above some threshold. All or a subset of the current agents are evaluated on each new environment. Environments are kept only if they are neither too hard for all of the agents in the population nor too easy for any of the agents. A copy of the highest-performing agent is transferred to the new environment, where it begins optimizing to try to solve it.

POET exhibits many desirable properties. First, it creates an expanding set of diverse environmental challenges, each of which can be solved by the current population of agents to some degree. In most cases, agents get better at solving their particular challenge, meaning they are gaining skills. Additionally, agents can goal-switch between challenges. We observe that in many cases the current agent in an environment is stuck on a local optimum, but eventually an agent from another environment transfers in, ultimately leading to much higher performance. One can imagine that POET could create wildly different species of solutions within one run, but our initial demonstration domain was too simple to produce tremendous diversity because the environmental encoding only allowed the production of environments with different amounts of landscape ruggedness and different sizes of gaps and obstacles. However, if POET was combined with a flexible way to encode environments, one of its runs could create water worlds, desert worlds, and mountain worlds, each with its own types of agents customized to perform well in those worlds. Such specialization is especially easy to imagine if the bodies of virtual robots are simultaneously optimized, which is a subfield with a long history [[155](#bib.bib155), [19](#bib.bib19), [20](#bib.bib20), [65](#bib.bib65), [6](#bib.bib6), [55](#bib.bib55)]. Presumably goal-switching would happen much more often within certain types of water worlds, less so amongst more different types of water worlds, and never between water worlds and mountain worlds (at least, not directly). Thus, such algorithms naturally would hedge their bets on the best path to creating any type of interesting solution, including general AI, by simultaneously pursuing a diversity of high-quality solutions. In effect, POET creates multiple, simultaneous, overlapping curricula to learn an ever-expanding set of skills. Many of the curricula might be ineffective dead ends, but as long as some of them are fruitful, the algorithm can succeed.

#### 2.3.4 More expressive environment encodings

One drawback to the POET work is that it assumes a specific type of world, such as obstacle courses in a particular physics simulator, and a specific way of parameterizing that type of world (e.g. a vector with numbers defining the width of the gaps, the height of the stumps, etc.).
But ultimately such a strategy is confined to producing only the environmental challenges allowed by that parameterization of that physics simulator. For example, our obstacle course simulator does not allow the creation of many types of challenges, such as doors, tire swings, swimming worlds, playing chess, or needing to learn different types of mathematics. It also cannot generate sound, smell, and other sensory modalities. In short, the original POET did not have a sufficiently expressive environmental encoding. In our newest work on the Third Pillar [[126](#bib.bib126)], we are creating algorithms that can generate effective learning environments with a *fully expressive environment encoding*, meaning one that can generate all possible learning environments. Recalling the concept of computers that are Turing Complete, we might call an environmental encoding that can create *any possible learning environment* Darwin Complete. The name reflects that the encoding enables the creation of all of the environments that made Darwinian evolution successful (and many more).

To create a Darwin Complete environment encoding, we generate environments via a deep neural network (that can optionally be recurrent). We call these DNNs *Generative Teaching Networks* [[126](#bib.bib126)] because we explicitly train them to be optimal teachers for another student deep neural network that learns on the data (or in the environment) the GTNs create. Because recurrent neural networks are Turing Complete [[152](#bib.bib152)], GTNs can produce any type of data, and thus could in theory create everything from image classification tasks to entire virtual 3D worlds complete with sound, touch, and smell. They could also create opponent agents to play against, and thus are strictly more expressive than self-play techniques (although they are likely much harder to optimize). That said, it may be easier to have trained agents interact within GTN-produced worlds, rather than forcing the GTN to learn to produce the policies of an arbitrary number of agents. In that option, there would be an entire population of agents in each generated environment. That would potentially make it easier to create multi-agent interactions, including the emergence of language and culture (including the catalyzing force of agents learning via cultural transmission and the cultural ratchet, meaning the amassing of increasing amounts of knowledge over time that each agent can learn from [[181](#bib.bib181)]).

Because DNNs are differentiable, we can have GTNs produce training data or environments for a learner DNN, then test the trained learner on a target task, and differentiate back through the entire learning process to update the parameters of the GTN to improve its ability to produce effective training data. Because the GTN is trying to make the learner good on a predefined task, this use case is an example of the “target-task” version of generating effective learning environments. In our experiments so far, we have shown that a GTN can be trained to produce data that enables a student network to learn to classify MNIST. Interestingly, the GTN is not constrained to produce realistic-looking training data (e.g. images a human would recognize as handwritten digits). In fact, we found that the data it generates look completely unrecognizable and alien, yet still the student network learns to recognize real handwritten digits. Moreover, the student network learns four times faster than when training on real MNIST training data!
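The mechanism just described, differentiating the student’s performance on the real target task back through its entire inner-loop training on generated data, can be sketched on a toy regression problem. The architectures, task, and hyperparameters below are my own illustrative choices and are far simpler than the cited work.

```python
# Illustrative sketch only of the Generative Teaching Network idea on toy 1-D regression:
# a generator produces synthetic (x, y) pairs, a freshly initialized student takes a few
# SGD steps on them, and the generator is meta-trained by backpropagating the student's
# loss on *real* data through the entire inner learning process.
import torch
import torch.nn as nn
import torch.nn.functional as F

generator = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))  # noise -> (x, y)
meta_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

def real_batch(n=64):                        # the target task: y = sin(x)
    x = torch.rand(n, 1) * 6 - 3
    return x, torch.sin(x)

def init_student():                          # tiny MLP with explicit (functional) weights
    w1 = (0.1 * torch.randn(1, 16)).requires_grad_()
    b1 = torch.zeros(16, requires_grad=True)
    w2 = (0.1 * torch.randn(16, 1)).requires_grad_()
    b2 = torch.zeros(1, requires_grad=True)
    return [w1, b1, w2, b2]

def student_forward(params, x):
    h = torch.tanh(x @ params[0] + params[1])
    return h @ params[2] + params[3]

inner_lr, inner_steps = 0.1, 5
for it in range(3000):
    params = init_student()                  # a new 'newborn' student every meta-iteration
    for _ in range(inner_steps):             # inner loop: the student learns on generated data only
        sample = generator(torch.randn(32, 4))
        x_syn, y_syn = sample[:, :1], sample[:, 1:]
        loss = F.mse_loss(student_forward(params, x_syn), y_syn)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        params = [p - inner_lr * g for p, g in zip(params, grads)]
    x_real, y_real = real_batch()            # outer loop: how well was the student taught?
    meta_loss = F.mse_loss(student_forward(params, x_real), y_real)
    meta_opt.zero_grad()
    meta_loss.backward()                     # gradients flow through the inner SGD steps
    meta_opt.step()
```

The key design choice in such a sketch is the functional inner loop: student weights are kept as plain tensors and updated with create_graph=True so that the meta-loss gradient can flow from the real-data evaluation back into the generator’s parameters.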
The result that unrecognizable images are meaningful to DNNs is reminiscent of the realization that deep neural networks are easily fooled and will declare that unrecognizable images are everyday objects (e.g. guitars and starfish) with near certainty [[117](#bib.bib117)]. It is an interesting, open question as to whether natural brains, including those of humans, could be rapidly trained to perform any skill via such alien data (à la the novel Snow Crash). Interestingly, researchers recently showed that they could generate fooling images (‘super-stimuli,’ in the parlance of biologists) for the neurons of real monkeys [[130](#bib.bib130)]; they did so by using the DGN-AM technique [[118](#bib.bib118)] to synthesize images that activate neurons in a live monkey’s brain. Many of the synthesized images were unrecognizable, yet activated the neurons in the brains of monkeys more than any real images from the natural world.

As mentioned above, there are two options for generating effective learning environments. The preceding paragraphs describe how we have already experimented with GTNs in the target-task paradigm. Intriguingly, GTNs (or any sufficiently expressive environment generator) can also be used in the open-ended paradigm for generating effective learning environments. This is a promising path to making progress on AI-GA’s Third Pillar. The idea is to harness GTNs to produce an expanding set of learning challenges for agents. For example, one could pair the GTN encoding with POET to create an expanding set of GTNs that each specify an environment. Alternatively, a single, powerful GTN could be created that is conditioned on a noise vector (and possibly an agent’s past experience and learning progress) to endlessly produce new, effective learning challenges. Of course, just because the GTN can do that in theory does not mean it will be easy to make it work in practice, and much work remains to accomplish that lofty goal. Additionally, GTNs are just one approach to generating learning environments. They may prove *too* expressive and thus make searching their vast search space intractable. We might want to bake in more prior knowledge by constraining all environments to be in a physics simulator (based on our laws of physics), which both narrows the search space considerably and increases the chance that the skills learned will be relevant to our world and more comprehensible to us. Many other approaches are viable, and research into the best ways to generate effective learning environments, including how to encode them, will be essential for progress.

A wide open question in this line of research is what the reward function for the environment generator should be in the open-ended version of generating effective learning environments. This is a deep, fascinating, hard question that could be a key to unlocking significant progress in machine learning research. Finding the answer to this question could finally enable us to solve the longstanding grand challenge of producing open-ended search algorithms [[169](#bib.bib169)] and producing general AI, potentially solving two grand challenges in one stroke. I do not have an answer to what this environment-generator reward function should be, but I have some ideas that researchers could begin experimenting with and improving on. The question relates to abstracting *what environments were for* in natural evolution or, analogously, what *problems were for* in the history of scientific and technological innovation.
What role did environments serve in producing the complexity explosion on Earth, including creating human intelligence? What role did problems play in scientific and technological innovation within our culture? What I consider the most promising direction for potential reward functions for environment generators is to define environments as useful (and thus reward their creation) if the environments are such that agents transferring in from other, previously generated environments (1) perform well in the new environments faster than if they were trained on them from scratch, and (2) learn in the new environments (i.e. the environments are not too easy and not too hard, such that the agent experiences ‘learning progress’ [[146](#bib.bib146)]). The first condition encourages shared structure between the problems (e.g. similar laws of physics, or mathematical rules), such that having learned in some subset of problems enables agents to transfer that knowledge to other environments. That could prevent the creation of arbitrarily different problems that are challenging, but in uninteresting ways. The second condition prevents the creation of worlds that do not require and encourage learning. It also forces environments to be new in some way. This idea needs to be developed, improved, and experimentally investigated, but it gives a hint of how we can begin to make progress on producing very general principles for preferring the creation of some environments versus others. Another idea that could help is an explicit pressure to produce generalists, perhaps by incentivizing agents to be able to solve as many niches as possible. We have other ideas for how to reward the generation of interesting environments, but I cannot share them because we are actively investigating them. It is likely that none of these ideas will just work. Instead, we need lots of research by the community into these questions to make progress on this front.

A major open question that remains is how we can constrain the generation of environments to be those we find interesting and/or that produce intelligence that helps us solve real-world problems. In other words, how do we ground the environment generator to make things relevant to us and our universe? For example, one might argue that such a system could produce intelligence that is alien to us and that we cannot communicate with. However, if it is truly general intelligence, presumably through its learning efforts and our own we could learn to communicate with it. Additionally, creating alien forms of intelligence would be fascinating as it would teach us about the limits and possibilities within the space of intelligent beings.

#### 2.3.5 The viability of the Third Pillar

The hypothesis behind the Third Pillar is that coming up with general principles for how to create effective learning environments is easier than creating a curriculum of training environments by hand. As with the overall AI-GA philosophy, the bet is that with sufficient compute, letting learning algorithms solve the problem will ultimately be easier than trying to solve it ourselves. Of course, this may require orders of magnitude more compute than we have at present, and whether we will have sufficient compute to see this pillar succeed before general AI is produced by the manual path is an open question.
However, I predict that we will abstract the role environments play in the creation of complexity and intelligence, and that doing so will shave orders of magnitude off the total amount of compute required, such that we will not need the computation that was required on Earth to produce humans in order to create general AI via an AI-GA. One thing in favor of the Third Pillar, and the AI-GA approach in general, is that we know it can work. Darwinian evolution on Earth is the only existence proof we have of how to build general intelligence, and AI-GAs are modeled off of that phenomenon. It is a fascinating research grand challenge to figure out whether we can extract the principles that made Earth’s intelligence-generating algorithm so successful in a way that enables us to create an AI-GA with the computation available to us. Note that the AI-GA grand challenge (creating an algorithm that generates general AI) is related to, but different from, the grand challenge of creating open-ended search algorithms [[169](#bib.bib169)]. One could create an AI-GA without it being open-ended. For example, one could launch an AI-GA with the target task being to pass the Turing Test, and the algorithm would then halt when it produces general AI because it would have no other goal to optimize for. One could also create an open-ended algorithm that would never produce an AI-GA (e.g. one that generates endless music innovations). That said, these two grand challenges are deeply related and will catalyze each other’s progress.

3 Discussion
-------------

The AI-GA approach raises many questions. This section attempts to quickly address a variety of them.

Traditional machine learning involves hand-designing an environment, hand-selecting an architecture, and then hand-designing a learning algorithm that learns to solve that environment. Meta-learning represents a step towards a more fully automated pipeline. One area of research in meta-learning focuses on learning architectures. Another area of meta-learning is learning the learning algorithm. But in both, a researcher is still required to design the distribution of tasks for meta-training. AI-GAs take the next step in this natural progression by automatically learning all three things: the architecture, the learning algorithm, and the training environments.

That said, the AI-GA approach is not free of building blocks. It does not start from scratch, trying to create intelligence from nothing but the laws of physics and a soup of atoms. It might, for example, start with the assumption that we want neural networks with regular and neuromodulatory neurons, niches and explicit goal-switching, and populations of agents within each environmental niche. Research on AI-GAs will involve identifying the minimal set of sufficient and catalyzing building blocks needed to create an AI-GA. But the set of building blocks AI-GA researchers will look for will be very different from those for the manual engineering approach. For example, the AI-GA approach will likely attempt to automatically learn everything in Table [1](#S1.T1 "Table 1 ‣ 1.1 The manual AI approach ‣ 1 Two approaches to producing general AI: the manual approach vs. AI-generating algorithms ‣ AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence").
Rather than discovering those building blocks manually, AI-GA researchers will try to figure out which building blocks enable us to abstract open-ended complexity explosions in a computationally tractable way. I hypothesize that fewer building blocks need to be identified to produce an AI-GA than to produce intelligence via the manual AI approach. For that reason, I further hypothesize that identifying them and how to combine them will be easier, and thus that the AI-GA path is more likely to produce general AI sooner. That said, an open question is how much compute is required to make an AI-GA. If we magically had vastly more compute available starting tomorrow, my confidence in AI-GAs producing general AI before the manual path would be greatly increased. What is less clear is whether AI-GAs can beat the manual path given that the computation AI-GAs need is not yet available. The lack of computation gives the manual path an advantage since it is more compute-efficient. On the other hand, as Rich Sutton has argued, history often favors algorithms that better take advantage of the exponential increases in computation the future provides [[175](#bib.bib175)].

It may turn out that the manual path can succeed without having to identify hundreds of building blocks. It might turn out instead that only a few are required. At various points in the history of physics many pieces of theoretical machinery were thought to be required, only later to be unified into smaller, simpler, more elegant frameworks. Such unification could also occur within the manual AI path. But at some point, if everything is being learned from a few basic principles, it seems more like an AI-GA approach than a manual identify-then-combine engineering approach. In other words, the AI-GA approach *is* the quest for a set of simple building blocks that can be combined to produce general AI, so true unification would validate it.

One interesting way to catalyze the Third Pillar is to harness data from the natural world when creating effective learning environments. This idea could accelerate progress in both the target-task and/or open-ended versions of the Third Pillar. For example, the environment generator could be given access to the Internet. It could then learn to generate tasks that involve classifying real images, imitating the skills animals and/or humans perform (e.g. in online video repositories such as YouTube), solving problems in existing textbooks, or solving existing machine learning benchmarks in language, logic, reinforcement learning, etc. There is a long history of fruitful research in imitation learning and learning via observation that demonstrates the benefits of exploiting such data [[37](#bib.bib37), [13](#bib.bib13), [162](#bib.bib162), [7](#bib.bib7), [142](#bib.bib142), [36](#bib.bib36), [182](#bib.bib182), [129](#bib.bib129), [116](#bib.bib116), [1](#bib.bib1)]. AI-GAs too could benefit from this treasure trove of information. Incorporating such tasks might also have the benefit of making the AI that results more capable at solving problems in our world and better able to communicate with and understand us (and vice versa).

There is a third path to general AI. I call it the “mimic path.” It involves neuroscientists studying animal brains, especially the human brain, in an attempt to reverse engineer how intelligence works in animals and rebuild it computationally.
In contrast to abstracting the general principles found via neuroscience and building them in any workable instantiation (which is the purview of the manual path), the mimic path attempts to recreate animal brains with as much fidelity as possible. The most famous example of this approach is the work of Henry Markram and his Blue Brain and European Human Brain projects. This path is also independently worthwhile, and should be pursued irrespective of whether the other two paths have already succeeded. That is because if the manual or AI-GA paths build general AI that does not resemble human brains, we would still be interested in specifically how human intelligence works. If the mimic path wins, we would still be interested in the other two paths for the reasons outlined earlier. The mimic path is unlikely to be the fastest path to producing general AI because it does not seek to benefit from the efficiencies of abstraction. For example, if an entire neocortical column could be functionally replaced by a multi-layer recurrent neural network with skip connections, the spirit of the mimic path would be to eschew that option in favor of faithfully replicating the actual human neocortical column with all of its expensive-to-simulate complexity. Additionally, the mimic path is slowed by the difficulty of creating the technologies required to identify what is happening inside functioning natural brains. Overall, I do not view the mimic path as one of the major paths to producing general AI because the goal of those committed to it is not solely to produce general AI. I thus only mention it this late in this essay, and still consider the manual and AI-GA paths as the two main paths.

For the most part, this article has assumed that neural networks have a large part to play in the creation of AI-GAs. That reflects my optimism regarding deep neural networks, as well as my experience and passions. That said, it is of course possible that other techniques may produce general AI. One could substitute other techniques into the pillars of the AI-GA framework. For example, one could replace Pillar One and Pillar Two with Solomonoff induction via AIXI [[71](#bib.bib71), [70](#bib.bib70)] or similar ideas. To do so, major advances would be required to make these approaches computationally tractable, as with DNNs. However, one would still need work on AI-GA’s Third Pillar because Solomonoff induction and AIXI take as a starting assumption a single environment to be solved. That highlights again that generating effective learning environments is the newest, least-explored area of research of the three pillars, even across far disparate areas of AI research. Advances in it may thus benefit many different subfields of AI.

I also want to emphasize that AI-GAs are *not* an “evolutionary approach,” despite people gravitating towards calling them that even after I have said otherwise. There are a few reasons to avoid that terminology. A first reason is that many methods could serve as the outer-loop optimizer. For example, one could use gradient-based meta-learning via meta-gradients [[126](#bib.bib126), [43](#bib.bib43), [103](#bib.bib103)] or policy gradients [[188](#bib.bib188), [35](#bib.bib35)]. Additionally, one could potentially use Bayesian Optimization as the outer-loop search algorithm, although innovations would be needed to help it search in high-dimensional search landscapes. Of course, evolutionary algorithms are also an option.
There are pros and cons to using evolutionary methods [[173](#bib.bib173), [93](#bib.bib93), [143](#bib.bib143), [192](#bib.bib192), [164](#bib.bib164)], and benefits to hybrid approaches that combine evolutionary concepts (searching in the parameter space) with policy-gradient concepts (searching in the action space) [[128](#bib.bib128), [44](#bib.bib44)]. Because they are just one of many choices, calling AI-GAs an evolutionary approach is inaccurate. A second reason to avoid calling this an evolutionary approach is that many people in the machine learning community seem to have concluded that evolutionary methods are not worthwhile and thus not worth considering. There is thus a risk that if the AI-GA idea is associated with evolution it will not be evaluated with a clear, objective, open mind, but instead will be written off for reasons that do not relate to its merits.

Of course, in practice the three different paths will not exist isolated from each other. The manual, mimic, and AI-GA paths will all inform each other as discoveries in one catalyze and inspire work in the other two. Additionally, people will pursue hybrid approaches (e.g. the vision outlined in Botvinick et al. [[15](#bib.bib15)]) and it will be difficult in many cases to tell which path a particular group or project belongs to. That said, it is instructive to give these different approaches their own names and talk about them separately despite the inevitable cross-pollination and hybridization that will occur.

AI-GAs are a bet on learning in simulated worlds. The hypothesis is that we could create general AI entirely in such virtual worlds (although they may have to be complex worlds). However, once *general* AI is produced, that AI would be a sample-efficient learner that could efficiently learn in our world. For example, such an agent could be transferred to a robot that could learn to maneuver in our world, including helping and otherwise interacting with humans. Thus, rather than specifically designing techniques to transfer from simulation to reality, the approach would not require such adaptation because the general AI would perform that adaptation efficiently itself.

Many researchers will enjoy research into creating AI-GAs. There are many advantages to AI-GA research versus the manual path. First, there are currently tens of thousands of researchers pursuing the manual path, but very few scientists currently pursuing AI-GAs. For many, that will make AI-GA research more exciting. It also decreases the risk of getting scooped. Additionally, to pursue the manual path, one might feel (as I have historically) the need to stay abreast of developments on *every* building block one considers potentially important to creating a large, complicated general AI machine. One might thus feel a desire to read all new, important papers on each of the building blocks in Table [1](#S1.T1 "Table 1 ‣ 1.1 The manual AI approach ‣ 1 Two approaches to producing general AI: the manual approach vs. AI-generating algorithms ‣ AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence"), as well as building blocks it does not list and new ones as they are discovered. That is of course impossible given the number of such papers published each year. Because AI-GAs have fewer researchers and, if my hypothesis is correct, fewer building blocks required to make them, staying abreast of the AI-GA literature should prove more manageable.
Additionally, if the large-scale trend in machine learning continues, wherein hand-coded components are replaced by learning, working on AI-GAs is a way to future-proof one's research. To put it brashly, knowing what you know now, if you could go back 15 years, would you want to dedicate your career to improving HOG and SIFT? Or would you rather invest in the learning-based approaches that ultimately would render HOG and SIFT unnecessary, and prove to be far more general and powerful? As AI-GA research advances, it will likely generate many innovations that can be ported to the manual path. It will also create techniques and narrow AIs of economic importance that will benefit society long before the ambitious goal of producing general AI is accomplished. For example, creating algorithms that automatically search for architectures, learning algorithms, and training environments that solve tasks will be greatly useful, even for tasks far less ambitious than creating general AI. Thus, even if creating an AI-GA ultimately proves impossible, research into it will be valuable. There has been a long debate in the artificial intelligence and machine learning community as to where we should be on the spectrum between learning everything from scratch and hand-designing everything. Some argue that we need to inject lots of human priors into a system in order for it to be sample-efficient and/or generalize well. Others think we should learn as much as possible from data, eschewing human priors because learned solutions are ultimately better once there is sufficient compute and data. Where does the AI-GA paradigm fit in this debate? First, where we should be on the spectrum of course depends on our goals and how much time we have to achieve them. If we already know how to build a solution that performs well enough and performance is all we care about, we should clearly do that. Additionally, if we want to build a solution on a short timeframe, it is likely that the fastest path to a decent solution will involve lots of human prior information (unless we already know how to create learning-based solutions, as is the case in the many areas where deep learning is currently the dominant technique). However, for more ambitious goals over longer time horizons, it is often the case that betting on a learning-based solution is a good strategy. That said, a commitment to learning is not a commitment to sample-inefficient learners that only perform well with extreme levels of data and that generalize poorly. The AI-GA philosophy is that via a compute-intensive, sample-inefficient outer-loop optimization process we can produce learning agents that are extremely sample efficient and that generalize well. Just as evolution (a slow, compute-intensive, sample-inefficient process) produced human intelligence, as AI-GAs advance they will produce individual learners that increasingly approach human learners in sample efficiency and generalization abilities. One might argue that this means the system itself is not sample efficient, because it requires so much data. That is true in some sense, but not true in other important ways. One important meaning of sample efficiency is how many samples are needed *on a new problem*. For example, a new disease might appear on Earth and we may want doctors or AI to be able to identify it and make predictions about it from very few labeled examples.
If an AI-GA produces anything akin to human-level intelligence, the learner produced would be able, as humans are, to be sample efficient with respect to this new problem. The idea behind AI-GAs is that as compute becomes cheaper and our ability to generate sufficiently complex learning environments grows, we can afford to be sample inefficient in the outer loop in the service of producing a learner that is at or beyond human intelligence in being sample efficient when deployed on problems we care about. Putting aside their computational cost, AI-GAs thus in some sense represent the best of both ends of the spectrum: they can learn from data without being constrained by human priors, yet they can produce something that, like humans themselves, contains powerful priors and ways to efficiently update them to rapidly solve problems.

4 Safety and ethical considerations
------------------------------------

Any discussion of producing general AI raises the ethical question of whether we should be pursuing this goal. The question of whether and why we should create general AI is a complicated one and is the focus of many articles [[140](#bib.bib140), [39](#bib.bib39), [4](#bib.bib4), [17](#bib.bib17), [14](#bib.bib14)]. I will not delve into that issue here as it is better served when it is the sole issue of focus. However, the AI-GA path introduces its own unique set of ethical issues that I do want to mention here. In my view, the largest ethical concern unique to the AI-GA path is that it is, by definition, attempting to create a runaway process that leads to the creation of intelligence superior to our own. Many AI researchers have stated that they do not believe that AI will suddenly appear, but instead that progress will be predictable and slow. However, it is possible in the AI-GA approach that at some point a set of key building blocks will be put together and paired with sufficient computation. It could be the case that the same amount of computation had previously been insufficient to do much of interest, yet suddenly the combination of such building blocks finally unleashes an open-ended process. I consider it unlikely to happen any time soon, and I also think there will be signs of much progress before such a moment. That said, I also think it is possible that a large step-change occurs such that prior to it we did not think that an AI-GA was in sight. Thus, the stories of science fiction of a scientist starting an experiment, going to sleep, and awakening to discover they have created sentient life are far more conceivable in the AI-GA research paradigm than in the manual path. As mentioned above, no amount of compute spent training a computer to recognize images, play Go, or generate text will suddenly make it sentient. However, an AI-GA research project with the right ingredients might, and the first scientist to create an AI-GA may not know they have finally stumbled upon the key ingredients until afterwards. That makes AI-GA research more dangerous. Relatedly, a major concern with the AI-GA path is that the values of an AI produced by the system are less likely to be aligned with our own. One has less control when one is creating AI-GAs than when one is manually building an AI machine piece by piece. Worse, one can imagine that some ways of configuring AI-GAs (i.e. ways of incentivizing progress) that would make AI-GAs more likely to succeed in producing general AI would also make their value systems more dangerous.
For example, some researchers might try to replicate a basic principle of Darwinian evolution: that it is 'red in tooth and claw.' If a researcher tried to catalyze the creation of an AI-GA by creating conditions similar to those on Earth, the results might be similar. We might thus produce an AI with human vices, such as violence, hatred, jealousy, deception, cunning, or worse, simply because those attributes make an AI more likely to survive and succeed in a particular type of competitive simulated world. Note that one might create such an unsavory AI unintentionally by not realizing that the incentive structure they defined encourages such behavior. In fact, it is routine for researchers to be surprised by the strategies AI comes up with to optimize the objectives it is given, and often the strategies and behaviors it creates are not at all what the researcher intended [[94](#bib.bib94)]. That phenomenon will only increase as AI-GA systems become more powerful and complex and are able to make significant advances on their own. If a system based on red-in-tooth-and-claw competition is a faster path to a successful AI-GA, or even if it is thought to be, then some organizations or individuals on Earth may be incentivized to create such systems despite their potential risks. Additionally, it is likely safer to create AI when one knows how to make it piece by piece. To paraphrase Feynman again, one better understands something when one can build it. Via the manual approach, we would likely understand relatively more about what the system is learning in each module and why. The AI-GA system is more likely to produce a very large black box that will be difficult to understand. That said, even current neural networks, which are tiny and simple compared to those that will likely be required for AGI, are inscrutable black boxes whose inner workings are very difficult to understand [[117](#bib.bib117), [119](#bib.bib119), [121](#bib.bib121), [118](#bib.bib118), [197](#bib.bib197), [99](#bib.bib99), [196](#bib.bib196), [177](#bib.bib177), [104](#bib.bib104), [198](#bib.bib198), [97](#bib.bib97), [52](#bib.bib52)]. Once these networks are larger and have more complex, interacting pieces, the result might be sufficiently inscrutable that it does not end up mattering whether the inscrutability is even higher with AI-GAs. While ultimately we likely will learn much about how these complex brains work, that might take many years. From the AI safety perspective, however, what is likely most critical is our ability to understand the AI we are creating right around the time that we are finally producing very powerful AI. For all these reasons, it is essential to invest in AI-GA-specific AI safety research. AI-GA researchers need to be in constant communication with AI safety researchers to help inform them. Ideally, AI-GA researchers should conduct safety research themselves in addition to making AI-GA advances. Each AI-GA scientist must take precautions to try to ensure that AI-GA research is safe and, should it succeed, that it produces AIs whose values are aligned with our own. It is fair to ask why I should write this paper if I think AI-GA research is more dangerous, given that I am attempting to inform people about it potentially being a faster path to general AI and advocating that more people work on this path. One reason is that I believe that, on balance, technological advances produce more benefit than harm. That said, this technology is very different and could prove an exception to the rule.
A second reason is because I think society is better off knowing about this path and its potential, including its risks and downsides. We might therefore be better prepared to maximize the positive consequences of the technology while working hard to minimize the risks and negative outcomes. Additionally, I find it hard to imagine that, if this is the fastest path to AI, then society will not pursue it. I struggle to think of powerful technologies humanity has not invented soon after it had the capability to do so. Thus, if it is inevitable, then we should be aware of the risks and begin organizing ourselves in a way to minimize those risks. Very intelligent people disagree with my conclusion to make knowledge of this technology public. I respect their opinions and have discussed this issue with them at length. It was not an easy decision for me to make. But ultimately I feel that it is a service to society to make these issues public rather than keep them the secret knowledge of a few experts. There is another ethical concern, although many will find it incredible and dismiss it as the realm of fantasy or science fiction. We do not know how physical matter such as atoms can produce feelings and sensations like pain, pleasure, or the taste of chocolate, which philosophers call qualia. While some disagree, I think we have no good reason to believe that qualia will not emerge at some point in artificially intelligent agents once they are complex enough. A simple thought experiment makes the point: imagine if the mimic path enabled us to simulate an entire human brain and body, down to each subatomic particle. It seems likely to me that such a simulation would feel the same sensations as its real-world counterpart. Recognizing if and when artificial agents are feeling pain, pleasure, and other qualia that are worthy of our ethical considerations is an important subject that we will have to come to terms with in the future. However, that issue is not specific to the method in which AI is produced, and therefore is not unique to the AI-GA path. There is an AI-GA-specific consideration on this front, however. On Earth, there has been untold amounts of suffering produced in animals en route to the production of general AI. Is it ethical to create algorithms in which such suffering occurs if it is essential, or helpful, to produce AI? Should we ban research into algorithms that create such suffering in order to focus energy on creating AI-GAs that do not involve suffering? How do we balance the benefits to humans and the planet of having general AI vs. the suffering of virtual agents? These are all questions we will have to deal with as research progresses on AI-GAs. They are related to the general question of ethics for artificial agents, but have unique dimensions worthy of specific consideration. Some of these ideas will seem fantastical to many researchers. In fact, it is risky for my career to raise them. However, I feel obligated to let society and our community know that I consider some of these seemingly fantastical outcomes possible enough to merit consideration. For example, even if there is a small chance that we create dangerous AI or untold suffering, the costs are so great that we should discuss that possibility. As an analogy, if there were a 1% chance that a civilization-ending asteroid could hit Earth in a decade or ten, we would be foolish not to begin discussing how to track it and prevent that catastrophe. 
We should keep in mind the grandeur of the task we are discussing, which is nothing short of the creation of an artificial intelligence smarter than humans. If we succeed, we will arguably also have created life itself, by some definitions. We do not know if that intelligence will feel. We do not know what its values might be. We do not know what its intentions towards us may be. We might have an educated guess, but any student of history would recognize that it would be the height of hubris to assume we know with certainty exactly what general AI will be like. Thus, it is important to encourage, instead of silence, a discussion of the risks and ethical implications of creating general artificial intelligence.

5 Conclusions
--------------

In this essay I described three paths to producing general artificial intelligence. The 'mimic path' builds as many biological details of human brains as possible into computational models, and is pursued largely by neuroscientists, computational neuroscientists, and cognitive scientists. The mimic path is unlikely to be the fastest path to general AI because it attempts to simulate all of the detail of biological brains irrespective of whether those details can be ignored or abstracted away by different, more efficient machinery. The 'manual path' is what most of the machine learning community is currently committed to. It involves two phases. In Phase 1, we identify each of the building blocks necessary to create a complex thinking machine. This is the phase we are currently in: most papers either introduce new candidate building blocks or improvements to previously proposed building blocks. It is unclear how long it might take to identify all of the correct building blocks, including the right variants of each one. The manual path implicitly assumes a Phase 2, where we will undertake the Herculean task of figuring out how to combine all of the correct variants of these building blocks into a complex thinking machine. One of the goals of this essay is simply to make explicit the path most of machine learning is committed to, and that this path implies a second phase that is rarely discussed. Of course, these phases will in practice overlap. There will be teams that increasingly try to combine existing building blocks while other teams continue to create new building blocks and improve existing ones. I discussed the scientific, engineering, and sociological pros and cons of this manual path to creating general AI. I also described an alternative path to AI: creating general AI-generating algorithms, or AI-GAs. This path involves Three Pillars: meta-learning architectures, meta-learning algorithms, and automatically generating effective learning environments. As with the other paths, there are advantages and disadvantages to this approach. A major con is that AI-GAs will require a lot of computation, and therefore may not be practical in time to be the first path to produce general AI. However, the AI-GA path's ability to benefit more readily from exponential improvements in the availability of compute may mean that it surpasses the manual path before the manual path succeeds. A reason to believe that the AI-GA path may be the fastest to produce general AI is that it is in line with the longstanding trend in machine learning that hand-coded solutions are ultimately surpassed by learning-based solutions as the availability of computation and data increases over time.
Additionally, the AI-GA path may win because it does not require the Herculean Phase 2 of the manual path and all of its scientific, engineering, and sociological challenges. An additional benefit of AI-GA research is that fewer people are working on it, making it an exciting, unexplored research frontier. All three paths are worthwhile scientific grand challenges. That said, society should increase its investment in the AI-GA path. There are entire fields and billions of dollars devoted to the mimic path. Similarly, most of the machine learning community is pursuing the manual path, including billions of dollars in government and industry funding. Relative to these levels of investment, there is little research and investment in the AI-GA path. While still small relative to the manual path, there has been a recent surge of interest in Pillar 1 (meta-learning architectures) and Pillar 2 (meta-learning algorithms). However, there is little work on Pillar 3, and no work to date on attempting to combine the Three Pillars. Since the AI-GA path might be the fastest path to producing general AI, society should substantially increase its investment in AI-GA research. Even if one believes the AI-GA path has only a 1%-5% chance of being the first to produce general AI, we should allocate corresponding resources to the field to catalyze its progress. That, of course, assumes we conclude that the benefits of potentially producing general AI faster outweigh the risks of producing it via AI-GAs, which I ultimately do. At a minimum, I hope this paper motivates a discussion on these questions. While there is great uncertainty about which path will ultimately produce general AI first, I think there is little uncertainty that we are underinvesting in a promising area of machine learning research. Finally, this essay has discussed many of the interesting consequences of building general AI that are unique to producing general AI via AI-GAs. One benefit is being able to produce a large diversity of different types of intelligent beings, thus accelerating our ability to understand intelligence in general and all its potential manifestations. Doing so may also better help us understand our own single instance of intelligence, much as traveling the world is necessary to truly understand one's hometown. Each different intelligence produced by an AI-GA could also create entire alien histories and cultures from which we can learn. Downsides unique to AI-GAs were also discussed, including that the approach might make the sudden, unanticipated production of AI more likely, that it might make producing dangerous forms of AI more likely, and that it may create untold suffering in virtual agents. While I offered my own views on these issues and how I weigh the positives and negatives of this technology for the purpose of deciding whether we should pursue it, a main goal of mine is to motivate others to discuss these important issues. My overarching goal in this essay is not to argue that one path to general AI is likely to be better or faster. Instead, it is to highlight that there is an entirely different path to producing general AI that is rarely discussed. Because research on that path is less well known, I briefly summarized some of the research we and others have done to take steps towards creating AI-GAs.
I also want to encourage reflection on (1) which path or paths each of us is committed to and why, (2) the assumptions that underlie each path, (3) the reasons why each path might prove faster or slower in the production of general AI, (4) whether society and our community should rebalance our investment in the different paths, and (5) the unique benefits and detriments of each approach, including AI safety and ethics considerations. It is my hope that this essay will improve our collective understanding of the space of possible paths to producing general AI, which is worthwhile for everyone regardless of which path we choose to work on. I also hope this essay highlights that there is a relatively unexplored path that may turn out to be the fastest path in the greatest scientific quest in human history. I find that extremely exciting, and hope to inspire others in the community to join the ranks of those working on it.

Acknowledgements
----------------

My foremost thanks go to Ken Stanley and Joel Lehman, whose lifetime of excellent, creative work greatly informed, influenced, and inspired my thinking on this subject. For helpful discussions on the ideas in this manuscript and/or comments on the manuscript, I thank both of them and Peter Dayan, Zoubin Ghahramani, Ashley Edwards, Felipe Petroski-Such, Vashisht Madhavan, Joost Huizinga, Adrien Ecoffet, Rui Wang, Fritz Obermeyer, Martin Jankowiak, Miles Brundage, Jack Clark, and all the members of Uber AI Labs.
04e3d32a-9c9b-4a26-8ab9-586aa8c0a8e4
trentmkelly/LessWrong-43k
LessWrong
Behavioral Sufficient Statistics for Goal-Directedness Note: this is a new version -- with a new title -- of my recent post "A Behavioral Definition of Goal-Directedness". Most of the formulas are the same, except for the triviality one that deals better with what I wanted; the point of this rewrite is to present the ideas in a perspective that makes sense. I'm not proposing a definition of goal-directedness, but just sufficient statistics on the complete behavior that make a behavioral study of goal-directedness more human-legible. I also use this new version as a first experiment in another approach to feedback: this post includes a lot of questions asked through the elicit prediction feature. A lot. I definitely tried to overshoot the reasonable number to add, to compensate my tendency to never use them. But don't worry: whether or not there were too many questions will be the subject of another question at the end! Introduction In a previous post, I argued for the study of goal-directedness in two steps: > * Defining goal-directedness: depends only on the complete behavior of the system, and probably assumes infinite compute and resources. > * Computing goal-directedness: depends on the internal structure, and more specifically what information about the complete behavior can be extracted from this structure. Intuitively, understanding goal-directedness should mean knowing which questions to ask about the complete behavior of the system to determine its goal-directedness. Here the “complete” part is crucial; it simplifies the problem by removing the need to infer what the system will do based on limited behavior. Similarly, we don’t care about the tractability/computability of the questions asked; the point is to find what to look for, without worrying (yet) about how to get it. Elicit Prediction (forecast.elicit.org/binary/questions/GUv6153YY) Despite these simplifications, the behavioral approach still suffers from one massive problem: it's not human-legible. We don't know what to do with this mass of loos
2162bca3-2e16-4d95-867d-72e632595688
trentmkelly/LessWrong-43k
LessWrong
Long-chain correlation: lead paint and crime A friend has been asking my views on the likelihood that there's anything to a correlation between changing levels of lead in paint (and automotive exhaust) and the levels of crime. He quoted from a Reason Blog: > So Nevin dove in further, digging up detailed data on lead emissions and crime rates to see if the similarity of the curves was as good as it seemed. It turned out to be even better: In a 2000 paper (PDF) he concluded that if you add a lag time of 23 years, lead emissions from automobiles explain 90 percent of the variation in violent crime in America. Toddlers who ingested high levels of lead in the '40s and '50s really were more likely to become violent criminals in the '60s, '70s, and '80s. I responded with the following: > Sounds like a stretch to me. I'd want to hear that they didn't test more than 5 other hypothesis before coming to that conclusion, or the p value was far better than .05. I kind of doubt that either is the case.  He's apparently continued to pursue the question, and just forwarded these remarks from Steven Pinker that I thought were very illuminating, and probably deserve a place in this community's toolkit for skeptics. Pinker's main point is that the association between Lead and crime is a long tenuous chain of suppositions, and several of the intermediate points should be far easier to measure. Finding correlations at this distance is not very informative. http://stevenpinker.com/files/pinker/files/pinker_comments_on_lead_removal_and_declining_crime.pdf Does the phrase "long-chain correlation" stick in your head and make it easier to dismiss this kind of argument?  
e9c07e25-728e-4098-848f-441c70b5158e
trentmkelly/LessWrong-43k
LessWrong
Arguments are good for helping people reason about things Yudkowsky recently posted a Twitter thread about how ideal reasoners respond to arguments. My understanding of his reasoning is: 1. The more smart/rational/whatever you are, the better you are at figuring out what is true 2. Thus, whether you believe the conclusion of an argument should be based primarily on whether the conclusion is true, rather than on how effectively the argument was presented This principle seems valid to me. Valentine, in a LessWrong comment thread, used Yudkowsky's thread to draw the conclusion that a healthy rationalist community should not make arguments. I think this is silly. We are not, unfortunately, logically omniscient; we cannot just look at data and draw all the correct conclusions from it. The purpose of an argument is to help people realize what conclusions they should draw from data without having to figure it all out on their own.
f4269bab-290a-40ff-aa38-351b3743dc64
trentmkelly/LessWrong-43k
LessWrong
Four visions of Transformative AI success Tl;dr When people work towards making a good future in regards to Transformative AI (TAI), what’s the vision of the future that they have in mind and are working towards? I’ll propose four (caricatured) answers that different people seem to give: * (Vision 1) “Helper AIs”, * (Vision 2) “Autonomous AIs”, * (Vision 3) “Supercharged biological human brains”, * (Vision 4) “Don’t build TAI”. For each of these four, I will go through: * the typical assumptions and ideas that these people seem to typically have in mind; * potential causes for concern; * major people, institutions, and research directions associated with this vision. I’ll interject a lot of my own opinions throughout, including a suggestion that, on the current margin, the community should be putting more direct effort into technical work towards contingency-planning for Vision 2. Warning 1: Oversimplifications. This document is full of oversimplifications and caricatures. But hopefully it’s a useful starting point for certain purposes. Warning 2: Jargon & Unexplained Assumptions. Lots of both; my target audience here is pretty familiar with the AGI safety and alignment literature, and buys into widely-shared assumptions within that literature. But DM me if something seems confusing or dubious, and I’ll try to fix it. Vision 1: “Helper AIs”—AIs doing specifically what humans want them to do 1.1 Typical assumptions and ideas By and large, people in this camp have an assumption that TAI will look, and act, and be trained, much like LLMs, but they’ll work better. They also typically have an assumption of slow takeoff, very high compute requirements for powerful AI, and relatively few big actors who are training and running AIs (but many more actors using AI through an API). There are two common big-picture stories here: * (Less common story) Vision 1 is a vision for the long-term future (example). * (More common story) Vision 1 is a safe way to ultimately get to Vision 2 (or somewhere els
9e69210e-010f-4790-8516-ddb72e5c98cb
trentmkelly/LessWrong-43k
LessWrong
[AN #71]: Avoiding reward tampering through current-RF optimization Find all Alignment Newsletter resources here. In particular, you can sign up, or look through this spreadsheet of all summaries that have ever been in the newsletter. I'm always happy to hear feedback; you can send it to me by replying to this email. Audio version here (may not be up yet). Highlights Designing agent incentives to avoid reward tampering (Tom Everitt, Ramana Kumar, and Marcus Hutter) (summarized by Flo): Reward tampering occurs when a reinforcement learning agent actively changes its reward function. The post uses Causal Influence Diagrams (AN #61) to analyze the problem in a simple grid world where an agent can easily change the definition of its reward. The proposed solution is current-RF optimization: Instead of maximizing the sum of rewards that would be given after each action (where the reward signal can dynamically change over time), the agent searches for and executes a plan of actions that would maximize the current, unchanged reward signal. The agent would then not be incentivized to tamper with the reward function since the current reward is not maximized by such tampering. There are two different flavours to this: time-inconsistency-aware agents account for future changes in their own behaviour due to modified reward signals, while TI-unaware agents ignore this in their planning. TI-aware agents have an incentive to preserve their reward signal and are therefore potentially incorrigible. Flo's opinion: I enjoyed this application of causal diagrams and think that similar detailed analyses of the interactions between failure modes like wireheading, instrumental goals like reward preservation and the specific implementation of an agent would be quite valuable. That said, I am less excited about the feasibility of the proposed solution since it seems to require detailed knowledge of the agent about counterfactual rewards. Also, I expect the distinction between changes in the reward signal and changes in the state that happen to also affect
b91d6679-53ba-4a3a-bbe4-23b04f61a123
trentmkelly/LessWrong-43k
LessWrong
Newcomb’s problem is just a standard time consistency problem (Cross-posted from my blog.)  Confidence level: medium I want to argue that Newcomb’s problem does not reveal any deep flaw in standard decision theory. There is no need to develop new decision theories to understand the problem. I’ll explain Newcomb’s problem and expand on these points below, but here’s the punchline up front. TLDR: 1. Newcomb’s problem, framed properly, simply highlights the issue of time consistency. If you can, you would beforehand commit to being the type of person who takes 1 box; but in the moment, under discretion, you want to 2-box. 2. That is: the answer to Newcomb depends on from which point in time the question is being asked. There’s just no right way to answer the question without specifying this. 3. Confusion about Newcomb’s problem comes when people describing the problem implicitly and accidentally conflate the two different possible temporal perspectives. 4. Newcomb’s problem is isomorphic to the classic time consistency problem in monetary policy – a problem that is well-understood and which absolutely can be formally analyzed via standard decision theory. I emphasize that the textbook version of expected utility theory lets us see all this! There’s no need to develop new decision theories. Time consistency is an important but also well-known feature of bog-standard theory.   I. Background on Newcomb (You can skip this section if you’re already familiar.) Newcomb’s problem is a favorite thought experiment for philosophers of a certain bent and for philosophically-inclined decision theorists (hi). The problem is the following: * Scientists have designed a machine for predicting human decisions: it scans your brain, and predicts on average with very high accuracy your choice in the following problem. * I come to you with two different boxes: * 1. In one box, I show you there is $100. * 2. The second box is a mystery box. * I offer to let you choose between taking just the one mystery box, or taking both the my
3fdf024c-5ce3-41a1-9ddd-7af9ad350cbb
trentmkelly/LessWrong-43k
LessWrong
Language models are nearly AGIs but we don't notice it because we keep shifting the bar  I’m putting my existing work on AI on Less Wrong, and editing as I go, in preparation to publishing a collection of my works on AI in a free online volume. If this content interests you, you could always follow my Substack, it's free and also under the name Philosophy Bear. Anyway, enjoy. Comments are appreciated as I will be rewriting parts of the essays before I put them out. A big thank you to user TAG who identified a major error in my previous post regarding the Chinese Room Thought experiment, which prompted its correction [in the addition that will go in the book] and a new corrections section for my Substack page. Glossary: GPT-3- a text-generating language model. PaLM-540B- a stunningly powerful question-answering language model. Great Palm- A hypothetical language model that combines the powers of GPT-3 and PaLM-540B. Probably buildable with current technology, a lot of money and a little elbow grease. Great Palm with continuous learning (GPWCL)- A hypothetical language model that combines the capacities of GPT-3 and PaLM-540B, with an important additional capacity. Most language models work over a “window” of text, functioning as short-term memory. Their long-term memory is set by their training. Continuous learning is the capacity to keep adding to long-term memory as you go, and this would allow a language model to tackle much longer texts. The argument What I’ll be doing in this short essay is a bit cheeky, but I think we’ll make a few important points, viz: 1. Goals that seem very concrete can turn out to be vulnerable to bar-shifting- shifting which we may scarcely even notice. 2. AGI is such a goal. 3. We have gotten very good, much too good, at denying the progress we have made in AGI. 4. A focus on able-bodied humanity, and the tendency to forget disabled people exist when thinking about these topics, deceives us in these matters. If I’m being a bit of a gadfly here, it’s not without a purpose. Everything I say in this article in a
98d2925d-0e6d-49d1-9e02-c8399bfd7ef0
trentmkelly/LessWrong-43k
LessWrong
AI Alignment [Incremental Progress Units] this week (10/08/23)

[edit: took out naming controversy stuff, as it was distracting from the point of the blog]

I am introducing a new rating system for each alignment breakthrough. The rating system will go from 1 star ⭐ to 5 stars ⭐⭐⭐⭐⭐. A 1 star ⭐ "breakthrough" represents incremental progress. This means that, while technically achieving a new milestone, this breakthrough was the result of known techniques and could have been easily predicted in advance by an expert in the field. An example of something I've posted in the past that should be considered 1 star ⭐ is Wustchen V3. An example of a hypothetical 1 star ⭐ breakthrough would be if RLHF on GPT-5 was found to work better than RLHF on GPT-4. A 5 star ⭐⭐⭐⭐⭐ "breakthrough" represents a discovery that solves a significant unsolved problem that was considered to be a major obstacle on at least one AI Alignment path. An example of a 5 star ⭐⭐⭐⭐⭐ breakthrough that I've posted in the past would be neuron superposition. An example of a hypothetical 5 star ⭐⭐⭐⭐⭐ breakthrough would be if someone were to develop a system that could translate a human-language description of a math problem into a formal mathematical proof. Now, without any further ado…

AI Alignment Breakthroughs this Week

This week there were breakthroughs in the areas of:

AI Evaluation
AI Agents
Mechanistic Interpretability
Explainable AI
Simulation
Making AI Do what we want
AI Art

AI Evaluation

PCA-Eval
What is it: a new benchmark for multi-modal decision making
What is new: evaluate multimodal models (like GPT-4V) by their ability to make decisions in different domains
What is it good for: Benchmarking is key for many AI safety strategies such as the Pause and RSPs
Rating: ⭐⭐

AI Agents

Adapting LLM Agents Through Communication
What is it: Improved AI agents
What is new: By fine-tuning the LLM, the agents can perform better
What is it good for: Factored Cognition, Bureaucracy of AIs
Rating: ⭐

Mechanistic Interpretability

Research
13972cbb-d5c6-4129-b006-15961de6dba4
trentmkelly/LessWrong-43k
LessWrong
Desiderata for an Adversarial Prior Based on the discussion in the comments on https://www.lesswrong.com/posts/6XsZi9aFWdds8bWsy/is-there-any-discussion-on-avoiding-being-dutch-booked-or. Epistemic status: I am not an expert in the subject matter, so not confident, seems useful to have, I could not find anything discussion online.  As I understand it, Pascal Mugging can either be an honest opportunity for a large gain in utility (e.g. a Powerball ticket?) if the mugger is honest, or an adversarial attempt to exploit the agent (e.g. a Nigerian Prince Advance Fee Scam) if not. Most people have some skills in telling the two apart, usually erring on the side of caution. However, it is not clear how to formalize this approach. Most of the discussion focuses on ignoring super-tiny probabilities, and accepting a utility loss penalty, where a proverbial $10 on a busy sidewalk gets ignored because if it were real someone would have picked it up already, a situation humans navigate quite successfully most of the time. I suspect that Pascal Mugging detection might be better approached from a different direction: instead of starting from "I know a liar when I see one" and the rule of thumb "Most too-good-to-be-true proposals are just that" and trying to formalize it for the case of "tiny probabilities of vast utilities", one could try to start with a more general logic of "adversarial detection", and then apply it to the special case of Pascal Mugging. How would one go about telling apart an indifferent interaction, where something like the Solomonoff's universal prior is appropriate, from an adversarial one, where it runs into trouble? Whatever the approach, it is likely to have the following properties: * In a non-adversarial case it reduces to the universal prior. * An attack by a "less-intelligent" adversary is severely penalized, the larger the difference in "intelligence" the harsher.  * Conversely, an attack by a "vastly more intelligent" adversary is indistinguishable from the non-adversarial cas
b7d46914-8ab1-4e17-bae3-1aa5e4e62d82
trentmkelly/LessWrong-43k
LessWrong
Acceptability Verification: A Research Agenda This Google doc^ is a halted, formerly work-in-progress writeup of Evan Hubinger’s AI alignment research agenda, authored by Evan. It dates back to around 2020, and so Evan’s views on alignment have shifted since then. Nevertheless, we thought it would be valuable to get this posted and available to everyone working in alignment! In it, Evan outlines the following alignment scheme: > We should bake transparency tools into the loss function we’re training powerful models on, grading the model on its internal cognitive processes as well as on external behavior. We start by initializing a relatively dumb but non-deceptive model. We scale up the model, selecting against any model that isn’t demonstrably acceptable to a transparency-tool-assisted overseer. > > While Evan doesn’t expect this approach to be robust against deceptively aligned models, the hope is that we can define a notion of an 'acceptability predicate' such that, if we start with a dumb aligned model and scale from there, grading on cognitive processes as well as behavior, no model on that trajectory in model space will ever become deceptive in the first place. That is, before a model can be updated to become deceptive in this training process, it hopefully first must be updated to become unacceptable and non-deceptive. We can therefore update away from all merely unacceptable models as they appear, and thereby never instantiate a deceptive model in the first place. At the time of this doc’s writing, the leading candidate for an adequate acceptability predicate was 'demonstrably myopic.' One plausible account of 'myopia' here is “return the action that your model of HCH would return, if it received your inputs.” Since writing up this agenda, some things that Evan has updated on include: 1. Understanding myopia carefully is less important relative to just improving our transparency and interpretability capabilities. 2. Scaling interpretability via training transparency will require us to go throug
d998c49a-f948-4ea5-a4ee-dcbfd8116a65
trentmkelly/LessWrong-43k
LessWrong
Computing an exact quantilal policy

Earlier we established that the quantilal policy can be computed in polynomial time to any given approximation (see "Proposition 5"). Now we show that an exact quantilal policy can be computed in polynomial time (in particular there is always a rational quantilal policy). We assume geometric time discount throughout.

Lemma

Consider ξ ∈ ΔS. Define the linear operators E: ℝ^(S×A) → ℝ^S and T: ℝ^(S×A) → ℝ^S by

E_{t,sa} := [[t = s]]

T_{t,sa} := T(t|s,a)

(Note that this T is the transpose of the T defined in "Proposition A.3" of the previous essay.) Then, ξ ∈ Im Z if and only if there is ϕ ∈ Δ(S×A) s.t.

Eϕ = ξ

(E − λT)ϕ = (1 − λ)ζ_0

Proof

(This is actually well known, but we spell out the proof to be self-contained.) Suppose that ξ ∈ Im Z. We already know that this implies that there is a stationary policy π: S →ᵏ A s.t. Z^π = ξ (we abuse notation in the obvious way): see the proofs of "Proposition 2" and "Proposition 3". Define the linear operator T^π: ℝ^S → ℝ^S by

T^π_{ts} := E_{a∼π(s)}[T(t|s,a)]

It follows that

ξ = (1 − λ) Σ_{n=0}^∞ λ^n (T^π)^n ζ_0 = (1 − λ)(1 − λT^π)^{−1} ζ_0

(1 − λT^π)ξ = (1 − λ)ζ_0

Define ϕ by

ϕ(s,a) := ξ(s)π(a|s)

We have

T^π ξ = Σ_{s∈S} E_{a∼π(s)}[T(s,a)] ξ(s) = Σ_{s∈S, a∈A} π(a|s) T(s,a) ξ(s) = Σ_{s∈S, a∈A} T(s,a) ϕ(s,a) = Tϕ

Also, obviously Eϕ = ξ. We get

(E − λT)ϕ = ξ − λT^π ξ = (1 − λT^π)ξ = (1 − λ)ζ_0

Conversely, suppose that ϕ is as above. Since Eϕ = ξ, there is π: S →ᵏ A s.t. for any s ∈ S, if ξ(s) ≠ 0 then

π(a|s) = ϕ(s,a) / ξ(s)

Again, we have

Z^π = (1 − λ)(1 − λT^π)^{−1} ζ_0

Also, for the same reason as before

(E − λT)ϕ = (1 − λT^π)ξ

By the assumption, the left hand side equals (1 − λ)ζ_0. We conclude

ξ = (1 − λ)(1 − λT^π)^{−1} ζ_0 = Z^π

Theorem

Assuming all parameters are rational like before, there is a polynomial time algorithm that computes a quantilal policy.

Proof

The algorithm starts by solving the following linear program. The indeterminates are ϕ ∈ ℝ^(S×A) and QV ∈ ℝ. The goal is maximizing QV. The constraints are

∀s ∈ S, a ∈ A: ϕ(s,a) ≥ 0

Σ_{s∈S, a∈A} ϕ(s,a) = 1

(E − λT)ϕ = (1 − λ)ζ_0

∀s ∈ S ∖ supp Z^σ, a ∈ A: ϕ(s,a) = 0

∀s ∈ supp Z^σ: QV ≤ Σ_{t∈S} R(t) Σ_{a∈A} ϕ(t,a) − η · (Σ_{a∈A} ϕ(s,a)) / Z^σ(s)

Then, the algorithm computes π: S →ᵏ A s.t. for any s ∈ S, if Σ_{b∈A} ϕ(s,b) > 0
7a4c198f-193d-4205-ae69-d932d62cd819
StampyAI/alignment-research-dataset/arxiv
Arxiv
Exploring Restart Distributions

1 Introduction
---------------

Experience replay lets off-policy reinforcement learning (RL) methods remember and reuse past experiences [Lin, [1992](#bib.bib1); Mnih et al., [2015](#bib.bib2)]. This helps circumvent the rapid forgetting of past experiences and, therefore, improves sample-efficiency. Prioritising experience can further boost efficiency by replaying important transitions more frequently, where different criteria may be considered to measure the importance of each transition. For example, the magnitude of a transition's temporal-difference (TD) error can be used as a proxy for how unexpected the transition is [van Seijen and Sutton, [2013](#bib.bib3); Schaul et al., [2016](#bib.bib4)]. Transitions can also be rated based on their corresponding episodic return [Oh et al., [2018](#bib.bib5)], a particularly useful criterion in environments with sparse rewards. On the other hand, on-policy methods cannot benefit from experience replay. As such, they are often sample-inefficient as past transitions are thrown away shortly after they are experienced, regardless of how rare or significant they may be. While replaying past experiences is not compatible with on-policy methods, creating new ones near previously-encountered states is. Given the capacity to reset the state with those corresponding to the agent's past observations (e.g. in a standard simulator), the latter can be made possible by maintaining a memory of the agent's previously-encountered states and using it to sample initial states. Effectively, this modifies the perceived distribution of initial states by combining the environment's *initial-state distribution* with a proposal *restart distribution* over the buffered states, where different criteria could be considered to *prioritise* the latter. We refer to this approach, generically, as *exploring restart distributions*. (This choice was inspired in part by the theoretical assumption of *exploring starts* [Sutton and Barto, [2018](#bib.bib6)], with which our approach shares a subtle connection.) By drawing inspiration from well-known ideas in the context of experience replay, we instantiate three variants of our approach. Specifically, our *uniform restart* resembles the uniform replay of [Mnih et al., [2015](#bib.bib2)], our *prioritised restart* resembles the prioritised replay of [Schaul et al., [2016](#bib.bib4)], and our *episodic restart* resembles the episodic replay of [Oh et al., [2018](#bib.bib5)]. We combine our variants with a canonical policy-gradient algorithm, Proximal Policy Optimisation (PPO) [Schulman et al., [2017](#bib.bib7)], which, due to its on-policy nature, cannot straightforwardly use experience replay and, as such, is interesting for our study. We test the resulting agents on two dense-reward and two sparse-reward environments, in each case considering a medium-difficulty and a hard exploration problem. We see improvements from our approach in all cases, with the most remarkable gains in the hard exploration problems. Broadly, we consider *simulator-based training* with *simulator-free execution*, a problem paradigm in which we can utilise the opportunity to adjust certain environment variables during training but not during execution (e.g. [Ciosek and Whiteson, [2017](#bib.bib8)]). Our approach improves this paradigm by utilising the reset capacity in simulated environments during training. We emphasise that we do not utilise this capacity during evaluations (i.e.
policies are evaluated with respect to the original performance metric). (As such, our work differs from [Rajeswaran et al., [2017](#bib.bib9)] which examines the impact of more diverse initial-state distributions in the context of "robustness".)

2 Related work
---------------

Kakade and Langford [[2002](#bib.bib10)] considered the notion of utilising the reset capacity and showed, under certain conditions, that using a proposal initial-state distribution that is more uniform over the state space than the original one improves learning performance with respect to the original performance metric. More recently, Agarwal et al. [[2019](#bib.bib11)] formalised the importance of how a favourable initial-state distribution provides a means to circumvent worst-case exploration issues in the context of policy-gradient methods [Sutton et al., [1999](#bib.bib12)]. Nonetheless, these works do not provide a practical procedure for creating such distributions when the state space is unknown a priori. To improve model-free learning, Popov et al. [[2017](#bib.bib13)] modified the initial-state distribution to be uniform over the states from provided expert demonstrations. Salimans and Chen [[2018](#bib.bib14)] reported high performance on Montezuma's Revenge Atari 2600 game by restarting a standard deep RL agent from a set of designated initial states, manually extracted from a single expert demonstration. These approaches resemble our episodic restart variant with one major difference: in our approach, the agent progressively updates its best buffered episodes in order to sample initial states from them and, as such, it does not rely on expert demonstrations or manually designated initial states. Ecoffet et al. [[2019](#bib.bib15)] proposed a related method, called Go-Explore, that achieved the state-of-the-art on the hardest exploration games in the Atari 2600. Go-Explore's main principles are to maintain a memory of previously-encountered states, reset the environment to the "promising" ones to explore from, and repeat this process until a complete solution is found. Once found, a policy is trained by imitation learning on the solution trajectory. This work provides strong supporting evidence for the utility of exploration through restarting from previously-encountered states. However, Go-Explore does not use RL to learn a policy that solves the problem. Furthermore, the criterion used to identify promising states is rather domain-specific. Using our approach, one could realise an RL counterpart for Go-Explore by using the same criterion to prioritise initial states. Florensa et al. [[2017](#bib.bib16)] presented a method for adaptive generation of curricula in the form of initial-state distributions that start close to the goal state and gradually move away with the agent's progress. This method is limited to goal-oriented problems with clear goal states and further assumes a priori knowledge of such states. While our approach is not limited to such environments, a similar behaviour to curriculum generation in this way could emerge with our approach using an appropriate priority measure, whereby a single encounter of a goal state biases the restart distribution towards it. Restarting from previously-encountered states to sample more transitions reduces the variance of the gradient estimator in policy-gradient methods. The vine procedure of [Schulman et al., [2015](#bib.bib17)] utilises the reset capacity in simulated environments for this purpose.
This method can be realised as a special case of our approach.

3 Background
-------------

We consider the RL framework [Szepesvári, [2010](#bib.bib18); Sutton and Barto, [2018](#bib.bib6)] in which the interaction of an agent and an environment is modeled as a Markov decision process (MDP) [Puterman, [1994](#bib.bib19)] comprising of a state space $\mathcal{S}$, an action space $\mathcal{A}$, an initial-state distribution $p_1(s_1)=Pr\{S_1{=}s_1\}$, a transition distribution $p(s'|s,a)=Pr\{S_{t+1}{=}s'|S_t{=}s,A_t{=}a\}$, and a reward function $r(s,a,s')=\mathbb{E}[R_t|S_t{=}s,A_t{=}a,S_{t+1}{=}s']$, for all $s,s'\in\mathcal{S}$, $a\in\mathcal{A}$, $s_1\in\mathcal{S}_1\subset\mathcal{S}$. The decision-making process of an agent is characterised by a policy $\pi(a|s)=Pr\{A_t{=}a|S_t{=}s\}$. This policy can be approximated by a parameterised function $\pi(a|s,\boldsymbol{\theta})$ (e.g. a neural network), where $\boldsymbol{\theta}\in\mathbb{R}^d$ is the vector of policy parameters and, typically, $d\ll|\mathcal{S}|$.
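As a concrete illustration of such a parameterised policy $\pi(a|s,\boldsymbol{\theta})$, the short sketch below implements a tiny softmax policy network; it is our own example rather than anything specified by the paper, and the layer sizes and names are arbitrary choices.

```python
# Illustrative sketch only: a tiny parameterised policy pi(a|s, theta) as a
# neural network with a softmax distribution over discrete actions.
# Layer sizes and names are our own choices, not prescribed by the paper.
import torch
import torch.nn as nn

class SoftmaxPolicy(nn.Module):
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.distributions.Categorical:
        logits = self.net(state)  # unnormalised action preferences
        return torch.distributions.Categorical(logits=logits)

# Sampling an action a ~ pi(.|s, theta):
policy = SoftmaxPolicy(state_dim=4, num_actions=2)
action = policy(torch.zeros(4)).sample()
```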
The agent uses its policy to interact with the environment to sample a trajectory $S_1,A_1,R_1,S_2,\dots,S_T,A_T,R_T,S_{T+1}$ (where $T$ is the trajectory's horizon which is, in general, a random variable). In this paper, we assume that $T$ is finite and that terminations may occur due to terminal states in episodic tasks (i.e. *concrete episodes*) or due to an arbitrary condition, such as timeouts, in continuing or episodic tasks (i.e. *partial episodes*). The majority of our discussions are considered under the more generic assumption of learning from partial episodes and, as such, are relevant only to bootstrapping methods [Sutton and Barto, [2018](#bib.bib6)] (e.g. TD methods such as Q-learning, Sarsa, and actor-critic methods). Nevertheless, the main proposition of this paper applies also to Monte-Carlo methods, in which case the episodes are strictly concrete. We assume access to the capacity to reset the state with those corresponding to the agent's past observations. We remark that this assumption is weaker than having explicit access to the environment's model. Furthermore, we do not assume a priori knowledge of the (valid) state space. In fact, such knowledge is rarely accessible in practice, which is why we build a memory of states on-the-fly.

### 3.1 Impact of the initial-state distribution on the learning objective

In this section, we consider the question "how does modifying the initial-state distribution affect the learning objective and, ultimately, the learned policy with respect to the original performance metric?". We will consider this question separately for tabular and approximate solution methods. In tabular methods, the learned values at each state are decoupled from one another (i.e. an update at one state affects no other). Let us now consider the control problem in which the agent's goal is to maximise its value from the environment's designated set of initial states. As per the *principle of optimality* [Sutton and Barto, [2018](#bib.bib6)], a policy achieves the optimal value from a state $s$, if and only if, for any state $s'$ reachable from $s$ it achieves the optimal value. Therefore, by letting the agent also start in states outside the environment's designated set of initial states, we can better optimise for the designated set by better optimising for the states that are reachable from the designated set. On the contrary, with approximation, an update at one state affects many others as generally we have far more states than parameters. Therefore, making one state's estimate more accurate often means making others' less accurate.
Let us now consider a common objective function for approximate prediction of the action values for a given policy:

$$L(\mathbf{w}) \doteq \sum_{s \in \mathcal{S}} \rho(s) \sum_{a \in \mathcal{A}} \pi(a \mid s) \Big( q_{\pi}(s,a) - \hat{q}_{\pi}(s,a \mid \mathbf{w}) \Big)^2 . \qquad (1)$$

This objective function is weighted according to the state distribution $\rho(s)$, which depends on the policy $\pi(a \mid s)$ and, in episodic tasks, the initial-state distribution $p_1(s_1)$. In effect, this promotes the approximation of the action values to be more accurate at states that have a higher visitation density. As such, changing the initial-state distribution modifies the learning objective for approximate prediction. The same rationale holds in approximate control, e.g. when using policy-gradient methods. One caveat in the control case is that a policy that maximises the modified learning objective within some restricted class of policies may perform poorly with respect to the original performance metric. We can reduce this unsought learning bias by using a distribution of initial states whose support contains and spans beyond that of the environment's initial-state distribution (see Sec. 4), as well as using a parameterisation that affords the problem's underlying complexity. While the latter cannot be guaranteed in general, it seems often admissible in deep RL (especially considering the relative simplicity of many problems of interest with respect to the commonly-used, high-capacity neural networks [Rajeswaran et al., 2017]). Nonetheless, there are many problems where learning any reasonable policy is challenging, not to mention learning an optimal one. In such cases, it may be appropriate to accept the cost of potentially introducing a learning bias in order to facilitate learning.

4 Exploring restart distributions
---------------------------------

We consider a generic approach in which the agent maintains, what we call, a *restart memory* of its past experiences along with their corresponding (true) states, and uses this restart memory to sample initial states for new episodes. This in turn allows the agent to gradually increase the diversity of the states in which it can restart.
Formally, for the environment's initial-state space $\mathcal{S}_1$ and the agent's set of buffered states $\mathcal{S}_{\mathcal{B}}$, our approach enables sampling initial states from $\mathcal{S}_1 \cup \mathcal{S}_{\mathcal{B}}$ (which contains and spans beyond $\mathcal{S}_1$). We achieve this by sampling from both the environment's initial-state distribution $p_1$ (with support $\mathcal{S}_1$) and a restart distribution $p_{\mathcal{B}}$ (with support $\mathcal{S}_{\mathcal{B}}$). This is equivalent to sampling from a new distribution $\mu_1$, which mixes $p_1$ and $p_{\mathcal{B}}$, with support $\mathcal{S}_1 \cup \mathcal{S}_{\mathcal{B}}$. In this paper, we control the extent of contributions from each of the distributions $p_1$ and $p_{\mathcal{B}}$ by maintaining a fixed *ratio* for the number of transitions that stem from *augmented initial states* (i.e. initial states that are sampled from a restart memory) versus the total number of transitions.

### 4.1 Uniform restart

Our uniform restart variant generally follows the same mechanism as the uniform replay of [Mnih et al., 2015]. The differences are that we store states as opposed to observations and that we ultimately care about sampling states as opposed to transitions. In other words, we store recent states (without selection) in a restart memory $\mathcal{B}$ and sample augmented initial states uniformly (i.e. $p_{\mathcal{B}}$ is a uniform distribution over the buffered states $\mathcal{S}_{\mathcal{B}}$). Having the capacity to reset the state naturally implies we can early-terminate episodes. Doing so is compatible with bootstrapping methods, which can bootstrap at the end of partial episodes [Pardo et al., 2018].
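As a concrete illustration of this variant, here is a minimal sketch (ours, not the authors' code): a bounded FIFO buffer of recent true states plus an initial-state sampler that restarts from the buffer for a fixed fraction of episodes and from the environment's own initial-state distribution otherwise. For simplicity the sketch applies the ratio per episode rather than per transition, and the `env.restore_state(...)` call is a hypothetical hook standing in for the simulator's reset capacity assumed in Sec. 3; it is not a standard Gym API.

```python
import random
from collections import deque

class UniformRestartMemory:
    """Keeps the most recent (true) environment states and samples them uniformly."""

    def __init__(self, capacity: int = 20000):
        self.states = deque(maxlen=capacity)    # oldest states are evicted first

    def add(self, state):
        self.states.append(state)

    def sample(self):
        return random.choice(self.states)

def start_new_episode(env, memory, restart_ratio: float = 0.1,
                      t_aug: int = 10, t_env: int = 1000):
    """Begin an episode from either the restart memory or the environment's own p_1.

    Returns the first observation and the time limit to apply to this episode.
    `env.restore_state(s)` is a hypothetical hook for the simulator's reset capacity.
    """
    if memory.states and random.random() < restart_ratio:
        observation = env.restore_state(memory.sample())
        return observation, t_aug               # shorter limit for augmented episodes
    return env.reset(), t_env                   # environment's initial-state distribution
```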
By convention, we choose to apply a time limit $T_{\text{aug}}$ to interactions that stem from augmented initial states, with $T_{\text{aug}} \leq T_{\text{env}}$, where $T_{\text{env}}$ is the environment's time limit (if any).

### 4.2 Prioritised restart

Our prioritised restart variant uses a similar mechanism as the *proportional* prioritised replay of [Schaul et al., 2016] but for prioritising states rather than transitions. As such, we use the state-value TD error (as opposed to the state-action form used in [Schaul et al., 2016]):

$$\delta_i \doteq r_i + \gamma \, v(s'_i) - v(s_i) , \qquad (2)$$

where $i$ is the index of state $s_i$ in the restart memory. We calculate the probability of sampling state $s_i$ from the restart memory via

$$p_{\mathcal{B}}(s_i) = \frac{p_i^{\alpha}}{\sum_k p_k^{\alpha}} , \qquad (3)$$

where $p_i \doteq |\delta_i| + \varepsilon$ is the priority of state $s_i$ (with $\varepsilon$ being a small positive constant to ensure non-zero probabilities for all buffered states) and the exponent $\alpha$ determines how much prioritisation is used.

It is noteworthy that prioritising initial states does not introduce a learning bias in the way that prioritisation in the context of replay does. Biasing the replay frequency directly alters the perceived state transition and reward dynamics in stochastic environments. However, this does not apply to our approach as, regardless of how an initial state is sampled, transitions are always sampled from the environment, not replayed from a replay memory. Therefore, we do not need importance sampling corrections as used in [Schaul et al., 2016].
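A corresponding sketch of the prioritised variant (again our illustration, using plain lists instead of the sum-tree structure one would normally use for efficiency): priorities are $|\delta_i| + \varepsilon$ from Eq. (2), and sampling follows the proportional rule of Eq. (3).

```python
import numpy as np

class PrioritisedRestartMemory:
    """Buffers states with priorities (|TD error| + eps) and samples them via Eq. (3)."""

    def __init__(self, capacity: int = 20000, alpha: float = 0.4, eps: float = 1e-6):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.states, self.priorities = [], []

    def add(self, state, reward, value_s, value_next, gamma: float = 0.99):
        td_error = reward + gamma * value_next - value_s     # Eq. (2)
        if len(self.states) >= self.capacity:                # evict the oldest entry
            self.states.pop(0)
            self.priorities.pop(0)
        self.states.append(state)
        self.priorities.append(abs(td_error) + self.eps)

    def sample(self):
        scaled = np.asarray(self.priorities) ** self.alpha
        probabilities = scaled / scaled.sum()                # Eq. (3)
        index = np.random.choice(len(self.states), p=probabilities)
        return self.states[index]
```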
Lastly, similar to our uniform restart variant, we apply a time limit $T_{\text{aug}}$ to interactions that stem from augmented initial states.

### 4.3 Episodic restart

Episodic return is another criterion for measuring priorities, one that is particularly useful in environments with sparse rewards [Oh et al., 2018]. We build our episodic restart variant to enable using this criterion for prioritisation, leading to a number of differences with respect to our previous variants. Most fundamentally, states are now buffered at the end of episodes rather than on each transition, and only episodes that obtain a higher undiscounted return in comparison to those already in the restart memory are buffered. In other words, the agent maintains its most rewarding episodes, where the restart-memory size determines the maximum number of episodes in the restart memory at any given time. The episodes are then prioritised according to their corresponding undiscounted returns, with uniform sampling of states from any selected episode. While it is possible to also prioritise the states in a selected episode (e.g. using TD error), we omit that in this work to simplify our experiments.

Using our episodic restart variant in the manner described above could significantly hurt the agent's learning performance by biasing its experiences towards specific parts of the state space. For example, consider a multi-goal problem in which the goal is different in every episode and, as such, the goal is part of the state (i.e. assuming *Markov states* [Sutton and Barto, 2018]). Encountering an episode that has an easy goal could result in obtaining a high return in comparison to other episodes. The states from such an episode receive a high priority for being sampled as initial states in new episodes. This leads to frequently experiencing episodes of the same easy goal, thereby quickly filling the restart memory with episodes of a single goal. To address this, we use a nested storage mechanism in which an episode that stems from an environment's initial state is stored as a *parent episode* (i.e. a standard episode) and an episode that stems from an augmented initial state is stored as a *sub-episode*, where each sub-episode is linked to the parent episode from which its augmented initial state originated. This avoids filling the restart memory with episodes of similar nature as, now, only a limited number of sub-episodes can be stored under each parent episode, with the nature of each parent episode being determined by the environment's initial-state distribution. We will now discuss what is needed and how to sample an augmented initial state from this restart memory.

To maintain episodic returns comparable in environments with time limits, all sub-episodes need to be terminated after an appropriate number of steps, $T_{\text{aug}}$.
Assuming (as before) a fixed environment time limit $T_{\text{env}}$, the sub-episodic time limit $T_{\text{aug}}$ can be determined via

$$T_{\text{aug}} = T_{\text{env}} - t \quad \text{with} \quad t \leq T_{\text{env}} - 1 , \qquad (4)$$

where $t$ is the time step of the augmented initial state (i.e. the number of steps in the shortest path from the augmented initial state to the initial state of its corresponding parent episode). In this way, we can calculate an augmented episodic return for any sub-episode as the sum of the sub-episode's rewards and the rewards along the shortest path from the parent episode's initial state to the sub-episode's augmented initial state.

To calculate the priority of a *category of episodes* (i.e. a parent episode together with its sub-episodes), we use the maximum undiscounted return across the category's parent episode and all its sub-episodes. Let us denote this maximum undiscounted return per category by $\bar{G}_i$, with $i$ being the index of the category in the restart memory. We calculate the priority of the $i$th category as

$$p_i \doteq \bar{G}_i - \delta_i + \varepsilon , \qquad (5)$$

where

$$\delta_i \doteq \min(0, \min_i \bar{G}_i) \qquad (6)$$

is used as an offset to enable handling negative returns and $\varepsilon$ is a small positive constant that ensures non-zero probabilities for all buffered categories. We calculate the probability of sampling the $i$th category from the restart memory by using the priorities of Eq. (5) in Eq. (3). We perform this once to sample a category of episodes, and again to sample an episode from the selected category (each sub-episode in a category is augmented with as many as $T_{\text{aug}}$ states which link its augmented initial state to the initial state of its corresponding parent episode). We then sample an augmented initial state uniformly from the selected episode.
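The two-level sampling can be sketched as follows (our reconstruction from the description above, not the authors' implementation; in particular, prioritising episodes within a category by their own returns is our reading of the text and should be treated as an assumption):

```python
import random
import numpy as np

def proportional_sample(priorities, alpha: float = 1.0) -> int:
    """Sample an index with probability proportional to priority**alpha (Eq. (3))."""
    scaled = np.asarray(priorities, dtype=float) ** alpha
    return int(np.random.choice(len(scaled), p=scaled / scaled.sum()))

def sample_augmented_initial_state(categories, eps: float = 1e-6):
    """categories: one dict per parent episode, e.g.
         {"episodes": [{"return": float, "states": [s0, s1, ...]}, ...]}
       where the first entry is the parent episode and the rest are its sub-episodes."""
    # First level: pick a category by its best undiscounted return (G_bar_i).
    best_returns = [max(ep["return"] for ep in c["episodes"]) for c in categories]
    offset = min(0.0, min(best_returns))                      # Eq. (6)
    priorities = [g - offset + eps for g in best_returns]     # Eq. (5)
    category = categories[proportional_sample(priorities)]

    # Second level: pick an episode within the category (here, again by return).
    ep_returns = [ep["return"] for ep in category["episodes"]]
    ep_offset = min(0.0, min(ep_returns))
    ep_priorities = [g - ep_offset + eps for g in ep_returns]
    episode = category["episodes"][proportional_sample(ep_priorities)]

    # Finally: a uniformly random stored state from the selected episode.
    return random.choice(episode["states"])
```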
5 Experiments
-------------

We evaluate our approach on several continuous control environments, simulated using the MuJoCo physics engine [Todorov et al., 2012]. As per the nature of our variants, we present performance evaluations of our uniform and prioritised restart variants in dense-reward environments (Sec. 5.1) and of our episodic restart variant in sparse-reward environments (Sec. 5.2). In each case, we consider a medium-difficulty and a hard exploration problem. We focus our experiments on on-policy RL, which cannot straightforwardly replay past experiences and, thus, will benefit more significantly from our approach. Specifically, we evaluate our approach using PPO, which is a canonical on-policy method for continuous control. To avoid the significant cost of systematic hyperparameter search, throughout our experiments we fix the ratio hyperparameter of our approach to 0.1 (i.e. 10% of the total interactions stem from augmented initial states) and, generally, use the PPO hyperparameters as originally reported in [Schulman et al., 2017].

### 5.1 Dense-reward environments

To evaluate the performance of our uniform and prioritised restart variants, we consider two dense-reward environments from the OpenAI Gym [Brockman et al., 2016], namely HalfCheetah (medium-difficulty exploration) and Humanoid (hard exploration). By default, these environments apply a time limit of $T_{\text{env}} = 1000$ to each episode. We apply this default time limit to all interactions that stem from the environment's initial states. For any interactions that stem from an augmented initial state, we apply the much shorter time limit of $T_{\text{aug}} = 10$. For both of our variants, we set the restart-memory size to 20000. For our prioritised restart variant, we set $\alpha = 0.4$ to induce a mild priority (see Eq. (3)).

#### 5.1.1 HalfCheetah

The goal in the HalfCheetah environment is to make a planar biped run as fast as possible. Given the medium dimensionality of its observation and action spaces, this environment is not very challenging for advanced RL agents. Fig. 1 (left) shows the learning curves for this experiment, created by evaluating each agent periodically during training using the environment's initial-state distribution. The learning curves show average undiscounted returns (each averaged over 5 seeds). Our results show mild improvements for both of our variants, with a slight advantage for our prioritised one.

#### 5.1.2 Humanoid

The goal in the Humanoid environment is to make a three-dimensional biped walk forward as fast as possible, without falling over. While a dense-reward environment, the high dimensionality of its observation and action spaces makes it rather challenging.
Fig. 1 (right) shows the learning curves for this experiment, created following the same procedure as in our HalfCheetah experiment. Our results show significant improvements for the agents that use our uniform or prioritised restart variants. However, contrary to our HalfCheetah results, here our uniform restart variant outperforms our prioritised one. We conducted an informal study to examine whether one of these variants would have a general advantage but did not find the results to be conclusive in this regard.

Figure 1: Average test performance curves of our uniform and prioritised restart variants as applied to and against PPO on two dense-reward environments. Shaded areas are standard error.

### 5.2 Sparse-reward environments

To evaluate the performance of our episodic restart variant, we consider two sparse-reward environments from the OpenAI Gym, namely FetchReach (medium-difficulty exploration) and Thrower (hard exploration).

#### 5.2.1 FetchReach

The FetchReach environment was proposed by Plappert et al. [2018] to assess goal-conditional policy learning methods in a problem of practical interest. In each episode, the agent's task is to control its four joints in order to move the gripper to a goal position, where the goal is randomly sampled at the start of each episode. The agent's observations consist of the robot's current state as well as a three-dimensional goal position. This environment has no terminal states but enforces a time limit of $T_{\text{env}} = 50$. We set the restart-memory size to 100 parent episodes and 10 sub-episodes. The agent receives a step-wise penalty of -1 whenever its gripper is not at the goal position, and 0 otherwise. Fig. 2 (left) shows the learning curves for this experiment, created by evaluating each agent periodically during training using the environment's initial-state distribution. The learning curves show average success rates (each averaged over 10 seeds). Our results show mild improvements for our episodic restart variant. This is consistent with our observation in the HalfCheetah experiment, suggesting that performance improvements can be expected from our approach even in environments which do not pose a significant exploration challenge.

Figure 2: Average test success-rate curves of our episodic restart variant as applied to and against PPO on two sparse-reward environments. Shaded areas are standard error.

#### 5.2.2 Thrower

Our Thrower environment is a variant of the one provided in the OpenAI Gym, modified to feature significantly sparse rewards. Our modifications make this problem particularly challenging, as the agent receives a positive reward of 1 only for successfully throwing the ball in the goal region, and 0 otherwise. The agent additionally incurs a small torque penalty.
Each episode terminates once the ball impacts the goal or the table, or upon reaching the time limit of $T_{\text{env}} = 100$. We set the restart-memory size to 50 parent episodes and 10 sub-episodes. Due to the complexity of this environment, the probability of the ball impacting the goal is very low. Hence, to ensure that each independent run encounters a positive training signal early on in its training process, we only considered runs that experienced an instance of the ball impacting the goal in the first 50000 interaction steps (roughly 10% of our runs achieved this during the specified window). Moreover, we use an entropy coefficient of 0.02 to further encourage exploration. Fig. 2 (right) shows the learning curves for this experiment, created following the same procedure as in our FetchReach experiment (each averaged over 42 seeds). Our results show a clear advantage for using our episodic restart variant. By inspecting the performances of individual runs, we found that our episodic restart variant enabled the agent to learn consistently across all 42 independent runs (with each run achieving just under 50% success rate per evaluation trial), whereas the baseline agent completely failed to learn any useful policy in 80% of the runs (with the remaining runs achieving just under 50% success rate per evaluation trial, similar to our variant). In other words, using our episodic restart variant led to more robust learning (with respect to random initialisation) by enabling a way for the agent to utilise its extremely rare positive experiences.

6 Conclusion
------------

We considered the generic approach of maintaining a restart memory of the agent's past experiences along with their corresponding (true) states, and using this restart memory to sample initial states for new episodes. This approach utilises the reset capacity in simulated environments during training in order to help with exploration. We instantiated three variants of our approach by drawing inspiration from well-known ideas in the context of experience replay. We tested these variants on two dense-reward and two sparse-reward environments. In each case, we considered a medium-difficulty and a hard exploration problem. We showed improvements from our approach in all cases, with the most remarkable gains in the hard exploration problems.

#### Acknowledgements

AT acknowledges financial support from the UK Engineering and Physical Sciences Research Council (EPSRC DTP). VL and PK acknowledge financial support from the Samsung Advanced Institute of Technology (SAIT GRO). All authors acknowledge computational resources from Microsoft (Azure for Research Award).
8df9b6ef-d0de-4b21-9c13-e0ab9f149228
trentmkelly/LessWrong-43k
LessWrong
Less Wrong automated systems are inadvertently Censoring me Just a short post to highlight an issue with debate on LW; I have recently been involved with some interest in the debate on covid-19 origins on here. User viking_math posted a response which I was keen to respond to, but it is not possible for me to respond to that debate (or any) because the LW site has rate-limited me to one comment per 24 hours because my recent comments are on -5 karma or less.  So, I feel that I should highlight that one side of the debate (my side) is simply not going to be here. I can't prosecute a debate like this.  This is funnily enough an example of brute-force manufactured consensus - there will be a debate, people will make points on their side and the side I am arguing for will be missing, so observers will conclude that there are no valid counterarguments rather than that there are, but they were censored.  I think this is actually quite a good model of how the world has reached the wrong conclusion about various things (which may include covid-19 origins, assuming that covid-19 was actually a lab leak which is not certain). This is perhaps even more interesting than whether covid-19 came from a lab or not - we already knew before 2019 that bioerror was a serious risk. But I feel that we underestimate just how powerful multiple synergistic brute-force consensus mechanisms are at generating an information cascade into the incorrect conclusion.  I'm sure these automated systems were constructed with good intentions, but they do constitute a type of information cascade mechanism - people choose to downvote, so you cannot reply, so it looks like you have no arguments, so people choose to downvote more, etc.   
81d5467e-f0ce-4d6d-b402-d362a6a47fa6
trentmkelly/LessWrong-43k
LessWrong
Learning AI if you suck at math
83721c8b-d7b6-4e6a-a111-cc601e8fd7c6
trentmkelly/LessWrong-43k
LessWrong
Omicron: My Current Model A year and a half ago, I wrote a post called Covid-19: My Current Model. Since then things have often changed, and we have learned a lot. It seems like high time for a new post of this type. Note that this post mostly does not justify and explain its statements. I document my thinking, sources and analysis extensively elsewhere, little of this should be new. This post combines the basic principles from my original post, which mostly still stand, with my core model for Omicron. I’ll summarize and update the first post, then share my current principles for Omicron and how to deal with and think about it. There’s a lot of different things going on, so this will likely be incomplete, but hopefully it will prove useful.

The personally useful executive summary version first.

1. Omicron has already taken over, most cases are being missed, crunch time is now. Crunch time will likely last 1-2 months.
2. First two shots don’t protect against infection, boosters do somewhat (60%?).
3. Vaccination and natural infection protect against severe disease, hospitalization and death (best guess ~80% reduction in death for double vaccination, 95%+ reduction in death for boosters but too soon to know).
4. Tests work, but when delayed are mostly useless for preventing infection especially when delayed, as Omicron can spread within 1-2 days after exposure. Rapid tests mostly test for infectiousness, not being positive.
5. Omicron probably milder than Delta (~50%) so baseline IFR likely ~0.3% unless hospitals overload, lower for vaccinated or reinfected.
6. Being young and healthy is robust protection against severe disease and death, being not that means a lot more risk. Long Covid risk small but real for all age groups, vaccination likely helps a lot.
7. Medical system is under strain, could be overwhelmed soon, should be better again in a few months at most if it gets bad. Delaying infection has value but stopping it fully is likely not worth the cost. If you care about real
072fdfd6-3cde-4ae8-aff2-fef708b805e4
trentmkelly/LessWrong-43k
LessWrong
Formalizing «Boundaries» with Markov blankets How could «boundaries» be formally specified? Markov blankets seem to be one fitting abstraction.  [The post is largely a conceptual distillation of Andrew Critch’s Part 3a: Defining boundaries as directed Markov blankets.] Explaining Markov blankets By the end of this section, I want you to understand the following diagram (Pearlian causal diagram): Also: I will assume a basic familiarity with Markov chains in this post. First, I want you to imagine a simple Markov chain that represents the fact that a human influences itself over time: Second, I want you to imagine a Markov chain that represents the fact that the environment[1] influences itself over time: Okay. Now, notice that in between the human and its environment there’s some kind of boundary. For example, their skin (a physical boundary) and their interpretation/cognition (an informational boundary). If this were not a human but instead a bacterium, then the boundary I mean would (mostly) be the bacterium’s cell membrane. Third, imagine a Markov chain that represents that boundary influencing itself over time: Okay, so we have these three Markov chains running in parallel: But they also influence each other, so let’s build that into the model, too.  How should they be connected? Well, how does the environment affect a human?  Ok, so I want you to notice that when an environment affects a human, it doesn’t influence them directly, but instead it influences their skin or their cognition (their boundary), and then their boundary influences them.  For example, I shine light in your eyes (part of your environment), it activates your eyes (part of your boundary), and your eyes send information to your brain (part of your insides). Which is to say, this is what does not happen: (This is called “infiltration”.) The environment does not directly influence the human.  Instead, the environment influences the boundary which influences them, which looks like this: The environment influenc
f4c5450e-864f-4b6e-9e23-681344a95848
trentmkelly/LessWrong-43k
LessWrong
Non-myopia stories Written under the supervision of Lionel Levine. Thanks to Owain Evans, Aidan O’Gara, Max Kaufmann, and Johannes Treutlein for comments.

This post is a synthesis of arguments made by other people. It provides a collection of answers to the question, "Why would an AI become non-myopic?" In this post I’ll describe a model as myopic if it cares only about what happens in the current training episode.[1] This form of myopia is called episodic myopia. Typically, we expect models to be myopic because the training process does not reward the AI for outcomes outside of its training episode. Non-myopia is interesting because it indicates a flaw in training – somehow our AI has started to care about something we did not design it to care about.

One reason to care about non-myopia is that it can cause a system to manipulate its own training process. If an ML system wants to affect what happens after its gradient update, it can do so through the gradient update itself. For instance, an AI might become deceptively aligned, behaving as aligned as possible in order to minimize how much it is changed by stochastic gradient descent (SGD). Or an AI could engage in exploration hacking, avoiding certain behaviors that it does not want to engage in because they will be rewarded and subsequently reinforced. Additionally, non-myopic AI systems could collude in adversarial setups like AI safety via debate. If debates between AI systems are iterated, they are analogous to a prisoner's dilemma. If systems are non-myopic they could cooperate.

This post will outline six different routes to non-myopia:

1. Simulating other agents. Models could simulate humans or other non-myopic agents and adopt their non-myopia.
2. Inductive bias toward long-term goals. Inductive biases like simplicity might favor non-myopic goals.
3. Meta-learning. A meta-learning loop can select for non-myopic agents.
4. (Acausal) trade. An otherwise myopic model might behave non-myopically by trading with other AI mo
18374c0e-4762-4219-8627-796968164f61
trentmkelly/LessWrong-43k
LessWrong
Can chess be a game of luck? Gil Kalai, a well known mathematician, has this to say on the topic of chess and luck: http://gilkalai.wordpress.com/2009/07/05/chess-can-be-a-game-of-luck/ I didn't follow his argument at all, but it seems like something other LW posters may understand, so I decided to post it here. Do comment on his arguments if you agree or disagree with him.
d61e5631-bb93-47ad-9793-f2ff7dfc60d0
trentmkelly/LessWrong-43k
LessWrong
Pausing AI is Positive Expected Value The PauseAI (⏸️) movement often gets this pushback:

> “You're not factoring in all the benefits of good AI!”
>
> “Stopping AI progress is also a doom scenario!”

To which I reply: If you agree P(doom) from building superintelligent AI before knowing how to align or control it is 5%+, try doing the basic expected-value calculation; you'll see why your objection is misguided. First, we need to estimate a few key probabilities and values. These can vary by many orders of magnitude. I'll pick values that AI optimists hopefully agree are fair:

Probability that AI goes right if capabilities scale to superintelligence by 2034: 50%
This is an immediate "fast takeoff" scenario where state-of-the-art AI remains near-inscrutable, yet within a decade it becomes vastly more intelligent than humans on every dimension. I'd personally give this scenario a much lower probability than 50% of going right for humanity, but I'm trying to be generous to AI optimists.

Probability that AI goes right if we delay superintelligence to 2100: 70%
An important premise of PauseAI is that if we can give ourselves a few extra years or decades to thoroughly research the fundamental principles of how to align AI — how to robustly specify preferences, how to capture the delicate structure of human values as self-consistent preferences, etc — then we can significantly increase the probability that superintelligent AI goes well. If you agree that more time for safety research helps safety catch up to capabilities, you can take whatever probability you gave to superintelligent AI going right in 2034 and add 20% (or more) to the probability that it goes right in 2100.

Value of baseline future, where AI never gets beyond human intelligence
Let's define this as our baseline $0 scenario, because it's how normies who've never even heard of superintelligent AI currently imagine the future. We'll define the value of other scenarios in relation to the value of this scenario. If we never let ourselves
90b4938e-a30d-449e-9d40-ec5363d00fe9
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"I spent the week prepping for finals. One is a year-long cumulative closed-book chemistry exam that I haven't had much time to practice for. I was worried about memorizing a few things:Periodic trends and exceptionsThe form and application of approximately 100 workhorse equations and various forms of measurement (molarity vs. molality vs. mole fraction).Equations that get used rarely in homework or on exercises, but might be used as "gotchas" on the test.Some concepts that I found either confusing, or so simple that I didn't bother to remember them the first time.My anxiety wasn't just my ability to recall these ideas when prompted:"What's the two-point form of the Clausius-Clapeyron Equation?"ln(P2 / P1) = - Δ Hvap/R * (1/T2 - 1/T1)Nor was I unable to perform the calculations.My real concern was that I had spent the year treating my chemistry textbook like a reference manual, a repository for concepts and equations that I could look up when needed. I just memorized the few bits I'd need on any given quiz. Looking back at 1,000 pages of chemistry, I foresaw myself reviewing chapter 5 for a couple hours, but forgetting that review by the time I got to chapter 19.The sheer volume of work that seemed to be involved in memorizing a textbook seemed unreasonable. I hate using Anki, and I spend far too much time in front of screens as it is.So I decided to try something different - experimenting with the memory palace technique.I perceive myself as having a poor visual imagination, but I've been trying to practice improving it lately, with some success. Gwern points to expert opinion that visual thinking ability might be second only to IQ in terms of intellectual importance. My experience is that when I'm using psychedelics, or deliberately practicing my visualization abilities, I do improve far beyond my perceived abilities. We're stuck with our IQ, but if it's possible to improve our visual thinking skills through practice in adulthood, that's important.I want to describe my attempts and the outcome.First RoomI tried this both with a single calculus textbook chapter, and my entire chemistry textbook. The results were similar but different. I'm going to focus on the chemistry palace here.I close my eyes and allow myself to picture nothing, or whatever random nonsense comes to mind. No attempt to control.Then I invite the concept of a room into mind. I don't picture it clearly. There's a vague sense, though, of imagining a space of some kind. I can vaguely see fleeting shadowy walls. I don't need to get everything crystal clear, though.I mentally label the room as the "Ch. 14 room," or the "rates room." That means doing lots of things to make the label stick. I speak the words in my head. I picture a banner with them printed on it hanging from the ceiling. Or if I can't see it clearly, I picture a banner-like thing and just know that it says "rates room." I picture hourglasses sitting on furniture - the image comes to me much more easily than a banner with text.I imagine the crucial equations sitting on columnar pedestals. Again, they are easier to picture for some reason. I make sure that I can visually see each piece of the equation. I imagine a label on the pedestal - one says "t1/2" for the half-life equations; the other says "Integrated rate law," with an hourglass made out of two intertwined integration signs.I look up a picture of Svante Arrhenius and picture him in the room. He takes on a life of his own. 
I can tell he's proud of his equation, which appears in bold letters at the back of the room, with a sort of curtain around it. He's the keeper of the room. It takes on a calm atmosphere here. He's also the doorman. I have to tell him how to calculate the overall reaction order in order to enter. But if he knows that I know how to do it, I don't have to explain it in as much detail. We have a psychic relationship.Second RoomMoving backwards to Ch. 13, I once again imagine a new room, the Solutions Room. Standing there, I can still see the entrance to the first room - I can even picture some of the things inside, from a distance. I start populating the room with symbols, objects, equations, and the chemists they're named after. They are happy to explain things to me as many times as necessary.Abstract concepts that the book presents in words, still images, or equations get visualized in new ways. Partial pressures become two beakers, one with yellow steam and the other with red steam emerging. They get mixed into a single beaker that now emits a mixture of yellow and red steam, somewhere in between the amounts that the yellow and red beaker emit on their own. François-Marie Raoult is standing by to demonstrate his law to me. There's a bottle of Coke with Henry's Law printed on it.The solubility rules are accessible when I glance at the periodic table on the wall. Rather than seeing a list of rules, I see the individual elements, which take on a life of their own. The alkali metals, ammonium, and nitrate zoom around the room, not interested in talking to anybody, on their own adventure. The halogens are too cool to talk to anybody except silver, mercury, and lead, who are immensely popular. Silver had a falling out with acetate, who's a communist and not interested in money. Be sensitive! Chromate is a rich chick in an expensive chrome-hubbed car cruising around, looking for a boyfriend. Sulfur is bicurious, so she'll bond not only with the transition metals but with astatine, arsenic, bismuth, and lead.I practice traveling back and forth between the first and second rooms. They stay remarkably stable. Unlike recalling flash cards or the textbook, when I'm in my memory palace the ideas come almost unbidden. The elemental relationships I've used to conceptualize the solubility rules come bursting out of the periodic table.Further roomsI continue this for 6 chapters over the course of several hours. I am shocked and delighted at how easy and pleasant it is both to create the memory palace and to access the memories stored there. Not everything goes in - just the bits that I tend to forget. If I'm not sure about something, the famous chemists who populate the rooms will remind me, literally by talking me through their ideas.The presence of the chemists is also helpful for keeping me focused. I suspect that my brain is recruiting my social motivation. If the only people in my environment are genius chemists who are delighted to keep me interested in chemistry, then why would I get distracted by the internet?I find it deeply reassuring to stand in the Intermolecular Forces room and know that just by walking a few rooms over, I can get back to the Rates Room, where all the equations are stored. Perhaps I've built a path through the mental mountains? The next day, it's pretty easy to get back to the memory palace, and everything is as I left it. I just have to close my eyes and wait for a moment to get back in.Concerns and questionsI also did a memory palace for calculus. 
I did it day-of because I felt more confident about calculus, it wasn't a comprehensive exam, and it was open book. I'll describe it another time. Mostly, it helped me feel more confident that I understood the breadth of the material. I found it much more convenient to refer to the textbook when necessary.But for tomorrow's, I'm very glad that I now have a store of chemical facts in my memory palace. The anxiety that had been plaguing me this week has vanished. I'm not certain that it will really help. But I do anticipate continuing to use this technique in the future. I think it helps not only my memory but my synthesis of learning.For example, our chapter on Lewis Structures also introduces the topic of electronegativity and formal charge. Anyone who's taken first year gen chem knows they're related: any negative formal charge should go on the most electronegative atom.But when I would stare at the electronegativity pages in the textbook, I would focus on the rules offered there: the range of EN difference that characterizes a covalent vs. ionic bond, the periodic trend in EN, and how to calculate net dipole moment. Likewise, in the formal charge section, I would focus on how to calculate the charge.It took seeing Linus Pauling holding a symbol for electronegativity in one hand, and a symbol for formal charge in the other, to more deeply understand that these are not just two different calculations to do. They're deeply related ways of modeling how molecules are structured. They go together like yeast and flour.I also see how much faster and more intuitively I think about both chemistry and calculus when I can visualize them. It's just no comparison. Trying to remember Raoult's Law by remembering a verbal description or picturing the equation is just no comparison to looking at those yellow and red steaming beakers. Similarly, it's so helpful to picture a 3D mountain range and see a tiny little yellow gradient vector surfing up and down it on the steepest slopes.AdviceI'm a true beginner here, so I don't want to make any grand claims about how to learn or how useful these techniques are. But I'd give a few pointers so far:If you think you can't visualize, you might be wrong.Start by just closing your eyes and allowing your brain to produce images without trying to control them. It seems important to have a relaxed, accepting attitude toward my own brain's way of visualizing.The way to add control is to take a gentle, experimental attitude. Go with what's easy. Let yourself be surprised by what's do-able and what's useful. Is it hard to picture letters on a banner? Try visualizing Isaac Newton and ask him to say the thing you're trying to remember. Maybe you don't even need to do that - maybe it's enough to picture a vague stripe in the sky that you just know is a banner, and you just know has the words "rate room" printed on it.It takes a while to navigate the rooms and get the information you need, so this might need refinement or practice if you've got to remember things quickly." The above is a chunk of text from a post created by a user on the website LessWrong. Can you explain what this text is saying as if I've never finished my first 2 years of college? This text is essentially a detailed description of how the author used a memory technique called the "memory palace" to study for their chemistry and calculus exams. 
The memory palace technique, also known as the "method of loci," is a method used to memorize a large amount of information by associating each item with a specific location in a mental image of a spatial environment. This technique has been used since ancient times and is based on the theory that people can remember places better than they can remember plain facts. The author was feeling overwhelmed by the amount of information they had to memorize for their exams. Instead of trying to rote memorize everything, they decided to use the memory palace technique. They visualized a room for each chapter of their chemistry textbook, and populated each room with images representing key concepts and equations from that chapter. For example, they visualized a room for the chapter on rates, with the key equations displayed on pedestals, and an image of the chemist Svante Arrhenius, who is associated with one of the equations. They also visualized a room for the chapter on solutions, with images representing the concepts of partial pressures and solubility rules. The author found this method to be very effective. They could easily recall the information by mentally walking through the rooms, and the images helped them understand the concepts more deeply. They also found that the process of creating the memory palace was enjoyable and reduced their anxiety about the exams. The author suggests that this technique could be useful for anyone who needs to memorize a large amount of information, even if they don't think they have a good visual imagination. They advise starting with a relaxed attitude, allowing the brain to produce images naturally, and then gradually adding control to visualize the desired information. Could you elaborate on how the memory palace technique might be adapted for subjects that are less concept-based and more skill-based, such as learning a new language or a musical instrument? Certainly. While the memory palace technique is often used for memorizing facts or concepts, it can also be adapted for skill-based learning such as learning a new language or a musical instrument. For language learning, each room in the memory palace could represent a different category of vocabulary or grammar rules. For example, one room could be dedicated to verbs, another to nouns, and another to adjectives. Within each room, you could visualize objects or scenes that represent the words or grammar rules you're trying to learn. For instance, in the verb room, you might visualize an object in action to represent each verb. To remember the Spanish verb "correr" (to run), you could visualize a person running. To add another layer, you could place these verbs in different areas of the room based on their conjugation patterns or tenses. For learning a musical instrument, you could use the memory palace to remember scales, chords, or pieces of music. Each room could represent a different key, scale, or piece of music. Within each room, you could visualize objects or scenes that represent the notes or chords. For example, to remember a C Major scale, you could visualize a ladder with each rung representing a different note in the scale. To remember a piece of music, you could visualize a journey through a series of rooms, with each room representing a different section of the piece. The key to using the memory palace technique for skill-based learning is to create vivid, memorable images that represent the skills you're trying to learn. 
The more unique and emotionally engaging the images, the easier it will be to remember them. And as with any form of learning, practice is crucial. The more you practice navigating your memory palace and recalling the images, the more ingrained the skills will become. Reflecting on your explanation, what could be the potential challenges or limitations in applying the memory palace technique to skill-based learning such as language acquisition or musical instrument mastery? While the memory palace technique can be a powerful tool for enhancing memory, it does come with potential challenges and limitations, especially when applied to skill-based learning. 1. Time and Effort: Constructing a memory palace can be time-consuming and requires a good deal of mental effort, especially when you're first starting out. You need to create a detailed, vivid mental image for each piece of information you're trying to remember, which can be a daunting task if you're trying to learn a large amount of information. 2. Complexity: Skill-based learning often involves not just memorizing information, but also understanding complex concepts, mastering physical techniques, and developing intuition or "feel" for the skill. While the memory palace technique can help with the memorization aspect, it may not be as effective for these other aspects of skill learning. 3. Practice: Skills like language and music require a lot of practice to master. While a memory palace can help you remember vocabulary or musical notes, it can't replace the need for repeated practice to reinforce the skill and make it automatic. 4. Over-reliance: There's a risk of becoming over-reliant on the memory palace as a crutch, which could potentially hinder your ability to recall information without it. For example, if you're trying to speak a new language in a real-world situation, you might not have time to mentally walk through your memory palace to find the words you need. 5. Individual Differences: Not everyone finds the memory palace technique helpful. Some people might struggle with visualizing mental images, while others might find the technique confusing or distracting. It's important to find a learning strategy that works best for you as an individual. How might one overcome the challenges associated with over-reliance on the memory palace technique when learning a new language or musical instrument? Overcoming the challenges associated with over-reliance on the memory palace technique when learning a new language or musical instrument involves incorporating other learning strategies and methods of practice. Here are a few suggestions: 1. Active Practice: For language learning, try to use the new vocabulary or grammar rules in real-life situations as much as possible. This could be through conversation practice with a language partner, writing essays or diary entries, or even just talking to yourself. For music, this might involve playing the piece or scale over and over, or improvising using the new chords or techniques you've learned. 2. Spaced Repetition: This is a learning technique where you review information at increasing intervals over time. This can help move information from your short-term memory to your long-term memory, reducing your reliance on the memory palace. 3. Multisensory Learning: Try to engage as many of your senses as possible. For language learning, this could involve listening to the language, speaking it out loud, writing it down, and even tasting the food of the culture. 
For music, this could involve listening to the piece, watching others play it, and physically playing it yourself. 4. Progressive Difficulty: Start with simpler tasks and gradually move on to more complex ones. For example, start by memorizing a few key vocabulary words or chords, then move on to full sentences or songs. 5. Regular Review: Even after you've memorized something using the memory palace technique, continue to review it regularly. This can help reinforce the information in your memory and reduce your reliance on the memory palace. 6. Use of Technology: Language learning apps, online courses, and other digital tools can provide interactive and engaging ways to practice and reinforce what you've learned. By combining the memory palace technique with these other strategies, you can enhance your learning and reduce your reliance on any one method.
57a2d42a-eb8e-43f2-87d7-5080ff3dd8a1
trentmkelly/LessWrong-43k
LessWrong
Testing Genetic Engineering Detection with Spike-Ins Thanks to our collaborators Marc Johnson and Clayton Rushford at the University of Missouri for their critical work on the laboratory experiments and sample preparation for this test, and to Evan Fields, Mike McLaren, and Simon Grimm for their feedback on drafts.

Summary

The NAO has been developing an early warning system for pandemics, focusing specifically on stealth pathogens that could spread widely before demonstrating identifiable symptoms. In June we described progress with our chimera detection approach, and demonstrated how it flagged engineered pathogens in simulated data and engineered viral vectors in real data. Here we describe a series of real-world tests, "spiking" samples of municipal wastewater with three different engineered HIV-derived viral particles. In each case our chimera detection system correctly identified sequencing reads containing the junctions between the original and modified sections of the genomes, with performance mildly exceeding our simulations.

Experiment

Our partners Marc Johnson and Clayton Rushford at the University of Missouri (MU) have been sequencing municipal wastewater influent on a regular basis. At the end of September they took one of these samples, split it into four replicates, and spiked HIV-derived engineered viral particles into three of them. The genomes of the viral particles looked something like the following: The three particles were developed at MU, and each derives from the HIV-1 lab strain pNL4-3, with different inserted regions:

* v530: DsRed, a red fluorescent protein
* v549: Green Fluorescent Protein (GFP)
* Puro: a CMV promoter and a codon-optimized puromycin resistance gene

Our MU collaborators spiked 10uL into each of these samples. In the case of Puro the final concentration was 7.1k copies per mL. With v530 and v549, however, they haven't measured the concentration and so our assessment of those samples will be only in relative terms. In addition to the three spiked samples they also pr
545eecb8-e693-4c2f-bc30-bc3c7d4b8408
trentmkelly/LessWrong-43k
LessWrong
[Poll] Who looks better in your eyes?

This is a thread where I'm trying to figure out a few things about signalling on LessWrong and need some information, so please answer the poll immediately after reading about the two individuals.

The two individuals:

A. Sees that an interpretation of reality shared by others is not correct, but tries to pretend otherwise for personal gain and/or safety.

B. Fails to see that an interpretation of reality shared by others is flawed. He is therefore perfectly honest in sharing the interpretation of reality with others. The reward regime for outward behaviour is the same as with A.

To add a trivial inconvenience that matches the inconvenience of answering the poll before reading on, comments on what I think the two individuals signal, what the trade-off is, and what I speculate the results might be here versus the general population, are behind this link.
b9d1bf54-6fdf-4a45-b810-26ea7f1f175b
trentmkelly/LessWrong-43k
LessWrong
(Approximately) Deterministic Natural Latents

Background: Natural Latents: The Math, Natural Latents: The Concepts, Why Care About Natural Latents?, the prototypical semantics use-case. This post does not assume that you’ve read all of those, or even any of them.

Suppose I roll a biased die 1000 times, and then roll the same biased die another 1000 times. Then...

* Mediation: The first 1000 rolls are approximately independent of the second 1000 given the bias (to reasonable precision).
* Redundancy: I can estimate the die’s bias (to reasonable precision) with high confidence from either the first or second 1000 rolls.

The die’s bias is therefore a natural latent, which means it has various nice properties.

* Minimality: The bias is the smallest summary of all the information about the first 1000 rolls relevant to the second 1000 (and vice-versa).
* Maximality: The bias is the largest piece of information which can be calculated from the first 1000 rolls and also can separately be calculated from the second 1000 rolls.
* Any other variable which satisfies the above properties must tell us (approximately) the same information about the die rolls as the bias.

Furthermore, the bias is a(n approximate) deterministic natural latent: the die’s bias (to reasonable precision) is approximately determined by[1] the first 1000 die rolls, and also approximately determined by the second 1000 die rolls. That implies one more nice property:

* Uniqueness: The bias is the unique-up-to(-approximate)-isomorphism latent which has the above properties, making it a natural Schelling point for communication between agents.

We’ve proven all that before, mostly in Natural Latents: The Math (including the addendum added six months after the rest of the post). But it turns out that the math is a lot shorter and simpler, and easily yields better bounds, if we’re willing to assume (approximate) determinism up-front. That does lose us some theoretical tools (notably the resampling construction), but it gives a cleaner foundatio
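As a quick numerical illustration of the redundancy condition (an aside, not from the original post), the sketch below simulates the two blocks of 1000 rolls for a made-up bias vector and shows that either block recovers roughly the same estimate of the bias.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bias for a six-sided die (probabilities chosen arbitrarily, summing to 1).
true_bias = np.array([0.30, 0.10, 0.15, 0.05, 0.25, 0.15])

def estimate_bias(rolls):
    """Empirical face frequencies -- the natural estimate of the die's bias."""
    return np.bincount(rolls, minlength=6) / len(rolls)

first_1000 = rng.choice(6, size=1000, p=true_bias)
second_1000 = rng.choice(6, size=1000, p=true_bias)

# Redundancy: either block of rolls pins down (approximately) the same bias.
est_first = estimate_bias(first_1000)
est_second = estimate_bias(second_1000)
print("estimate from first 1000 rolls: ", np.round(est_first, 3))
print("estimate from second 1000 rolls:", np.round(est_second, 3))
print("largest disagreement between the two estimates:",
      np.round(np.max(np.abs(est_first - est_second)), 3))
```

With 1000 rolls per block, the two estimates typically agree to within a few percentage points per face, which is the "reasonable precision" sense in which the bias is redundantly encoded in either block.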
732bc109-0b28-45a4-bce5-34db427721d2
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Counterfactual Induction (Lemma 4)

**Lemma 4:** ∀V∈Λ:∃V∈Ξ′:t(V)A(ϕ)≤VA(ϕ)

**Notation:** For notational convenience, we will abuse notation a bit and abbreviate f(g(A)) as A.
This is just turning A from a finite collection of axioms to a sentence, and from there to the set of worlds where that sentence is true. And if E∈L, we will abbreviate E/P(S) as ¬E. E,F denote arbitrary elements of L, and by our previous abuse of notation, A is an element of A when it appears as a subscript, and L otherwise, via the translation from groups of sentences to sets of worlds. Our axioms of interest are: 1: Unitarity: ∀A:VA(A)=1 2: Subadditivity: ∀A,E,F:VA(E∪F)≤VA(E)+VA(F) 3: Empty Set: ∀A:VA(∅)=0 4: Law of Excluded Middle: ∀A,E:VA(E)+VA(¬E)=1 5: Sane Conditioning: ∀A,E:VA(E)=VA(A∩E) 6: Monotonicity: ∀A,E,F:A∩E⊆A∩F→VA(E)≤VA(F) Notice that 6 implies 5. 6 is the powerset analogue of propositional monotonicity (where the precondition is A⊢pcϕ→ψ), and 5 is the powerset analogue of propositional equivalence (where the precondition is A⊢pcϕ↔ψ). Also note that 1,4, and 5 imply 3 because VA(∅)+VA(P(S))=1 by LEM, and VA(P(S))=VA(A)=1 by sane conditioning and unitarity. Ξ′ is the set of all valuations that fulfill properties 1,2,3,4, and 6 (this also gets 5) Θ′ is the set of all valuations that fulfill properties 1,2, and 3, while Θ′lem,Θ′cond,Θ′mono are the subsets of Θ′ that fulfill properties 4, 5, and 6 respectively. Ξ′=Θ′lem∩Θ′mono. Also, in some of the sublemmas, we will be working heavily with the powerset lattice L. It is the lattice of all subsets of worlds, ordered by set inclusion. Sup in this lattice corresponds to union, inf corresponds to intersection, and given some set A, A↓ refers to the ideal of all subsets of A, and A↑ refers to the filter of all supersets of A. Thinking of this lattice may help give some geometric intuition to the proofs for the reader, and we will freely move back and forth between the subset view and the lattice view. The subvaluation property is defined as follows: V′≤V↔∀A,E:V′A(E)≤VA(E). A similar definition applies when we're working with sentences rather than sets. This means that the new valuation V′ assigns an equal or lower value to every statement, for all conditionals. The property of being a subvaluation is preserved when the function t:Θ′lem∩Θ′cond→V (which restricts to Ξ′→Ξ)is applied, so if V′≤V, then t(V′)≤t(V). Being a subvaluation is also transitive. Lemmas 5, 6, and 7 will be used in the proof and proved afterwards. **Lemma 5:** ∀V∈Λ:∃V∈Θ′cond:t(V)≤V. For any valuation which applies to sentences that fits the "shortest disproof" property and a few others, there's another valuation which, when translated into a valuation on sentences, is a subvaluation. **Lemma 6:** ∀V∈Θ′cond:∃V′∈Θ′mono:V′≤V For any valuation over sets of worlds that fulfills sane conditioning, there is a valuation over sets of worlds which fulfills subset monotonicity, and is a subvaluation. For Lemma 7, we need to introduce the concept of a frozen set. F:Θ′×A→P(A↓) is a function which takes a conditional A and a valuation V, and returns a set of "frozen sets" at or below A, defined as the minimum closure of the following two rules: 1: A is frozen. 2: If B∪C is frozen, and VA(B∪C)=VA(B)+VA(C), then B and C are frozen. Notice that this implies ∅ is frozen, since A is. If |F(V,A)|=|A↓|, then A is "totally frozen" (note that this refers to A in the role of a conditional, not A as a statement within the lattice). If this holds for all A, then V is "totally frozen". The set of these valuations is denoted as Θ′froz. 
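(As an aside not in the original proof, axioms 1-6 can be sanity-checked by brute force on a small powerset lattice. The sketch below uses uniform conditional probability, V_A(E) = |E ∩ A| / |A|, as an illustrative candidate valuation — my choice, not a construction from this post — and confirms that it satisfies unitarity, subadditivity, empty set, excluded middle, sane conditioning, and monotonicity for every nonempty conditional on a three-world lattice. Lemma 4 itself is about finding such well-behaved valuations beneath an arbitrary V∈Λ, which this check does not address.)

```python
from itertools import combinations

def powerset(worlds):
    ws = list(worlds)
    return [frozenset(c) for r in range(len(ws) + 1) for c in combinations(ws, r)]

WORLDS = frozenset({"w1", "w2", "w3"})
SETS = powerset(WORLDS)
CONDITIONALS = [A for A in SETS if A]  # nonempty conditionals only
EPS = 1e-9

def V(A, E):
    """Illustrative candidate valuation: uniform conditional probability of E given A."""
    return len(E & A) / len(A)

def axioms_hold():
    for A in CONDITIONALS:
        if abs(V(A, A) - 1) > EPS:                            # 1: unitarity
            return False
        if abs(V(A, frozenset())) > EPS:                      # 3: empty set
            return False
        for E in SETS:
            if abs(V(A, E) + V(A, WORLDS - E) - 1) > EPS:     # 4: law of excluded middle
                return False
            if abs(V(A, E) - V(A, A & E)) > EPS:              # 5: sane conditioning
                return False
            for F in SETS:
                if V(A, E | F) > V(A, E) + V(A, F) + EPS:     # 2: subadditivity
                    return False
                if (A & E) <= (A & F) and V(A, E) > V(A, F) + EPS:  # 6: monotonicity
                    return False
    return True

print("axioms 1-6 hold for the toy valuation:", axioms_hold())
```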
**Lemma 7:** *For all* V∈Θ′cond, if V *isn't totally frozen, there's a* V′∈Θ′cond *such that:* V′≤V, *and, for all* A *which aren't totally frozen,* |F(V,A)|<|F(V′,A)| **Proof of Lemma 4:** Select an arbitrary V∈Λ. Now, define a sequence of valuations {Vn}n∈N as follows: Let V0 be the valuation guaranteed to exist by Lemma 5, so V0∈Θ′cond and t(V0) is a subvaluation of V. For odd Vn, they are the valuation guaranteed to exist by applying Lemma 6 to Vn−1. They are all subvaluations of Vn−1, and all lie in the closed set Θ′mono. For even Vn, if Vn−1 is totally frozen, they are just equal to Vn−1. Otherwise, Vn is defined to be equal to the valuation produced by repeatedly applying Lemma 7 until a totally frozen set is produced. (this is guaranteed to exist by the finite size of A and L, and the total number of frozen nodes increasing for every application of Lemma 7.) Note that these valuations all lie in Θ′cond. If a valuation V is totally frozen and fulfills monotonicity, it also fulfills the law of the excluded middle. The proof of this is as follows: Select an arbitrary E with nonzero intersection with A, which also isn't a superset of A. The set E∩A is frozen. By the definition of frozen, this means there is a strict superset of E∩A, E1, which is still a subset of A, and a corresponding E1∗ which is also a subset of A. More formally, E1=(E∩A)∪E1∗, and VA(E∩A)+VA(E1∗)=VA(E1) by the definition of when a set freezes. Keep repeating this argument until you eventually hit a set En equivalent to A itself. (E∩A)∪⋃1≤i≤nEi∗=A. This means that (¬E∩A)⊆⋃1≤i≤nEi∗. Because VA(E∩A)+∑1≤i≤nVA(Ei∗)=VA(En)=1, and because VA(¬E∩A)≤VA(⋃1≤i≤nEi∗)≤∑1≤i≤nVA(Ei∗) (by monotonicity and subadditivity respectively), and because VA(¬E∩A)<∑1≤i≤nVA(Ei∗) impliesVA(E∩A)+VA(¬E∩A)<VA(A)=1, which is a violation of subadditivity, this means that VA(¬E∩A)=∑1≤i≤nVA(Ei∗) and thus that VA(E∩A)+VA(¬E∩A)=1. Now, because VA(E)=VA(E∩A) by monotonicity, and the same goes for VA(¬E), we get that VA(E)+VA(¬E)=1. Now, assume E∩A=∅. Then VA(E)=VA(E∩A)=VA(∅)=0 by sane conditioning and empty set. Also, ¬E∩A=A , so VA(¬E)=VA(¬E∩A)=VA(A)=1 by sane conditioning and unitarity, soVA(E)+VA(¬E)=0+1=1. A symmetric argument applies when E∩A=A, showing LEM for all sets. Further, the set of frozen valuations Θ′froz is a closed set. Fixing a Cauchy sequence of valuations, for all of the finitely many E,F pairs, additivity (not subadditivity!) either holds or doesn't hold on any given timestep. If it doesn't hold for some E,F pair in the limiting valuation, then there is some finite time after which additivity never holds. Thus, there is some finite time after which all violations-of-additivity in the limit are a violation-of-additivity in the corresponding valuation at timestep n. Because the limiting distribution has as many or more instances of additivity holding than the timestep n valuation, and the timestep n valuation is totally frozen, then the limiting distribution must be totally frozen as well. Now, in the sequence of valuations we just defined, the odd-numbered elements are in the closed set Θ′mono, and the even-numbered elements are all in the closed set Θ′froz∩Θ′cond. Because it's a sequence of subvaluations, Vn,A(E) decreases or stays the same with each step, for all A and E. 
Due to the fact that we have a monotonically decreasing function with a lower bound for each of the finitely many arguments, this means that we have defined a Cauchy sequence of valuations, so there is a limiting valuation V∞ which is a subvaluation of all other ones in the sequence. By splitting the sequence into the even-numbered valuations and the odd-numbered ones, both of which limit to V∞, and applying the closedness of the sets Θ′mono and Θ′froz, we find that V∞∈Θ′mono∩Θ′froz and since this implies LEM, we know that this means V∞∈Θ′mono∩Θ′lem=Ξ′. Anyways, since V∞∈Ξ′, and it's a subvaluation of the whole sequence, in particular V0, and the property of being a subvaluation can be transferred through t, then t(V∞)≤t(V0), and we already have by Lemma 5 that t(V0)≤V, then the transitivity of subvaluations implies the desired Lemma 4. **Proof of Lemma 5:** ∀V∈Λ:∃V∈Θ′cond:t(V)≤V Because V∈Λ, it fulfills subadditivity, unitarity, LEM, and propositional equivalence. We must then show that there is a V which is a subvaluation, and fulfills unitarity, subadditivity, LEM, and sane conditionals. Because unitarity, LEM, and sane conditionals imply the empty set axiom, we can get that V∈Θ′cond. Let VA(E):=minψ∈f−1(E)VA(ψ) . Although, by propositional equivalence for V, if f(Aϕ)=f(ψ), VA(ϕ)=VA(ψ).By the definition of a subvaluation, t(V) is a subvaluation, because for an arbitrary A,ϕ pair, t(V)A(ϕ)=VA(f(ϕ))=minψ∈f−1(f(ϕ))VA(ψ)=VA(ϕ) Also, VA(A)=minψ∈f−1(A)VA(ψ)=VA(A)=1, so unitarity for V is shown by unitarity for V. Given an arbitrary E and F, let ψE and ψF be some representative for E and F. VA(E∪F)=minψ∈f−1(E∪F)VA(ψ)=VA(ψE∨ψF)≤VA(ψE)+VA(ψF)=VA(E)+VA(F) Therefore, by the definition of V, ψE∨ψF∈f−1(E∪F), and subadditivity for V, we get subadditivity for V. Now to show LEM. Let ψE be a representative for E. Note that ¬ψE is a representative for ¬E. VA(E)+VA(¬E)=VA(ψE)+VA(¬ψE)=1 , by LEM for V. Now for sane conditionals. Fix ψE and ψE∩A in the usual way. Assuming A, these two statements are propositionally equivalent. VA(E)=VA(ψE)=VA(ψA∩E)=VA(A∩E) by propositional equivalence for V. And we're done! **Proof of Lemma 6:** ∀V∈Θ′cond:∃V′∈Θ′mono:V′≤V Our task is to go from an arbitrary V which fulfills subadditivity, unitarity, empty set, and sane conditionals to a V′ which is a subvaluation, and fulfills subaddivity, unitarity, empty set, and monotonicity. Let V′A(E):=minF∈E↑VA(F). Clearly this is a subvaluation because E∈E↑, so every set can only remain the same or decrease in value. To prove unitarity, apply sane conditionals and unitarity for V to show V′A(A)=minF∈A↑VA(F)=minF∈A↑VA(F∩A)=VA(A)=1. To prove empty set, apply empty set for V to show V′A(∅)=minF∈∅↑VA(F)≤VA(∅)=0 To prove sane conditionals for V′, (Not V! Notice the '. We'll need this as an intermediate result), note that if F∈E↑, then F∩A∈(E∩A)↑, and by sane conditionals for V, VA(F)=VA(F∩A). Therefore, minF∈(E∩A)↑VA(F)=minF∈E↑VA(F). E↑ is a larger subset of the lattice than (E∩A)↑, so you'd naively think that minimizing over E↑ produces a smaller value, but this outcome is prevented by any minimizing F having F∩A produce the same value by sane conditioning, and F∩A being in (E∩A)↑. Then, V′A(E)=minF∈E↑VA(F)=minF∈(E∩A)↑VA(F)=V′A(E∩A) To prove subset monotonicity, assume that E∩A⊆F∩A. 
Since the latter is above the former, this means that (F∩A)↑⊆(E∩A)↑, and, by the definition of V′, and sane conditionals for V′, that V′A(E)=V′A(A∩E)=minG∈(A∩E)↑VA(G)≤minG∈(A∩F)↑VA(G)=V′A(A∩F)=V′A(F) To prove subadditivity, notice that for all E,F, (E↑)∩(F↑)=(E∪F)↑. (ie, a set having E and F as subsets also has E∪F as a subset, and vice-versa). Fix arbitrary E and F. E↑ and F↑ have a minimal-valued element in them, according to VA, call these minimal-valued elements G and H. Also, G∪H∈E↑∩F↑=(E∪F)↑. Now, by the definition of V′, G∪H∈(E∪F)↑, and VA(G)=V′A(E) (and similar for H and F) V′A(E∪F)=minI∈(E∪F)↑VA(I)≤VA(G∪H)≤VA(G)+VA(H)=V′A(E)+V′A(F) Therefore, V′ is a subvaluation of V that is in Θ′mono. **Proof of Lemma 7:** By assumption, the distribution V is not totally frozen, so there is some set of A's which remain unfrozen. The task is to show unitarity, subadditivity, empty set, sane conditioning, that V′ is a subvaluation, that for all unfrozen A, the number of frozen nodes in A↓ strictly increases. For each unfrozen A, and unfrozen E∈A↓, V′A(E):=(1−ϵA)VA(E). If E is frozen, V′A(E)=VA(E). Otherwise, the valuation is extended outside of A↓ by setting V′A(E):=V′A(E∩A). The ϵ is selected for each A to be the maximum possible which does not violate a subadditivity constraint when applied. We will show that if A is not totally frozen, ϵ is positive. Obviously, unitarity applies because V′A(A)=VA(A)=1, by unitarity of V, and the set A being frozen. Similarly, empty set applies because V′A(∅)=VA(∅)=0 and ∅ is frozen. Sane conditioning obviously applies because of how the valuation was extended outside of A↓. To show subadditivity, we carry out a proof by cases. First, observe that V′A(E∪F)=V′A((E∪F)∩A). In the first case, (E∪F)∩A is unfrozen. Then V′A((E∪F)∩A)=(1−ϵA)VA((E∪F)∩A)≤(1−ϵA)(VA(E∩A)+VA(F∩A))≤V′A(E∩A)+V′A(F∩A)=V′A(E)+V′A(F) And subadditivity is shown. Admittedly, E∩A or F∩A might be frozen, but taking this into account, the final ≤ still applies. In case 2, (E∪F)∩A is frozen, and VA((E∪F)∩A)=VA(E∩A)+VA(F∩A) By the definition of frozen, E∩A and F∩A are frozen as well, so none of the valuations change, and the equality is preserved and subadditivity holds. In case 3, (E∪F)∩A is frozen, and VA((E∪F)∩A)<VA(E∩A)+VA(F∩A) Because ϵA was selected to be the maximum ϵ which did not violate any subadditivity constraints, V′A(E∪F)=V′A((E∪F)∩A)≤(1−ϵ)(VA(E∩A)+VA(F∩A))≤V′A(E∩A)+V′A(F∩A)=V′A(E)+V′A(F) In short, while the upper bound on VA((E∪F)∩A) may shrink, while the value itself remains the same, the < just leads to a narrowing of the gap. ϵA can be as large as 1 and not violate subadditivity in case 1 or 2, but due to (finitely many) instances of case 3 existing in V, since it isn't totally frozen, ϵA will be positive (due to each < being associated with its own ϵ that measures the degree of subadditivity), and there only being finitely many of them), and in fact ϵA will end up being just big enough to produce a new equality of the form V′A((E∪F)∩A)=V′A(E∩A)+V′A(F∩A) where (E∪F)∩A was frozen, and one of either E∩A or F∩A was not frozen. However, due to the new equality which was produced, at least one extra frozen node is produced. Showing that V′ is a subvaluation is trivial, because V′A(E)=V′A(E∩A)≤VA(E∩A)=VA(E).
660fd66b-dae4-4ec4-8b39-abeca5835eee
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Common misconceptions about OpenAI I have recently encountered a number of people with misconceptions about OpenAI. Some common impressions are accurate, and others are not. This post is intended to provide clarification on some of these points, to help people know what to expect from the organization and to figure out how to engage with it. It is not intended as a full explanation or evaluation of OpenAI's strategy.  The post has three sections: * Common accurate impressions * Common misconceptions * Personal opinions The bolded claims in the first two sections are intended to be uncontroversial, i.e., most informed people would agree with how they are labeled (correct versus incorrect). I am less sure about how commonly believed they are. The bolded claims in the last section I think are probably true, but they are more open to interpretation and I expect others to disagree with them. Note: I am an employee of OpenAI. Sam Altman (CEO of OpenAI) and Mira Murati (CTO of OpenAI) reviewed a draft of this post, and I am also grateful to Steven Adler, Steve Dowling, Benjamin Hilton, Shantanu Jain, Daniel Kokotajlo, Jan Leike, Ryan Lowe, Holly Mandel and Cullen O'Keefe for feedback. I chose to write this post and the views expressed in it are my own. Common accurate impressions --------------------------- **Correct: OpenAI is trying to directly build safe AGI.** OpenAI's [Charter](https://openai.com/charter/) states: "We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome." OpenAI leadership describes trying to directly build safe AGI as the best way to currently pursue OpenAI's mission, and have expressed concern about scenarios in which a bad actor is first to build AGI, and chooses to misuse it. **Correct: the majority of researchers at OpenAI are working on capabilities.** Researchers on different teams often work together, but it is still reasonable to loosely categorize OpenAI's researchers (around half the organization) at the time of writing as approximately: * Capabilities research: 100 * Alignment research: 30 * Policy research: 15 **Correct: the majority of OpenAI employees did not join with the primary motivation of reducing existential risk from AI specifically.** My strong impressions, which are not based on survey data, are as follows. Across the company as a whole, a minority of employees would cite reducing existential risk from AI as their top reason for joining. A significantly larger number would cite reducing risk of some kind, or other principles of beneficence put forward in the OpenAI [Charter](https://openai.com/charter/), as their top reason for joining. Among people who joined to work in a safety-focused role, a larger proportion of people would cite reducing existential risk from AI as a substantial motivation for joining, compared to the company as a whole. Some employees have become motivated by existential risk reduction since joining OpenAI. **Correct: most interpretability research at OpenAI stopped after the Anthropic split.** Chris Olah led interpretability research at OpenAI before becoming a cofounder of Anthropic. Although several members of Chris's former team still work at OpenAI, most of them are no longer working on interpretability. 
Common misconceptions --------------------- **Incorrect: OpenAI is not working on scalable alignment.** OpenAI has teams focused both on practical alignment (trying to make OpenAI's deployed models as aligned as possible) and on scalable alignment (researching methods for aligning models that are beyond human supervision, which could potentially scale to AGI). These teams work closely with one another. Its recently-released alignment research includes [self-critiquing models](https://openai.com/blog/critiques/) ([AF discussion](https://www.alignmentforum.org/posts/AHBejZBsaTR6dkRHs/ai-written-critiques-help-humans-notice-flaws)), [InstructGPT](https://openai.com/blog/instruction-following/), [WebGPT](https://openai.com/blog/webgpt/) ([AF discussion](https://www.alignmentforum.org/posts/jWkqACmDes6SoAiyE/truthful-lms-as-a-warm-up-for-aligned-agi)) and [book summarization](https://openai.com/blog/summarizing-books/) ([AF discussion](https://www.alignmentforum.org/posts/cTzsfK4vZDPBJGPtP/summarizing-books-with-human-feedback-recursive-gpt-3)). OpenAI's approach to alignment research is described [here](https://openai.com/blog/our-approach-to-alignment-research/), and includes as a long-term goal an [alignment MVP](https://aligned.substack.com/p/alignment-mvp) ([AF discussion](https://www.alignmentforum.org/posts/fYf9JAwa6BYMt8GBj/link-a-minimal-viable-product-for-alignment)). **Incorrect: most people who were working on alignment at OpenAI left for Anthropic.** The main group of people working on alignment (other than interpretability) at OpenAI at the time of the Anthropic split at the end of 2020 was the Reflection team, which has since been renamed to the Alignment team. Of the 7 members of the team at that time (who are listed on the [summarization paper](https://arxiv.org/pdf/2009.01325v1.pdf)), 4 are still working at OpenAI, and none are working at Anthropic. *Edited to add*: this fact alone is not intended to provide a complete picture of the Anthropic split, which is more complicated than I am able to explain here. **Incorrect: OpenAI is a purely for-profit organization.** OpenAI has a hybrid structure in which the highest authority is the board of directors of a non-profit entity. The members of the board of directors are listed [here](https://openai.com/about/). In [legal paperwork](https://openai.com/blog/openai-lp/#themissioncomesfirst) signed by all investors, it is emphasized that: "The [OpenAI] Partnership exists to advance OpenAI Inc [the non-profit entity]'s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The General Partner [OpenAI Inc]'s duty to this mission and the principles advanced in the OpenAI Inc Charter take precedence over any obligation to generate a profit. The Partnership may never make a profit, and the General Partner is under no obligation to do so." **Incorrect: OpenAI is not aware of the risks of race dynamics.** OpenAI's [Charter](https://openai.com/charter/) contains the following merge-and-assist clause: "We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. 
We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”" **Incorrect: OpenAI leadership is dismissive of existential risk from AI.** OpenAI has a Governance team (within Policy Research) that advises leadership and is focused on strategy for avoiding existential risk from AI. In multiple recent all-hands meetings, OpenAI leadership have emphasized to employees the need to scale up safety efforts over time, and encouraged employees to familiarize themselves with alignment ideas. OpenAI's Chief Scientist, Ilya Sutskever, recently pivoted to spending 50% of his time on safety. Personal opinions ----------------- **Opinion: OpenAI leadership cares about reducing existential risk from AI.** I think that OpenAI leadership are familiar and agree with the basic case for concern and appreciate the magnitude of what's at stake. Existential risk is an important factor, but not the only factor, in OpenAI leadership's decision making. OpenAI's alignment work is much more than just a token effort. **Opinion: capabilities researchers at OpenAI have varying attitudes to existential risk.** I think that capabilities researchers at OpenAI have a wide variety of views, including some with long timelines who are skeptical of attempts to mitigate risk now, and others who are supportive but may consider the question to be outside their area of expertise. Some capabilities researchers actively look for ways to help with alignment, or to learn more about it. **Opinion: disagreements about OpenAI's strategy are substantially empirical.** I think that some of the main reasons why people in the alignment community might disagree with OpenAI's strategy are largely disagreements about empirical facts. In particular, compared to people in the alignment community, OpenAI leadership tend to put more likelihood on slow takeoff, are more optimistic about the possibility of solving alignment, especially via empirical methods that rely on capabilities, and are more concerned about bad actors developing and misusing AGI. I would expect OpenAI leadership to change their mind on these questions given clear enough evidence to the contrary. **Opinion: I am personally extremely uncertain about strategy-related questions.** I do not spend most of my time thinking about strategy. If I were forced to choose between OpenAI speeding up or slowing down its work on capabilities, my guess is that I would end up choosing the latter, all else equal, but I am very unsure. **Opinion: OpenAI's actions have drawn a lot of attention to large language models.** I think that the release of GPT-3 and the OpenAI API led to significantly increased focus and somewhat of a competitive spirit around large language models. I consider there to be advantages and disadvantages to this. I don't think OpenAI predicted this in advance, and believe that it would have been challenging, but not impossible, to foresee this. **Opinion: OpenAI is deploying models in order to generate revenue, but also to learn about safety.** I think that OpenAI is trying to generate revenue through deployment in order to directly create value and in order to fund further research and development. At the same time, it also uses deployment as a way to learn in various ways, and [about safety](https://openai.com/blog/language-model-safety-and-misuse/) in particular. 
**Opinion: OpenAI's particular research directions are driven in large part by researchers.** I think that OpenAI leadership has control over staffing and resources that affects the organization's overall direction, but that particular research directions are largely delegated to researchers, because they have the most relevant context. OpenAI would not be able to do impactful alignment research without researchers who have a strong understanding of the field. If there were talented enough researchers who wanted to lead new alignment efforts at OpenAI, I would expect them to be enthusiastically welcomed by OpenAI leadership. **Opinion: OpenAI should be focusing more on alignment.** I think that OpenAI's alignment research in general, and its scalable alignment research in particular, has significantly higher average social returns than its capabilities research on the margin. **Opinion: OpenAI is a great place to work to reduce existential risk from AI.** I think that the Alignment, RL, Human Data, Policy Research, Security, Applied Safety, and Trust and Safety teams are all doing work that seems useful for reducing existential risk from AI.
3b74edf0-e716-4560-bbfc-f67116ee8db5
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
The heritability of human values: A behavior genetic critique of Shard Theory **Overview (TL;DR):** Shard Theory is a new approach to understanding the formation of human values, which aims to help solve the problem of how to align advanced AI systems with human values (the ‘AI alignment problem’). [Shard Theory](https://www.lesswrong.com/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values) has provoked a lot of interest and discussion on LessWrong, AI Alignment Forum, and EA Forum in recent months. However, Shard Theory incorporates a relatively Blank Slate view about the origins of human values that is empirically inconsistent with many studies in behavior genetics indicating that many human values show heritable genetic variation across individuals. I’ll focus in this essay on the empirical claims of Shard Theory, the behavior genetic evidence that challenges those claims, and the implications for developing more accurate models of human values for AI alignment. **Introduction: Shard Theory as an falsifiable theory of human values** The goal of the ‘AI alignment’ field is to help future Artificial Intelligence systems become better aligned with human values. Thus, to achieve AI alignment, we might need a good theory of human values. A new approach called “[Shard Theory](https://www.lesswrong.com/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values)” aims to develop such a theory of how humans develop values.  My goal in this essay is to assess whether Shard Theory offers an empirically accurate model of human value formation, given what we know from behavior genetics about the heritability of human values. The stakes here are high. If Shard Theory becomes influential in guiding further alignment research, but if its model of human values is not accurate, then Shard Theory may not help improve AI safety.  These kinds of empirical problems are not limited to Shard Theory. Many proposals that I’ve seen for AI ‘alignment with human values’ seem to ignore most of the research on human values in the behavioral and social sciences. I’ve tried to challenge this empirical neglect of value research in four previous essays for EA Forum, on the [heterogeneity of value types](https://forum.effectivealtruism.org/posts/KZiaBCWWW3FtZXGBi/the-heterogeneity-of-human-value-types-implications-for-ai) in humans individuals, the [diversity of values across individuals](https://forum.effectivealtruism.org/posts/DXuwsXsqGq5GtmsB3/ai-alignment-with-humans-but-with-which-humans), the importance of [body/corporeal values](https://forum.effectivealtruism.org/posts/zNS53uu2tLGEJKnk9/ea-s-brain-over-body-bias-and-the-embodied-value-problem-in), and the importance of [religious values](https://forum.effectivealtruism.org/posts/YwnfPtxHktfowyrMD/the-religion-problem-in-ai-alignment).  Note that this essay is a rough draft of some preliminary thoughts, and I welcome any feedback, comments, criticisms, and elaborations. In future essays I plan to critique Shard Theory from the perspectives of several other fields, such as evolutionary biology, animal behavior research, behaviorist learning theory, and evolutionary psychology. **Background on Shard Theory** Shard Theory has been developed mostly by Quintin Pope (a computer science Ph.D. student at Oregon State University) and Alex Turner (a post-doctoral researcher at the Center for Human-Compatible AI at UC Berkeley). 
Over the last few months, they posted a series of essays about Shard Theory on LessWrong.com, including this main essay here , ‘[The shard theory of human values’](https://www.lesswrong.com/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values) (dated Sept 3, 2022), plus auxiliary essays such as: ‘[Human values & biases are not accessible to the genome’](https://www.lesswrong.com/posts/CQAMdzA4MZEhNRtTp/human-values-and-biases-are-inaccessible-to-the-genome) (July 7, 2022), ‘[Humans provide an untapped wealth of evidence about alignment](https://www.lesswrong.com/posts/CjFZeDD6iCnNubDoS/humans-provide-an-untapped-wealth-of-evidence-about)’ (July 13, 2022), ‘[Reward is not the optimizer’](https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target) (July 24, 2022), and ‘[Evolution is a bad analogy for AGI: Inner alignment](https://www.lesswrong.com/posts/FyChg3kYG54tEN3u6/evolution-is-a-bad-analogy-for-agi-inner-alignment)’ (Aug 13, 2022). [This is not a complete list of their Shard Theory writings; it’s just the set that seems most relevant to the critiques I’ll make in this essay.] Also, David Udell published this useful summary: ‘[Shard Theory: An overview’](https://www.lesswrong.com/posts/xqkGmfikqapbJ2YMj/shard-theory-an-overview) (Aug 10, 2022).  There’s a lot to like about Shard Theory. It takes seriously the potentially catastrophic risks from AI. It understands that ‘AI alignment with human values’ requires some fairly well-developed notions about where human values come from, what they’re for, and how they work. It is intellectually ambitious, and tries to integrate reinforcement learning, self-supervised predictive learning, decision theory, developmental psychology, and cognitive biases. It seeks to build some common ground between human intelligence and artificial intelligence, at the level of how complex cognitive systems develop accurate world models and useful values. It tries to be explicit about its empirical commitments and theoretical assumptions. It is open about being a work-in-progress rather than a complete, comprehensive, or empirically validated theory. It has already provoked much discussion and debate. Even if my critiques of Shard Theory are correct, and some of its key evolutionary, genetic, and psychological assumptions are wrong, that isn’t necessarily fatal to the whole Shard Theory project. I imagine some form of Shard Theory 2.0 could be developed that updates its assumptions in the light of these critiques, and that still makes some progress in developing a more accurate model of human values that is useful for AI alignment. **Shard Theory as a Blank Slate theory** However, Shard Theory includes a model of human values that is not consistent with what behavioral scientists have learned about the origins and nature of values over the last 170 years of research in psychology, biology, animal behavior, neurogenetics, behavior genetics, and other fields. The key problem is that Shard Theory re-invents a relatively ‘Blank Slate’ theory of human values. Note that no Blank Slate theory posits that the mind is 100% blank. Every Blank Slate theory that’s even marginally credible accepts that there are at least a few ‘innate instincts’ and some ‘hardwired reward circuitry’. Blank Slate theories generally accept that human brains have at least a few ‘innate reinforcers’ that can act as a scaffold for the socio-cultural learning of everything else. 
For example, even the most radical Blank Slate theorists would generally agree that sugar consumption is reinforcing because we evolved taste receptors for sweetness. The existence of a few innate reinforcement circuits was accepted even by the most radical Behaviorists of the 1920s through 1960s, and by the most ‘social constructivist’ researchers in the social sciences and humanities from the 1960s onwards. Blank Slate theorists just try to minimize the role of evolution and genetics in shaping human psychology, and strongly favor Nurture over Nature in explaining both psychological commonalities across sentient beings, and psychological differences across species, sexes, ages, and individuals. Historically, Blank Slate theories were motivated not so much by empirical evidence, as by progressive political ideologies about the equality and perfectibility of humans. (See the 2002 book [The Blank Slate](https://en.wikipedia.org/wiki/The_Blank_Slate) by Steven Pinker, and the 2000 book [Defenders of the Truth](https://www.amazon.com/Defenders-Truth-Sociobiology-Ullica-Segerstrale/dp/0192862154) by Ullica Segerstrale.) Shard Theory seems to follow in that tradition – although I suspect that it’s not so much due to political ideology, as to a quest for theoretical simplicity, and for not having to pay too much attention to the behavioral sciences in chasing AI alignment. At the beginning of their [main statement](https://www.lesswrong.com/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values) of Shard Theory, in their TL;DR, Pope and Turner include this bold statement: “Human values are not e.g. an incredibly complicated, genetically hard-coded set of drives, but rather sets of contextually activated heuristics which were shaped by and bootstrapped from crude, genetically hard-coded reward circuitry.” Then they make three explicit neuroscientific assumptions. I’ll focus on Assumption 1 of Shard Theory: “Most of the circuits in the brain are learned from scratch, in the sense of being mostly randomly initialized and not mostly genetically hard-coded.” This assumption is motivated by an argument explored [here](https://www.lesswrong.com/posts/CQAMdzA4MZEhNRtTp/human-values-and-biases-are-inaccessible-to-the-genome) that ‘human values and biases are inaccessible to the genome’. For example, Alex Turner argues “it seems intractable for the genome to scan a human brain and back out the “death” abstraction, which probably will not form at a predictable neural address. Therefore, we infer that the genome can’t directly make us afraid of death by e.g. specifying circuitry which detects when we think about death and then makes us afraid. In turn, this implies that there are a lot of values and biases which the genome cannot hardcode.” This Shard Theory argument seems to reflect a fundamental misunderstanding of how evolution shapes genomes to produce phenotypic traits and complex adaptations. The genome never needs to ‘scan’ an adaptation and figure out how to reverse-engineer it back into genes. The genetic variants simply build a slightly new phenotypic variant of an adaptation, and if it works better than existing variants, then the genes that built it will tend to propagate through the population. The flow of design information is always from genes to phenotypes, even if the flow of selection pressures is back from phenotypes to genes.
This one-way flow of information from DNA to RNA to proteins to adaptations has been called the ‘[Central Dogma of molecular biology’](https://en.wikipedia.org/wiki/Central_dogma_of_molecular_biology), and it still holds largely true (the recent hype about epigenetics notwithstanding).  Shard Theory implies that biology has no mechanism to ‘scan’ the design of fully-mature, complex adaptations back into the genome, and therefore there’s no way for the genome to code for fully-mature, complex adaptations. If we take that argument at face value, then there’s no mechanism for the genome to ‘scan’ the design of a human spine, heart, hormone, antibody, cochlea, or retina, and there would be no way for evolution or genes to influence the design of the human body, physiology, or sensory organs. Evolution would grind to a halt – not just at the level of human values, but at the level of all complex adaptations in all species that have ever evolved.  As we will see, this idea that ‘human values and biases are inaccessible to the genome’ is empirically incorrect. **A behavior genetic critique of Shard Theory** In future essays, I plan to address the ways that Shard Theory, as presently conceived, is inconsistent with findings from several other research areas: (1) evolutionary biology models of how complex adaptations evolve, (2) animal behavior models of how nervous systems evolved to act in alignment with fitness interests, (3) behaviorist learning models of how reinforcement learning and reward systems operate in animals and humans, and (4) evolutionary psychology models of human motivations, emotions, preferences, morals, mental disorders, and personality traits. For now, I want to focus on some conflicts between Shard Theory and behavior genetics research. As mentioned above, Shard Theory adopts a relatively ‘Blank Slate’ view of human values, positing that we inherit only a few simple, crude values related to midbrain reward circuitry, which are presumably universal across humans, and all other values are scaffolded and constructed on top of those. However, behavior genetics research over the last several decades has shown show that most human values that differ across people, and that can be measured reliably – including some quite abstract values associated with political, religious, and moral ideology – are moderately heritable. Moreover, many of these values show relatively little influence from ‘shared family environment’, which includes all of the opportunities and experiences shared by children growing up in the same household and culture. This means that genetic variants influence the formation of human values, and genetic differences between people explain a significant proportion of the differences in their adult values, and family environment explains a lot less about differences in human values than we might have thought. This research is based on convergent findings using diverse methods such as [twin studies](https://en.wikipedia.org/wiki/Twin_study), [adoption studies](https://en.wikipedia.org/wiki/Adoption_study), [extended twin family designs](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3228846/), [complex segregation analysis](https://en.wikipedia.org/wiki/Complex_segregation_analysis), and [genome-wide association studies](https://en.wikipedia.org/wiki/Genome-wide_association_study) (GWAS). All of these behavior genetic observations are inconsistent with Shard Theory, particularly its Assumption 1.  
Behavior genetics was launched in 1869 when [Sir Francis Galton](https://en.wikipedia.org/wiki/Francis_Galton) published his book *Hereditary Genius*, which proposed some empirical methods for studying the inheritance of high levels of human intelligence. A decade earlier, Galton’s cousin Charles Darwin had developed the theory of evolution by natural selection, which focused on the interplay of heritable genetic variance and evolutionary selection pressures. Galton was interested in how scientists might analyze heritable genetic variance in human mental traits such as intelligence, personality, and altruism. He understood that Nature and Nurture interact in very complicated ways to produce species-typical human universals. However, he also understood that it was an open question how much variation in Nature versus variation in Nurture contributed to individual differences in each trait.

Note that behavior genetics was always about explaining the factors that influence statistical variation in quantitative traits, not about explaining the causal, mechanistic development of traits. This point is often misunderstood by modern critics of behavior genetics who claim ‘every trait is an inextricable combination of Nature and Nurture, so there’s no point in trying to partition their influence.’ The mapping from genotype (the whole set of genes in an organism) to phenotype (the whole set of body, brain, and behavioral traits in an organism) is, indeed, extremely complicated and remains poorly understood. However, behavior genetics doesn’t need to understand the whole mapping; it can trace how genetic variants influence phenotypic trait variants using empirical methods such as twin, adoption, and GWAS studies.

In modern behavior genetics, the influence of genetic variants on traits is indexed by a metric called [heritability](https://en.wikipedia.org/wiki/Heritability), which can range from 0 (meaning genetic variants have no influence on individual differences in a phenotypic trait) to 1 (meaning genetic variants explain 100% of individual differences in a phenotypic trait). So-called ‘narrow-sense heritability’ includes only additive genetic effects due to the average effects of alleles; additive genetic effects are most important for predicting responses to evolutionary selection pressures – whether in the wild or in artificial selective breeding of domesticated species. ‘Broad-sense heritability’ includes additive effects plus dominance and epistatic genetic effects. For most behavioral traits, additive effects are by far the most important, so broad-sense heritability is usually only a little higher than narrow-sense heritability.

The most important result from behavior genetics is that all human behavioral traits that differ across people, and that can be measured reliably, are heritable to some degree – and often to a surprisingly high degree. This is sometimes called the [First Law of Behavior Genetics](http://faculty.umb.edu/peter_taylor/epi/turkheimer00.pdf) – not because it’s some kind of natural law that all behavioral traits must be heritable, but because the last 150 years of research has found no replicable exceptions to this empirical generalization.
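As a rough illustration of how twin studies produce these heritability estimates, here is a minimal sketch – using made-up twin correlations, purely for illustration – of Falconer’s classic formulas, which partition trait variance into additive genetic effects (A), shared family environment (C), and non-shared environment plus measurement error (E), based on how much more similar identical (MZ) twins are than fraternal (DZ) twins:

```python
def falconer_ace(r_mz: float, r_dz: float) -> dict:
    """Back-of-the-envelope ACE decomposition from twin-pair correlations.

    r_mz: trait correlation across identical (monozygotic) twin pairs
    r_dz: trait correlation across fraternal (dizygotic) twin pairs
    """
    a2 = 2 * (r_mz - r_dz)  # heritability: MZ pairs share ~100% of genes, DZ pairs ~50%
    c2 = r_mz - a2          # shared family environment
    e2 = 1 - r_mz           # non-shared environment + measurement error
    return {"A_heritability": a2, "C_shared_env": c2, "E_nonshared_env": e2}

# Hypothetical correlations for some reliably measured value (e.g. a political-attitudes scale):
print(falconer_ace(r_mz=0.75, r_dz=0.50))
# {'A_heritability': 0.5, 'C_shared_env': 0.25, 'E_nonshared_env': 0.25}
```

Real behavior genetic studies fit maximum-likelihood structural equation models rather than these back-of-the-envelope formulas, and extended designs add data from parents, siblings, and adoptees, but the core logic is the same: the more that MZ similarity exceeds DZ similarity, the larger the estimated genetic contribution to individual differences.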
Some behavioral traits such as general intelligence show very high heritability – over [0.70](https://www.cambridge.org/core/journals/twin-research-and-human-genetics/article/wilson-effect-the-increase-in-heritability-of-iq-with-age/FF406CC4CF286D78AF72C9E7EF9B5E3F) – in adults, which is about as heritable as human [height](https://www.nature.com/articles/d41586-019-01157-y). (For a good recent introduction to the ‘Top 10 replicated findings from behavior genetics’, see [this paper](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4739500/).)

**Does anybody really believe that values are heritable?**

To people who accept a Blank Slate view of human nature, it might seem obvious that human values, preferences, motivations, and moral judgments are instilled by family, culture, media, and institutions – and the idea that genes could influence values might sound absurd. Conversely, to people familiar with behavior genetics, who know that all psychological traits are somewhat heritable, it might seem obvious that human values, like other psychological traits, will be somewhat heritable. It’s unclear what proportion of people lean towards the Blank Slate view of human values, versus the ‘hereditarian’ view that values can be heritable. As a reality check, I ran this Twitter poll on Oct 17, 2022, with the results shown in this screenshot:

![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/747647dce9b0d5f82deb2efa0f56308444a14a2a2bd67553.png)

I was surprised that so many people took a slightly or strongly hereditarian view of values. Maybe the idea isn’t as crazy as it might seem at first glance. However, this poll is just illustrative of the fact that there is real variation in people’s views about this. It should not be taken too seriously as data, because it is just one informal question on social media, answered by a highly non-random sample. Only about 1.4% of my followers (1,749 out of 124,600) responded to this poll (which is a fairly normal response rate). My typical follower is an American male who’s politically centrist, conservative, or libertarian, and probably has a somewhat more hereditarian view of human nature than average. The poll’s main relevance here is in showing that a lot of people (not just me) believe that values can be heritable.

**Human traits in general are heritable**

A 2015 [meta-analysis](https://www.nature.com/articles/ng.3285.) of human twin studies analyzed 17,804 traits from 2,748 papers including over 14 million twin pairs. These included mostly behavioral traits (e.g. psychiatric conditions, cognitive abilities, activities, social interactions, social values), and physiological traits (e.g. metabolic, neurological, cardiovascular, and endocrine traits). Across all traits, average heritability was .49, and shared family environment (e.g. parenting, upbringing, local culture) typically had negligible effects on the traits. For 69% of traits, heritability seemed due solely to additive genetic variation, with no influence of dominance or epistatic genetic variation.

Heritability of human traits is generally caused by many genes that each have very small, roughly additive effects, rather than by a few genes that have big effects (see [this review](https://journals.sagepub.com/doi/abs/10.1177/1745691615617439)). Thus, to predict individual values for a given trait, molecular behavior genetics studies generally aggregate the effects of thousands of DNA variants into a [polygenic score](https://en.wikipedia.org/wiki/Polygenic_score).
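As a concrete (and deliberately tiny) sketch of what such a polygenic score is, here is the basic computation. The SNP identifiers and effect sizes below are hypothetical placeholders, not taken from any real GWAS; real scores sum over thousands to millions of variants:

```python
# Hypothetical per-allele effect sizes (GWAS 'betas') for some measured value or trait.
effect_sizes = {"rs0000001": 0.021, "rs0000002": -0.013, "rs0000003": 0.008}

# One person's genotype: how many copies (0, 1, or 2) of each effect allele they carry.
genotype = {"rs0000001": 2, "rs0000002": 0, "rs0000003": 1}

def polygenic_score(genotype: dict, effect_sizes: dict) -> float:
    """Weighted sum of allele counts across loci."""
    return sum(beta * genotype.get(snp, 0) for snp, beta in effect_sizes.items())

print(round(polygenic_score(genotype, effect_sizes), 3))  # 0.05
```

In practice the raw score is standardized against a reference population, and how well it predicts the trait depends on the size of the discovery GWAS and on how genetically similar the target individual is to that GWAS sample – points that become relevant later when I discuss AI systems using polygenic scores to predict people’s values.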
Thus, each trait is influenced by many genes. But also, each gene influences many traits (this is called [pleiotropy](https://en.wikipedia.org/wiki/Pleiotropy)). So, there is a complex [genetic architecture](https://en.wikipedia.org/wiki/Genetic_architecture) that maps from many genetic variants onto many phenotypic traits, and this can be explored using multivariate behavior genetics methods that track [genetic correlations](https://en.wikipedia.org/wiki/Genetic_correlation) between traits. (Elucidating the genetic architecture of human values would be enormously useful for AI alignment, in my opinion.) **Human values are heritable** The key point here, in relation to Shard Theory, is that ‘all human behavioral traits’ being heritable includes ‘all human values that differ across people’. Over the last few decades, behavior geneticists have expanded their focus from studying classic traits, such as general intelligence and mental disorders, to explicitly studying the heritability of human values, and values-adjacent traits. So far, behavior geneticists have found mild to moderate heritability for a wide range of values-related traits, including the following: * Food preferences are [heritable](https://www.mdpi.com/2072-6643/11/8/1735), and they are not just influenced by genes that predict basic taste or smell functions. Genes influence [heritabilities of tastes](https://www.mdpi.com/2072-6643/11/8/1735) for specific food categories such as vegetables, fruit, starchy foods, meat/fish, dairy, and snacks. [Different genes](https://www.sciencedirect.com/science/article/pii/S0950329321003037) underlie meat preferences in men versus women. Food fussiness and food neophobia are both [heritable in kids](https://acamh.onlinelibrary.wiley.com/doi/full/10.1111/jcpp.12647), and reflect a common genetic etiology. Obesity, reflecting a high reward-sensitivity for food, is about [45% heritable](https://onlinelibrary.wiley.com/doi/full/10.1002/oby.23116). * Mate preferences are somewhat heritable, including [rated importance](https://onlinelibrary.wiley.com/doi/full/10.1111/j.1558-5646.2011.01546.x) of key traits in potential partners, and [preference for partner height](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0049294). These heritable mate preferences can lead to positive genetic correlations between the preferences and with the actual traits preferred, as in [this study](https://www.sciencedirect.com/science/article/abs/pii/S1090513814000798) of height, intelligence, creativity, exciting personality, and religiosity. * Sexual values, reinforcers, and reward systems are heritable, including [sexual orientation](https://www.nature.com/articles/s41598-017-15736-4), [affectionate communication](https://www.tandfonline.com/doi/abs/10.1080/03637751.2020.1760327), [frequency of female orgasm](https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1743-6109.2011.02300.x), [extrapair mating](https://www.sciencedirect.com/science/article/abs/pii/S1090513814001317) (infidelity), [sexual jealousy](https://www.sciencedirect.com/science/article/pii/S1090513821000611), and [sexual coerciveness](https://www.tandfonline.com/doi/abs/10.1080/10683160802621925). * Parenting behaviors and values are heritable, according to a [meta-analysis](https://psycnet.apa.org/doiLanding?doi=10.1037%2Fa0034205) of 56 studies. 
Also, the shared family environment created by parents when raising their kids has many heritable components (according to studies on the ‘heritability of the environment’, and ‘the [Nature of Nurture’](https://www.taylorfrancis.com/chapters/edit/10.4324/9780203838013-8/nature-nurture-robert-plomin).) * Economic values and consumer preferences are heritable, including [consumer decision heuristics](https://academic.oup.com/jcr/article-abstract/37/6/951/1869443), vocational interests, [preferences for self-employment](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0060542), [entrepreneurship](https://www.sciencedirect.com/science/article/pii/S0883902619301247), [delay discounting](https://www.sciencedirect.com/science/article/abs/pii/S0006322314008282), [economic policy preferences](https://www.pnas.org/doi/abs/10.1073/pnas.1120666109), [investment biases](https://www.sciencedirect.com/science/article/abs/pii/S0304405X14000889), [socio-economic status](https://www.nature.com/articles/s41562-021-01053-4), and [lifetime earnings](https://link.springer.com/article/10.1007/s10888-019-09413-x). * Moral values are [heritable](https://journals.sagepub.com/doi/abs/10.1177/1948550611412793), including [moral intuitions](https://link.springer.com/article/10.1007/s12110-020-09380-7), [cognitive empathy](https://www.nature.com/articles/mp2017122), [justice sensitivity](https://www.nature.com/articles/s41598-022-09253-2), [prosociality](https://www.sciencedirect.com/science/article/pii/S2352250X15001323), [self-control](https://www.sciencedirect.com/science/article/pii/S0149763418307905), [attitudes towards dishonesty](https://www.sciencedirect.com/science/article/abs/pii/S016726811300125X), and [vegetarianism](https://www.sciencedirect.com/science/article/pii/S095032932200180X). * Immoral behaviors and values are also heritable, including [violent crime](https://link.springer.com/article/10.1007/s10519-011-9483-0), [sexual coercion](https://academic.oup.com/ije/article/44/2/713/753089), [white-collar crime](https://www.cambridge.org/core/journals/psychological-medicine/article/abs/swedish-national-twin-study-of-criminal-behavior-and-its-violent-whitecollar-and-property-subtypes/0D9A88185ED0FD5525A5EBD5D2EBA117), and [juvenile delinquency](https://link.springer.com/article/10.1007/s10578-020-01119-w). 
* Political values are about [40% heritable](https://www.pnas.org/doi/abs/10.1073/pnas.1818711116); see [2021 review here](https://journals.sagepub.com/doi/abs/10.1177/14789299211053780); these heritable political values include [conservatism](https://www.cambridge.org/core/journals/journal-of-experimental-political-science/article/abs/genes-ideology-and-sophistication/91C7C343BBA8801732F62E7D55B16676), [liberalism](https://www.journals.uchicago.edu/doi/abs/10.1017/S0022381610001015), [social dominance orientation](https://www.pnas.org/doi/abs/10.1073/pnas.1818711116), [political engagement](https://journals.sagepub.com/doi/abs/10.1177/1065912917698045), [political trust](https://www.elgaronline.com/view/edcoll/9781782545101/9781782545101.00020.xml), [political interest](https://www.cambridge.org/core/journals/politics-and-the-life-sciences/article/genes-personality-and-political-behavior/CE6A2F64A262E29F396893965E286FAF), [political sophistication](https://www.cambridge.org/core/journals/twin-research-and-human-genetics/article/genetic-basis-of-political-sophistication/9E69BA562FEF42FA4F7117ED1E3FF0EE), [military service](https://journals.sagepub.com/doi/abs/10.1177/0095327X18765449), [foreign policy preferences](https://www.cambridge.org/core/journals/twin-research-and-human-genetics/article/heritability-of-foreign-policy-preferences/61AD34FFC1B0FF174FDFC6AA819050D4), [civic engagement](https://royalsocietypublishing.org/doi/full/10.1098/rstb.2015.0015), and [voter turnout](https://www.jstor.org/stable/23260396). * Religious values are heritable, including overall [religiosity](https://link.springer.com/article/10.1007/s10519-010-9388-3), [existential certainty](https://www.sciencedirect.com/science/article/abs/pii/S0092656613000500), [obedience to traditional authority](https://www.sciencedirect.com/science/article/abs/pii/S0191886913001384), and [apostasy](https://www.cambridge.org/core/journals/twin-research-and-human-genetics/article/is-apostasy-heritable-a-behavior-genetics-study/2F93769FEBAACB2FC4AFC502B123BA83). In addition, the [Big Five personality traits](https://en.wikipedia.org/wiki/Big_Five_personality_traits) are moderately heritable (about 40%) according to this [2015 meta-analysis](https://psycnet.apa.org/record/2015-20360-001?doi=1) of 134 studies.  Each personality trait is centered around some latent values that represent how rewarding and reinforcing various kinds of experiences are. For example, people higher in Extraversion value social interaction and energetic activity more, people higher in Openness value new experiences and creative exploration more, people higher in Agreeableness value friendliness and compassion more, people higher in Conscientiousness value efficiency and organization more, and people higher in Neuroticism value safety and risk-aversion more. Each of these personality traits is heritable, so these values are also heritable. In fact, personality traits might be central to the genetic architecture of human values. Moreover, common mental disorders, which are [all heritable](https://www.nature.com/articles/s41380-017-0010-4), can be viewed as embodying different values. [Depression](https://en.wikipedia.org/wiki/Depression_(mood)) reflects low reward sensitivity and disengagement from normally reinforcing behaviors. [Anxiety disorders](https://en.wikipedia.org/wiki/Anxiety_disorder) reflect heightened risk-aversion, loss aversion, and hyper-sensitivity to threatening stimuli; these concerns can be quite specific (e.g. social anxiety disorder vs. 
specific phobias vs. panic disorder). The [negative symptoms](https://en.wikipedia.org/wiki/Schizophrenia#Negative_symptoms) of schizophrenia reflect reduced reward-sensitivity to social interaction (asociality), speech (alogia), pleasure (anhedonia), and motivation (avolition). The ‘[Dark Triad](https://en.wikipedia.org/wiki/Dark_triad)’ personality traits (Machiavellianism, Narcissism, Psychopathy) reflect a higher value placed on personal status-seeking and short-term mating, and a lower value placed on other people’s suffering. A [2010 review paper](https://www.researchgate.net/publication/44589642_Psychiatric_'diseases'_versus_behavioral_disorders_and_degree_of_genetic_influence) showed that heritabilities of psychiatric ‘diseases’ (such as schizophrenia or depression) that were assumed to develop ‘involuntarily’ are about the same as heritabilities of ‘behavioral disorders’ (such as drug addiction or anorexia) that were assumed to reflect individual choices and values. Specific drug dependencies and addictions are all heritable, reflecting the differential rewards that psychoactive chemicals have in different brains. Genetic influences have been especially well-studied in [alcoholism](https://link.springer.com/article/10.1007/s11920-019-1008-1), [cannabis use](https://www.cambridge.org/core/journals/psychological-medicine/article/abs/overlap-of-heritable-influences-between-cannabis-use-disorder-frequency-of-use-and-opportunity-to-use-cannabis-trivariate-twin-modelling-and-implications-for-genetic-design/A758DD589C6C621BF3C680E0609CD026), [opiate addiction](https://www.sciencedirect.com/science/article/pii/S2352250X1830112X), [cocaine addiction](https://www.nature.com/articles/s41386-018-0008-x), and [nicotine addiction](https://www.tandfonline.com/doi/full/10.31887/DCNS.2017.19.3/pgorwood). Other kinds of ‘behavioral disorders’ also show [heritability](https://link.springer.com/chapter/10.1007/978-3-030-36391-8_63), including [gambling](https://www.frontiersin.org/articles/10.3389/fpsyg.2017.02121/full), [compulsive Internet use](https://onlinelibrary.wiley.com/doi/full/10.1111/adb.12218), and [sugar addiction](https://academic.oup.com/ajcn/article-abstract/104/4/1144/4557113) – and each reflects a genetic modulation of the relevant reward/reinforcement systems that govern responses to these experiences.

**Heritability for behavioral traits tends to increase, not decrease, during lifespan development**

Shard Theory implies that genes shape human brains mostly before birth, setting up the basic limbic reinforcement system, and then Nurture takes over, such that heritability should decrease from birth to adulthood. This is exactly the opposite of what we typically see in [longitudinal behavior genetic studies](https://psycnet.apa.org/record/2014-08122-001?doi=1) that compare heritabilities across different ages. Often, heritabilities for behavioral traits increase rather than decrease as people mature from birth to adulthood. For example, the heritability of general intelligence [increases gradually](https://link.springer.com/article/10.1023/A:1019772628912) from early childhood through young adulthood, and genes, rather than shared family environment, explain most of the continuity in intelligence across ages. A [2013 meta-analysis](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3954471/) confirmed increasing heritability of intelligence between ages 6 months and 18 years.
A [2014 review](https://www.nature.com/articles/mp2014105) observed that heritability of intelligence is about 20% in infancy, but about 80% in adulthood. This increased heritability with age has been called ‘[the Wilson Effect’](https://www.cambridge.org/core/journals/twin-research-and-human-genetics/article/wilson-effect-the-increase-in-heritability-of-iq-with-age/FF406CC4CF286D78AF72C9E7EF9B5E3F) (after its discoverer Ronald Wilson), and it is typically accompanied by a decrease in the effect of shared family environment.  Increasing heritability with age is not restricted to intelligence. [This study](https://pubmed.ncbi.nlm.nih.gov/16953685/) found increasing heritability of prosocial behavior in children from ages 2 through 7, and decreasing effects of shared family environment. Personality traits show relatively stable genetic influences across age, with small increases in genetic stability offsetting small decreases in heritability, according to this [meta-analysis](https://pubmed.ncbi.nlm.nih.gov/24956122/) of 24 studies including 21,057 sibling pairs. A frequent finding in longitudinal behavior genetics is that the stability of traits across life is better explained by the [stability of genes](https://www.sciencedirect.com/science/article/pii/B9780128046746000296) across life, than by the persistence of early experiences, shared family environment effects, or contextually reinforced values.  More generally, note that heritability does not just influence ‘innate traits’ that are present at birth. Heritability also influences traits that emerge with key developmental milestones such as social-cognitive maturation in middle childhood, sexual maturation in adolescence, political and religious maturation in young adulthood, and parenting behaviors after reproduction. Consider some of the findings in the previous section, which are revealed only after individuals reach certain life stages. The heritability of mate preferences, sexual orientation, orgasm rate, and sexual jealousy are not typically manifest until puberty, so are not ‘innate’ in the sense of ‘present at birth’. The heritability of voter behavior is not manifest until people are old enough to vote. The heritability of investment biases is not manifest until people acquire their own money to invest. The heritability of parenting behaviors is not manifest until people have kids of their own. It seems difficult to reconcile the heritability of so many late-developing values with the Shard Theory assumption that genes influence only a few crude, simple, reinforcement systems that are present at birth. **Human Connectome Project studies show that genetic influences on brain structure are not restricted to ‘subcortical hardwiring’** Shard Theory seems to view genetic influences on human values as being restricted mostly to the subcortical limbic system. Recall that Assumption 1 of Shard Theory was that “The cortex is basically (locally) randomly initialized.”   Recent studies in neurogenetics show that this is not accurate. Genetically informative studies in the Human Connectome Project [show](https://direct.mit.edu/netn/article/02/02/175/2208/Heritability-of-the-human-connectome-A) pervasive heritability in neural structure and function across all brain areas, not just limbic areas. 
A recent [review](https://www.sciencedirect.com/science/article/pii/S1053811921008430) shows that genetic influences are quite strong for global white-matter microstructure and anatomical connectivity between brain regions; these effects pervade the entire neocortex, not just the limbic system. Note that these results based on brain imaging include not just the classic twin design, but also genome-wide association studies, and studies of gene expression using transcriptional data. Another [study](https://elifesciences.org/articles/20178) showed that genes, rather than shared family environment, played a more important role in shaping connectivity patterns among 39 cortical regions. Genetic influences on the brain’s connectome are often [modulated by age and sex](https://www.biorxiv.org/content/10.1101/2020.12.09.417725v2.abstract) – in contrast to Shard Theory’s implicit model that all humans, of all ages, and both sexes, share the same subcortical hardwiring. Another [study](https://www.sciencedirect.com/science/article/pii/S1053811922003950) showed high heritability for how the brain’s connectome transitions across states through time – in contrast to Shard Theory’s claim that genes mostly determine the static ‘hardwiring’ of the brain. It should not be surprising that genetic variants influence all areas of the human brain, and the values that they embody. Analysis of the [Allen Human Brain Atlas](https://www.sciencedirect.com/science/article/abs/pii/S1053811919300114), a map of gene expression patterns throughout the human brain, shows that over [80% of genes](https://www.nature.com/articles/s41598-017-00952-9) are expressed in at least one of 190 brain structures studied. Neurogenetics research is making [rapid progress](https://www.science.org/doi/abs/10.1126/science.aat8464) on characterizing the gene regulatory network that governs human brain development – including neocortex. This is also helping genome-wide association studies to discover and analyze the millions of [quantitative trait loci](https://en.wikipedia.org/wiki/Quantitative_trait_locus) (minor genetic variants) that influence individual differences in brain development. Integration of the Human Connectome Project and the Allen Human Brain Atlas reveals [pervasive heritability](https://onlinelibrary.wiley.com/doi/full/10.1111/gbb.12537) for myelination patterns in human neocortex – which directly contradicts Shard Theory’s Assumption 1 that “Most of the circuits in the brain are learned from scratch, in the sense of being mostly randomly initialized and not mostly genetically hard-coded.”

**Behavioral traits and values are also heritable in non-human animals**

A 2019 [meta-analysis](https://academic.oup.com/jhered/article/110/4/403/5497135) examined 476 heritability estimates in 101 publications across many species, and across a wide range of 11 behavioral traits – including activity, aggression, boldness, communication, exploration, foraging, mating, migration, parenting, social interaction, and other behaviors. Overall average heritability of behavior was 0.24. (This may sound low, but remember that empirical heritability estimates are limited by the measurement accuracy for traits, and many behavioral traits in animals can be measured with only modest reliability and validity.) Crucially, heritability was positive for every type of behavioral trait, was similar for domestic and wild species, was similar for field and lab measures of behavior, and was just as high for vertebrates as for invertebrates.
Also, average heritability of behavioral traits was just as high as average heritability of physiological traits (e.g. blood pressure, hormone levels) and life history traits (e.g. age at sexual maturation, life span), and was only a bit lower than the heritability for morphological traits (e.g. height, limb length).

Note that most of these behavioral traits in animals involve ‘values’, broadly construed as reinforcement or reward systems that shape the development of adaptive behavior. For example, ‘activity’ reflects how rewarding it is to move around a lot; ‘aggression’ reflects how rewarding it is to attack others; ‘boldness’ reflects how rewarding it is to track and investigate dangerous predators; ‘exploratory behavior’ reflects how rewarding it is to investigate novel environments; ‘foraging’ reflects how rewarding it is to find, handle, and consume food; ‘mating’ reflects how rewarding it is to do mate search, courtship, and copulation; ‘parental effort’ reflects how rewarding it is to take care of offspring; and ‘social behavior’ reflects how rewarding it is to groom others or to hang around in groups. In other words, every type of value that can vary across individual animals, and that can be reliably measured by animal behavior researchers, seems to show positive heritability, and heritability of values is just as high in animals with complex central nervous systems (vertebrates) as in animals with simpler nervous systems (invertebrates).

**So what if human values are heritable?**

You might be thinking, OK, all this behavior genetics stuff is fine, and it challenges a naïve Blank Slate model of human nature, but what difference does it really make for Shard Theory, or for AI alignment in general?

Well, Shard Theory’s authors certainly seem to think it matters. Assumption 1 in Shard Theory is presented as foundational to the whole project (although I’m not sure it really is). Shard Theory repeatedly talks about human values being built up from just a few, crude, simple, innate, species-typical reinforcement systems centered in the midbrain (in contrast to the rich set of many, evolved, adaptive, domain-specific psychological adaptations posited by evolutionary psychology). Shard Theory seems to allow no role for genes influencing value formation after birth, even at crucial life stages such as middle childhood, sexual maturation, and parenting. More generally, Shard Theory seems to underplay the genetic and phenotypic diversity of human values across individuals, and seems to imply that humans have only a few basic reinforcement systems in common, and that all divergence of values across individuals reflects differences in family, socialization, culture, and media exposure.

Thus, I think that Shard Theory has some good insights and some promise as a research paradigm, but I think it needs some updating in terms of its model of human evolution, genetics, development, neuroscience, psychology, and values.

**Why does the heritability of human values matter for AI alignment?**

Apart from Shard Theory, why does it matter for AI alignment if human values are heritable? Well, I think it might matter in several ways.

First, polygenic scores for value prediction. In the near future, human scientists and AI systems will be able to predict the values of an individual, to some degree, just from their genotype.
As GWAS research discovers thousands of new genetic loci that influence particular human values, it will become possible to develop polygenic scores that predict someone’s values given their complete sequenced genome – even without knowing anything else about them. Polygenic scores to predict intelligence are already [improving](https://www.sciencedirect.com/science/article/abs/pii/S0160289621000143) at a rapid rate. Polygenic value prediction would require large sample sizes of sequenced genomes linked to individuals’ preferences and values (whether self-reported or inferred behaviorally from digital records), but it is entirely possible given current behavior genetics methods. As the [cost](https://sequencing.com/education-center/whole-genome-sequencing/whole-genome-sequencing-cost) of whole-genome sequencing falls below $1,000, and the medical benefits of sequencing rise, we can expect hundreds of millions of people to get genotyped in the next decade or two. AI systems could request free access to individual genomic data as part of standard terms and conditions, or could offer discounts to users willing to share their genomic data in order to improve the accuracy of their recommendation engines and interaction styles. We should expect that advanced AI systems will typically have access to the complete genomes of the people they interact with most often – and will be able to use polygenic scores to translate those genomes into predicted value profiles.

Second, familial aggregation of values. Heritability means that values of one individual can be predicted somewhat by the values of their close genetic relatives. For example, learning about the values of one identical twin might be highly predictive of the values of the other identical twin – even if they were separated at birth and raised in different families and cultures. This means that an AI system trying to understand the values of one individual could start from the known values of their parents, siblings, and other genetic relatives, as a sort of maximum-likelihood familial Bayesian prior. An AI system could also take into account developmental behavior genetic findings and life-stage effects – for example, an individual’s values at age 40 after they have kids might be more similar in some ways to those of their own parents at age 40, than to themselves as they were at age 20.

Third, the genetic architecture of values. For a given individual, their values in one domain can sometimes be predicted by values in other domains. Values are not orthogonal to each other; they are shaped by genetic correlations across values. As behavior genetics researchers develop a more complete genetic architecture of values, AI systems could potentially use this to infer a person’s unknown values from their known values. For example, their consumer preferences might predict their political values, or their sexual values might predict their religious values.

Fourth, the authenticity of values. Given information about an individual’s genome, the values of their close family members, and the genetic architecture of values, an AI system could infer a fairly complete expected profile of values for that individual, at each expected life-stage. What if the AI discovers that there’s a big mismatch between an individual’s ‘genetic prior’ (their values as predicted from genomic and family information), and their current stated or revealed values?
That might be evidence that the individual has heroically overcome their genetic programming through education, enlightenment, and self-actualization. Or it might be evidence that the individual has been manipulated by a lifetime of indoctrination, mis-education, and propaganda that has alienated them from their instinctive preferences and morals. The heritability of values raises profound questions about the authenticity of human values in our credentialist, careerist, consumerist, media-obsessed civilization. When AI systems are trying to align with our values, but our heritable values don’t align with our current stated cultural values (e.g. this month’s fashionable virtue signals), which should the AI weigh most heavily?

**Conclusion**

If we’re serious about AI alignment with human values, we need to get more serious about integrating empirical evidence about the origins, nature, and variety of human values. One recent attempt to ground AI alignment in human values – Shard Theory – has some merits and some interesting potential. However, this potential is undermined by Shard Theory’s empirical commitment to a fairly Blank Slate view of human value formation. That view is inconsistent with a large volume of research in behavior genetics on the heritability of many human values. By taking genetic influences on human values more seriously, we might be able to improve Shard Theory and other approaches to AI safety, and we might identify new issues in AI alignment such as polygenic scores for value prediction, familial aggregation of values, and the genetic architecture of values. Finally, a hereditarian perspective raises the thorny issue of which of our values are most authentic and most worthy of being aligned with AI systems – the ones our genes are nudging us towards, the ones our parents taught us, the ones that society indoctrinates us into, or the ones that we ‘freely choose’ (whatever that means).

**Appendix 1: Epistemic status of my arguments**

I’m moderately confident that some key assumptions of Shard Theory as currently presented are not empirically consistent with the findings of behavior genetics, but I have very low confidence about whether or not Shard Theory can be updated to become consistent, and I have no idea yet what that update would look like. As a newbie AI alignment researcher, I’ve probably made some errors in my understanding of the more AI-oriented elements of Shard Theory. I worked a fair amount on neural networks, genetic algorithms, autonomous agents, and machine learning from the late 1980s through the mid-1990s, but I’m still getting up to date with more recent work on deep learning, reinforcement learning, and technical alignment research.

As an evolutionary psychology professor, I’m moderately familiar with behavior genetics methods and findings, and I’ve published several papers using behavior genetics methods. I’ve been thinking about behavior genetics issues since the late 1990s, especially in relation to human intelligence. I taught a course on behavior genetics in 2004 (syllabus [here](https://www.primalpoly.com/s/bg-syllabus-2004.doc)). I did a sabbatical in 2006 at the Genetic Epidemiology Center at QIMR in Brisbane, Australia, run by [Nick Martin](https://scholar.google.com/citations?user=Ba2kwtkAAAAJ&hl=en&oi=ao).
We published two behavior genetics studies, one in [2011](https://www.sciencedirect.com/science/article/abs/pii/S1743609515336304) on the heritability of female orgasm rates, and one in [2012](https://www.cambridge.org/core/journals/twin-research-and-human-genetics/article/heritability-and-genetic-correlates-of-mobile-phone-use-a-twin-study-of-consumer-behavior/56022F02DE9EDBE7A79607E719B652DC) on the heritability of talking and texting on smartphones. I did a 2007 [meta-analysis](https://www.sciencedirect.com/science/article/abs/pii/S0160289606001073) of brain imaging data to estimate the coefficient of additive genetic variance in brain size. I also published a couple of papers in 2008 on genetic admixture studies, such as [this](https://onlinelibrary.wiley.com/doi/abs/10.1002/ajpa.20945). However, I’m not a full-time behavior genetics researcher, and I’m not actively involved in the large international genetics consortia that dominate current behavior genetics studies. Overall, I’m highly confident in the key lessons of behavior genetics (e.g. all psychological traits are heritable, including many values; shared family environment has surprisingly small effects on many traits). I’m moderately confident in the results from meta-analyses and large-scale international consortia studies. I’m less confident in specific heritability estimates from individual papers that haven’t yet been replicated.
b5a5516c-97df-4ff2-b57a-3e2fa091286e
trentmkelly/LessWrong-43k
LessWrong
Many arguments for AI x-risk are wrong The following is a lightly edited version of a memo I wrote for a retreat. It was inspired by a draft of Counting arguments provide no evidence for AI doom. I think that my post covers important points not made by the published version of that post. I'm also thankful for the dozens of interesting conversations and comments at the retreat. I think that the AI alignment field is partially founded on fundamentally confused ideas. I’m worried about this because, right now, a range of lobbyists and concerned activists and researchers are in Washington making policy asks. Some of these policy proposals seem to be based on erroneous or unsound arguments.[1] The most important takeaway from this essay is that the (prominent) counting arguments for “deceptively aligned” or “scheming” AI provide ~0 evidence that pretraining + RLHF will eventually become intrinsically unsafe. That is, that even if we don't train AIs to achieve goals, they will be "deceptively aligned" anyways. This has important policy implications. ---------------------------------------- Disclaimers: 1. I am not putting forward a positive argument for alignment being easy. I am pointing out the invalidity of existing arguments, and explaining the implications of rolling back those updates. 2. I am not saying "we don't know how deep learning works, so you can't prove it'll be bad." I'm saying "many arguments for deep learning -> doom are weak. I undid those updates and am now more optimistic." 3. I am not covering training setups where we purposefully train an AI to be agentic and autonomous. I just think it's not plausible that we just keep scaling up networks, run pretraining + light RLHF, and then produce a schemer.[2] Tracing back historical arguments In the next section, I'll discuss the counting argument. In this one, I want to demonstrate how often foundational alignment texts make crucial errors. Nick Bostrom's Superintelligence, for example: > A range of different methods can be used to
98845a86-f105-472f-abcb-54f6bbf214bf
trentmkelly/LessWrong-43k
LessWrong
Meetup : Reading Weekly Meetups Discussion article for the meetup : Reading Weekly Meetups WHEN: 14 October 2015 05:00:00PM (+0100) WHERE: Reading, England The Reading meetup is looking likely to be a weekly thing, so we'll be meeting in the same place at the same time as last week. We may use the time to discuss a better place and time to hold the meetups, as this doesn't work well for everyone. For now, we're still in the Starbucks next to Vue cinema and the river Kennet, in the Oracle Shopping Centre (Riverside Entrance). I (amongst others) will be there from 5pm, with a whiteboard with LessWrong written on it and the drawing of Clippy from last week. In the event that introductions and subsequent interesting discussions don't take up all our time, I have "motivation and long term goals" jotted down as a conversation topic. Hopefully anyone who missed the last one will be able to come to this one, so we'll be able to all introduce ourselves and discuss an optimal scheduling as a group. I am a bad meetup organiser for not putting this up sooner, and for that I am sorry. If you couldn't make it to either but you're in the area, please do comment and let us know a time that would be better, and we'll try and take you into account! https://www.google.co.uk/maps/place/The+Oracle+Shopping+Centre/@51.45308,-0.971488,3a,75y,90t/data=!3m8!1e2!3m6!1s7305926!2e1!3e10!6s%2F%2Fstorage.googleapis.com%2Fstatic.panoramio.com%2Fphotos%2Fsmall%2F7305926.jpg!7i2048!8i1536!4m2!3m1!1s0x48769b15e736aff1:0x91a4b0447e720259!6m1!1e1 <--- As last week, if you're lost, then if the area you're in looks like this, you are in vaguely the right place! Discussion article for the meetup : Reading Weekly Meetups
17845aa0-e6b6-4c0e-9fe4-74fe0baa1ae1
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
250 A Generalist Agent 1 hello and welcome to session 250 in the aisafety.com reading group tonight we'll be discussing the first half of the article a generalist agent by scott reed and others this is a work done by deepmind scott reed is the primary author but there is a team of 21 people um and i'm not gonna uh find their pictures find all of their pictures it's published very recently so it's like a couple of weeks old at most um the word gato as far as i could tell is it's not described why they chose that name it means cat in spanish and um like it's pro not an acronym uh i could at least generalists probably an agent that's like ga and then you could she is probably a transformer and maybe optimizer or something like that i couldn't find anything else so i don't think it's an acronym of much mud i also don't really think it's an agent like um i realize some people say everything that has to do with reinforcement learning is an agent um but uh i i don't prefer such a limited like most people when they hear talk about an agent they don't think something like um like uh gator in fact so let's start out with a few reactions to this paper it's been very mixed someone like uh daniel kukotal was willing to call this indeed subhuman level agi that we do in fact have agi here other people like nostalgia bryce made some uh very dismissive comments saying this work is like totally what we could have expected um and was really underwhelmed but one thing that certainly was not underwhelmed was the prediction markets because around the time when this was uh uh revealed the prediction markets for when we would have weak agi moved uh dramatically to become dramatically shorter going from 2048 you might be able to see here in this graph it's on an uh logarithmic curve here so it looks like it didn't go that far down but it's actually from 2042 to 2028 which is a rather dramatic shortening of timelines um i looked a bit around what people said to that and a lot of the people said this was a crazy overreaction and mostly because uh not because it changed very that much but because uh saying this could only happen 2042 was already uh crazy so the market is correcting in some way i must admit i i looked more into the um the details of this bed in particular and one of the things required for weak agi is passing the turing test so you need to be able to follow a scenario somewhat like what alan turing originally envisioned and that probably requires more than human level intelligence to do and also in fact it requires winning the lutner price and that's the price that's been defunct for the past couple of years and people according to wikipedia people scorn of this price and that's probably why they don't give that out any longer so uh this uh uh this uh prediction here probably also to some extent reflects the probability that it will never be one because the lutheran price will not be uh reinstated or something like that one neural network to rule them all that's my quote and not one from this uh from this paper um the the paper the introduction four times in a row states that gato does a number of things and it uses the same neural network for all of these tasks and it's very important for them to say that it's the same neural network and why is that a good idea well you avoid all of the hand crafting that you normally have to do when you use uh for different things and there is you don't need to determine that good balances you get a lot more training data if you can just have one big model and then 
you can train on literally anything and then you get a more diverse training and of course it's a model that can be reused meaning that the total amount of computation to use is much less those are the five reasons given in the payment i think things like performance out of distribution is probably a lot more um interesting and probably part of the reason why we care about ati and of course the key thing that enables this is that it does in fact work so you take in more diverse training from everywhere and then you keep getting uh better results on all the tasks the scaling hypothesis that we've seen so many times they have a number of tasks that they do and when they just write it out it seemed really really diverse you can breathe read here all the things it can do both in reality and in simulations and games i couldn't find any specific um reasoning why they chose the tasks that they that they chose um so i think mostly they like maybe they had someone who was really good at the bodies and then thought i will do some robotic tasks or something like that uh and of course the the main claim that they're making is uh or one of the things they're also showing that i won't go too much into details with tonight is that this that if you add more data and fine tune it then it becomes even with a little extra fine-tuning data dramatically more capable they have this statement natural language is a common grounding across otherwise incompatible embodiments and that looks kind of nice because obviously they're using a language model to do things like control robots and like white language a part of that i will later uh argue that this might not be entirely correct or and of course another thing that we should note and also that makes it somewhat less of an agent is that they in fact did not use reinforcement learning they used supervised learning and learn from that so um they could of course have used reinforcement learning and i think it's just a matter of convenience that they had some expert trajectories and then they just used those um but that is they didn't do any kind of self-play or anything like that finally in the introduction they have this statement we hypothesize gateway can be effectively scaled towards covering any task of interest and that certainly looks like a ati a claim that uh an explicit claim from deepmind that this is in fact an agi um this is kind of like an intuitive sense that i have from this paper mostly not mostly from reading other deep mind papers in fact other deep mind papers in my intuitive sense are very anti-hype trying to avoid making statements like this statement in fact um but um but i can't you know it's it's hard for me to if someone asked me to please point out the places in other deep mind papers that are less hype than they could be um like i can't really and certainly a lot of other research is is quite high so it's more like regressing to the mean let's talk about this skater model and all the things that it does the overall guiding principle is to get as much data as possible and uh as uh as varied data as possible um and they put all this very data somehow into just standard tokens which is you know mostly mostly words and the the way they do this will get back to the details later but i would call that naive of course it's really pretentious of me to say this research is naive in the sense that i could not do it myself um but um there doesn't seem to be any um big ideas there are a number of small ideas and we'll get to some of the small ideas 
later but um but it's certainly not something that seems groundbreaking and and the way they are using this model is not something that is reminiscent of um reinforcement learners or anything like that but mostly like a language model they're just taking a language model and trying to use it as an agi and say hey it works uh just by coincidence or maybe not entirely like coincidence but it's not designed with any kind of um deliberate thoughts about agi it's 1.2 billion parameters because and that's a very small model because they want to use to control robots in real time and so that means when you query the model you need to have a very fast answer um for comparison gp2 was larger than this model than gator and qt3 of course was more than 100 times larger so there is a both a potential for it to be scaled up and also uh like when you look at how impressive it is in things like text generation and try to compare it you need to realize that it's much smaller than gp3 let's talk about tokenization um the way they uh in code it seems like every half year there is like a new standard way to do it that's just a tiny bit smaller and now we're doing using a method that looks at 32 000 subwords and we maps those to integers and images are also like six divided into patches and normalized and um they have a uh for atari button presses they uh make those into words in a rather interesting way and i'll try to explain how they do that so first they say they do it in row major order so if you have like uh nine values you can either this is row major order or and this is column major aura and they're using the top one and let's take an atari controller this is an atari controller and if you squint you can see like there is a button up and up and to the left and to the left and down to the left and down and then in the middle there is in fact a button that can be clicked or not clicked so if you imagine that you are holding the controller stick to the left and pressing down the button at the same time that corresponds to zero zero zero and then one one zero zero zero zero so in that way they turn atari control inputs into uh into integers they also need for uh for robotics they need things like water movements and where is the robot's body proprioception um and the way they do that is they take something continuous value and in a use some tricks to put map those into some special um words so uh the first from 0 to 32 000 that's text and above thirty two thousand uh and two thirty three thousand and twenty four they uh that's the robotics and that is in fact the the way the thing that makes me think this is not quite as general as a model as you think because this way of segregating the input into two parts makes me think that uh you know when they previously said that uh they're turning everything into language then that's not entirely true because they are using in fact um different values for this part um so so it feels more like there are two neural networks pushed together rather than um one neural network doing both um let's talk about the last function which is of course specifying the loss function is a crucial thing for determining how a how to train a neural network this is in fact a picture from uh existing latent knowledge that i've chosen here and you might remember this is a patient network with statements like it is raining and the sprinkler is on and there is water on the lawn and i should get an umbrella and things like that so you imagine there are some values here um that are 
propagated um and let's say you want to encode this kind of knowledge so how do you calculate the probability let's say the joint probability that this is true this is true this is true and this is true how do we calculate that well that is you can write that as a probability of a1 a2 a3 and a4 right and how do you calculate that you can calculate that using the chain rule and the chain rule looks something like this so you have the probability that a4 is true given that the others are true multiplied by by the probability that the preceding ones are true and this of course you can if you want to calculate this well then you can use the chain rule again to get that's a three uh uh given these two uh multiplied by the probability of these two together and then you end up with something like this okay so this is just a motivating example of how to do it for four probabilities let's take the logarithm of all this because we don't like for practical reasons we don't like to multiply we much prefer to take the logarithm and just add things okay so what's the uh log of the probability from uh s1 to sl well that's the the sum of uh the probability uh s one uh and then from is one to one and then uh this index becomes two three four uh and so this is basically using the chain rule okay and then using this um this equation we they plug it into and get a loss function for a policy and a batch size and for if we just take the batch size here then you can see this is in fact the uh the statement up here the probability that you go all the way down in this tree in the bayesian network and there is a masking function this masking function uh ensures that there are some parts of the output that we don't actually um uh train on and we'll get back to later why we don't want to train on that but this equation here is the actual uh loss function for training the neural network so before i can explain the um the masking function we need to look at how data is actually flattened into uh into the input in a batch so there's some text it could be like i'm going to london or you know whatever people have posted on the internet there could be some robotics proprioception and continuous action um and there are things like images with questions and atari games and all this are put into some batteries you can see here and this is fit into data and then we normally engage with the loss function we need to add here they mask some of the uh the inputs like if you imagine here the uh atari then we want to train on the controls but what we predict the screen to be isn't actually all that interesting or not something we should train on so we mask that out in order to get our last function there are some more details about the training they choose transformers for simplicity and scalability scalability not learning something that just one was not good at uh i'll leave it to you to decide whether the transformer architecture is actually simple like it seems to me like that's not the case but it's easy for them to implement because uh you don't actually have to implement that uh because other people have implemented it for you so uh yeah like if you want to use a transformer then that's not very very difficult right because other people have done it and some more just know about the parameters for the transformers and sort of other details about how they do that and they have some prompts uh always wants demonstrations the hardware they're using is a bit more interesting so they're using a 16x16 uh transfer processing unit 
version 3 and version three why are they not using the tpu version four well the conspirational part of me would say that well the tbu falls are uh busy build uh training a uh online super intelligence or something that they think they can use to take over the world um in another hypothesis less conspirator it's something that happens quite often that people have some kind of model and then they spend half a year and even a year before they get around to publishing it that's certainly i think that happens um another thing that i think is perhaps most likely is that they simply don't care and i'll explain why in just a moment the time they used to train this was uh four days so if you currently have 256 views running for next six hours that comes out to around 25 000 uh gpu hours and um in uh google will rent you these new hours for two dollars meaning that even i mean i can probably get them at cheaper than but even then we're talking about a price point of like um fifty thousand dollars and for people with 21 um uh authors i think this is you know peanuts really the training costs are totally trivial they trained it for one million steps i couldn't find the stop criteria being used but yeah and there are some more details about this lens of course interesting if you want to reproduce this um but might not be interesting for us likewise there's here's a more description of how they uh the deployment works in atari games um i don't think i'll go through that the data sets i won't go through that in detail but one thing you'll notice is that there are vision and language tasks and there are control tasks and the control tasks are in fact 85 percent and there seems to be like a decent spread over different tasks and also in the vision language mostly is a massive text i thought that was i noticed a um dataset called the align dataset and i thought hey they're actually doing alignment work and wonderful what's the align uh dataset and it turns out to be something with uh you know image recognition unfortunately and that has nothing to do with alignment um so that's a bit sad that people are choosing uh such a uh misleading uh name i don't think it's malice it's probably just ignorance the people working with this just are not really aware that there's something called ai alignment and they are just choosing the picture the name for all the reasons let's talk about the simulated control tasks um they have a number there is uh the atari grid world instructions following some physics-based simulations transparency procedural and atari simulation and meat environments so it seems to me like quite well as far as i can come i think of course only to use experts uh play throughs uh all from the best reinforcement learners um and only in the cases where the agent is successful and some like much so much revenge this is a non-trivial constraint what does that do my intuition is that on the average case it probably makes it better but in the worst case because we haven't seen anything like that the worst case could be worse but that's kind of an intuition about the um the consequences of this choice okay let's go from simulation to reality and staying uh red green and blue blocks and there are two overall tasks skill mastery and skill generalization skill mastery is like how well can you stack these blocks on top of each other and skill generalization is if you have only stacked green on blue and suddenly you want to stack blue and green can you figure that out and they're both doing this in simulation 
They trained it for one million steps; I couldn't find the stopping criterion being used. There are some more details about the training that are of course interesting if you want to reproduce this, but they might not be so interesting for us. Likewise there is a longer description of how the deployment works in the Atari games; I don't think I'll go through that either. I won't go through the datasets in detail, but one thing you'll notice is that there are vision-and-language tasks and there are control tasks, and the control tasks are in fact 85 percent of the data. There seems to be a decent spread over different tasks, and the vision-and-language part is mostly MassiveText. I noticed a dataset called the ALIGN dataset and thought, hey, they're actually doing alignment work, wonderful, what is this ALIGN dataset? It turns out to be something to do with image recognition, unfortunately, and it has nothing to do with alignment. So it is a bit sad that people choose such a misleading name. I don't think it's malice; it's probably just ignorance, in that the people working on this are simply not aware that there is a field called AI alignment and chose the name for other reasons.

Let's talk about the simulated control tasks. They have a number of them: Atari, grid worlds, instruction following, some physics-based simulations, procedurally generated environments, and Meta-World environments. So it seems like quite a decent spread, as far as I can tell. One choice they made is to use only expert playthroughs, all from the best reinforcement learners, and in some games, like Montezuma's Revenge, only the episodes where the agent is successful. This is a non-trivial constraint. What does it do? My intuition is that in the average case it probably makes the model better, but the worst case could get worse, because the model has never seen anything like a failure. That is just my intuition about the consequences of this choice.

Okay, let's go from simulation to reality: stacking red, green and blue blocks. There are two overall tasks, skill mastery and skill generalization. Skill mastery is how well you can stack these blocks on top of each other, and skill generalization is whether, if you have only ever stacked green on blue and suddenly want to stack blue on green, you can figure that out. They do both of these in simulation and in fact in reality, so they do have actual robots running this. The episodes run at 20 hertz, which turns out to require an end-to-end time of 50 milliseconds for querying the model. That is a very substantial constraint, and it is of course also why the models need to be so small. As far as I can tell this is a really hard constraint, and we should probably be quite impressed that they are able to make a full turnaround in 50 milliseconds. Of course, this very tough constraint also gives a misleading picture of what the technology can do, because you could imagine just relaxing it, so that each query takes far longer to answer, and computers are also getting faster. So we shouldn't put too much weight on this constraint.

They have some examples of how Gato compares to the state of the art, and depending on how you squint it is probably beyond the state of the art in general. That is of course something that people who write ML papers care a lot about; from an AI safety point of view it is not so important whether it is beyond the state of the art, what matters more is the kind of generality this demonstrates, whether that is useful or not. In total, on the simulated control tasks, you can see a graph: there are about 600 tasks, and you can see how many of them the model performs as well as the expert, the state of the art, on, and how many reach, say, 90 percent or 75 percent of that. The way they formulate it is that at the 50 percent threshold, about 75 percent of the tasks can be solved. I actually think it is more interesting to look further up: if you go to 75 percent, the number is still substantial, and even if you require 90 percent of the state of the art, it is still around 50 percent of the tasks. So I think that part of the graph is more interesting than the one they highlight. Another thing that is really striking from this graph is how even the performance curves are: as you increase the requirement, the number of tasks falls off very gradually. There is no cutoff point or anything like that; it looks like a set of very smooth curves, and of course what we have seen with scaling in general is a lot of smooth graphs, and that is also what we are seeing here.

Finally, what I would really like to know for some of these tasks is what the human level is, because for some of them it can be hard to get an intuitive feel. They just compare with the state of the art in reinforcement learning, and I would like to know whether these expert reinforcement learners are at the human level, far above it, or far below it. If the human level is, say, 70 percent of the best reinforcement learners, then this becomes far more impressive; if humans are generally far above the best reinforcement learners, so that all of this is at a subhuman level, then it becomes much less impressive. If you are a researcher in this specific field, you have some intuitive idea of where the state of the art sits relative to the human level, but people like me have no way of knowing. I think these expert reinforcement learners are in general above the human level, but I am not sure, and I would really like to know.
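To make those two points concrete, the threshold profile and the question of where the human baseline sits, here is a small illustrative sketch of my own; the per-task scores and the 70 percent human baseline are made-up assumptions, not numbers from the paper:

```python
# Illustrative only: fake per-task scores normalized so that the expert
# (state-of-the-art RL) score is 1.0, plus a hypothetical human baseline.
import numpy as np

rng = np.random.default_rng(0)
n_tasks = 600
agent = rng.uniform(0.2, 1.1, size=n_tasks)   # made-up agent scores, expert = 1.0

# The "performance profile" idea: fraction of tasks reaching at least a
# given fraction of the expert score.
for threshold in (0.5, 0.75, 0.9):
    print(f"tasks at >= {threshold:.0%} of expert: {np.mean(agent >= threshold):.0%}")

# The missing piece: where humans sit. If the human level were only 70%
# of the expert level, the same scores would look far more impressive...
human_level = 0.7
print(f"tasks at or above the assumed human level: {np.mean(agent >= human_level):.0%}")
# ...whereas if humans were far above the expert RL agents, they would not.
```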
There are some text samples shown. It is trained on MassiveText and on C4, the Colossal Clean Crawled Corpus built from Common Crawl, so that is how the language side is trained, along with some other language datasets, and they show a few examples of the dialogue, calling it rudimentary. I would like to see some more thought put into really comparing this: how much better is it than GPT-2? That is a question they are not trying to answer at all, even though it is obviously interesting. It is a smaller model, and it is mostly trained on things other than language, and yet I think it is in fact better than GPT-2. That is of course an interesting thing: they train it on less data, it is a smaller model, and it is mostly trained on other modalities, so why does it perform better than GPT-2? I think the answer is in all the small details, like the mapping of words to integers and all the small performance optimizations that are continuously being found for the transformer architecture and the way it is implemented. There are so many small improvements that with a smaller model and less training data and less of everything, you still get better performance. But of course, from the examples they show it is clear that it is not state of the art. I can speculate about why it is not perfect and by how much, but it is not really obvious.

Finally, I am going to talk about survivorship bias. Some of you might have seen this picture before. It is from the Second World War, when people were counting the bullet holes on the planes that returned home, putting a red dot on each one. They could see there were certain places the bullets were hitting, so they thought perhaps those areas should be armored. Then they realized that, no, the reason they saw this pattern was that these were the planes that did return home, so they should armor the other places instead, because apparently when an aircraft was hit in the cockpit, it did not return home.
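Here is a minimal simulation of that selection effect (my own illustration, not from the talk): if observers only ever get to inspect the runs that survived, the catastrophe rate they observe is zero regardless of the true rate.

```python
# Toy selection-effect simulation: the true per-deployment risk is 10%,
# but a reviewer who only sees the surviving (published) runs records 0%.
import random

random.seed(0)
true_risk = 0.10
deployments = 1000

catastrophes = [random.random() < true_risk for _ in range(deployments)]
observed = [c for c in catastrophes if not c]   # only survivors reach the reviewer

print(sum(catastrophes) / deployments)          # close to 0.10 (true rate)
print(sum(observed) / len(observed))            # 0.0 (rate among observed runs)
```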
So let me try to use that as an analogy for the safety work around this paper. Rohin Shah from the DeepMind safety team commented on this, and he was asked whether he had reviewed it and what he thought. He did not review it, and he believes no one on the DeepMind safety team reviewed it; they would have been happy to chat if the authors had reached out, but they didn't. And when Rohin reads the paper afterwards, he looks at it and says this does not seem like something that could destroy the world. I think you can imagine a kind of self-reinforcing bad cycle at DeepMind: an AGI gets built by people who obviously do not care in the least about safety, they start the AGI, the AGI does not destroy the world, they write a paper based on this AGI that turned out not to destroy the world, and they show it to the safety team afterwards. The safety team, afterwards, says obviously this can't destroy the world, and they are right, because they are only seeing it after it has been published and has not destroyed the world. So they update on this, saying okay, we have seen this and the world did not get destroyed; they see more and more examples of systems that did not destroy the world, and they become more and more confident. The problem, of course, is that in the case where an AGI is built without caring about safety, it would never reach the safety team at all. So I think there is a fundamental problem at DeepMind if the safety review happens after deployment; the safety team should be involved before deployment. That's all I have today. There will be much more about the paper and about safety next time.
2c9f2872-cbbc-4774-a92d-014a5ad1a02a
StampyAI/alignment-research-dataset/blogs
Blogs
exact minds in an exact world exact minds in an exact world ----------------------------- [in the sequences](https://www.readthesequences.com/Zero-And-One-Are-Not-Probabilities) it is argued that 0 and 1 are not probabilities; that these "certainty ratios" aren't meaningful. but, i can think of a situation that challenges this. imagine a fully deterministic world — for example, running on [a cellular automaton](https://en.wikipedia.org/wiki/Cellular_automata) — and imagine that in this world there are some intelligences (either artificial or natural) that utilize this determinism to make flawless logical deductions (for example, [automated theorem proving](https://en.wikipedia.org/wiki/Automated_theorem_proving) algorithms running on computers that cannot ever have undetected [hardware failures](https://en.wikipedia.org/wiki/Soft_error)). for example, if they think about mathematics, then under the axioms under which they work, 2 + 2 will always equal 4, and doing any mathematical computation will either result in them knowing they don't have the computational resources to do the operation, or in a result that is guaranteed to be true with the same certainty as that the cellular automaton's rules will be applied next tick. now, these beings still have a use for probability and statistics: those can be used to talk about parts of the world that they don't have complete information about. but there will be some contexts, whether purely in their minds (such as logic or math) or sometimes in the real world (they could make assessments like "this box cannot contain any [spaceship](https://en.wikipedia.org/wiki/Spaceship_%28cellular_automaton%29) of a certain size"), that *will* be, functionally, certain. it could be argued that they *should* still be weighing everything by the probability that there might be unknown unknowns; for example, their cellular automaton might have rules that apply only very rarely, and that they never got a chance to observe yet but might observe later. but, let's say that they *assume* the rules of their world are exactly as they think, and let's say that they happen to be correct in that assessment. does that not make some of their deductions actually entirely certain?
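as a small illustration of the standard argument (mine, not the post's): under bayes' rule a credence of exactly 1 or 0 can never be moved by any evidence, which is why the sequences treat them as something other than ordinary probabilities; the question above is whether the exact agents' deductions are a legitimate exception.

```python
# Minimal sketch: Bayes' rule leaves a prior of exactly 0 or 1 untouched,
# no matter how strong the evidence is.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(H | E) from P(H), P(E | H) and P(E | not-H)."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1.0 - prior) * p_evidence_if_false
    return numerator / denominator if denominator > 0 else prior

print(bayes_update(0.5, 0.99, 0.01))  # ~0.99: an ordinary prior moves
print(bayes_update(1.0, 0.01, 0.99))  # 1.0: certainty cannot be revised
print(bayes_update(0.0, 0.99, 0.01))  # 0.0: neither can certainty of falsehood
```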
445cfa3d-1e63-4c3e-80ea-cf4f6d82335b
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Announcing the Winners of the 2023 Open Philanthropy AI Worldviews Contest **Introduction** ---------------- In March 2023, we launched the [Open Philanthropy AI Worldviews Contest](https://forum.effectivealtruism.org/out?url=https%3A%2F%2Fwww.openphilanthropy.org%2Fopen-philanthropy-ai-worldviews-contest%2F). The goal of the contest was to surface novel considerations that could affect our views on the timeline to transformative AI and the level of catastrophic risk that transformative AI systems could pose. We received 135 submissions. Today we are excited to share the winners of the contest. But first: We continue to be interested in challenges to the worldview that informs our AI-related grantmaking. To that end, we are awarding a **separate $75,000 prize** **to the**[**Forecasting Research Institute**](https://forecastingresearch.org/) **(FRI) for their recently published writeup of the 2022**[**Existential Risk Persuasion Tournament**](https://forecastingresearch.org/xpt) **(XPT)**.[[1]](#fn12iekl1btb7) This award falls outside the confines of the AI Worldviews Contest, but the recognition is motivated by the same principles that motivated the contest. We believe that the results from the XPT constitute the best recent challenge to our AI worldview. **FRI Prize ($75k)** -------------------- [Existential Risk Persuasion Tournament](https://forecastingresearch.org/xpt) by the Forecasting Research Institute **AI Worldviews Contest Winners** --------------------------------- ### **First Prizes ($50k)** * [AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years](https://www.openphilanthropy.org/wp-content/uploads/Chow-Halperin-and-Mazlish-2023-Basil-Halperin.pdf) by Basil Halperin, Zachary Mazlish, and Trevor Chow * [Evolution provides no evidence for the sharp left turn](https://www.openphilanthropy.org/wp-content/uploads/Evolution-provides-no-evidence-for-the-sharp-left-turn-LessWrong-Quintin-Pope.pdf) by Quintin Pope (see the [LessWrong version](https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn) to view comments) ### **Second Prizes ($37.5k)** * [Deceptive Alignment is <1% Likely by Default](https://www.openphilanthropy.org/wp-content/uploads/Deceptive-Alignment-is-_1-Likely-by-Default-David-Wheaton.pdf) by David Wheaton (see the [LessWrong version](https://www.lesswrong.com/s/pvoxjtCbkcweBLn7j) to view comments) * [AGI Catastrophe and Takeover: Some Reference Class-Based Priors](https://www.openphilanthropy.org/wp-content/uploads/2023.05.22-AI-Reference-Classes-Zachary-Freitas-Groff.pdf) by Zach Freitas-Groff ### **Third Prizes ($25k)** * [Imitation Learning is Probably Existentially Safe](https://www.openphilanthropy.org/wp-content/uploads/MCohen_AIWorldviewsContext-Original-do-not-cite-1.pdf) by Michael Cohen[[2]](#fn37c1dkta9zf) * [‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting](https://www.openphilanthropy.org/wp-content/uploads/Dissolving-AI-Risk-v4-Alex-Bates.docx.pdf) by Alex Bates **Caveats on the Winning Entries** ---------------------------------- The judges do not endorse every argument and conclusion in the winning entries. Most of the winning entries argue for multiple claims, and in many instances the judges found some of the arguments much more compelling than others. In some cases, the judges liked that an entry crisply argued for a conclusion the judges did not agree with—the clear articulation of an argument makes it easier for others to engage. 
One does not need to find a piece wholly persuasive to believe that it usefully contributes to the collective debate about AI timelines or the threat that advanced AI systems might pose. Submissions were many and varied. We can easily imagine a different panel of judges reasonably selecting a different set of winners. There are many different types of research that are valuable, and the winning entries should not be interpreted to represent Open Philanthropy’s settled institutional tastes on what research directions are most promising (i.e., we don’t want other researchers to overanchor on these pieces as the best topics to explore further). 1. **[^](#fnref12iekl1btb7)**We did not provide any funding specifically for the XPT, which ran from June 2022 through October 2022. In December 2022, we recommended [two grants totaling $6.3M over three years](https://www.openphilanthropy.org/grants/forecasting-research-institute-science-of-forecasting/) to support FRI’s future research. 2. **[^](#fnref37c1dkta9zf)**The link above goes to the version Michael submitted; he’s also written an [updated version](https://www.openphilanthropy.org/wp-content/uploads/Imitation_Learning_Safe_ready.pdf) with coauthor Marcus Hutter.
30907d60-39dd-43e8-bdfa-20b32b3c4fa4
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Panel discussion on AI consciousness with Rob Long and Jeff Sebo Intro ===== [Recent 80k guest](https://80000hours.org/podcast/episodes/robert-long-artificial-sentience/) and philosopher specializing in AI consciousness Rob Long ([@rgb](https://forum.effectivealtruism.org/users/rgb?mention=user)) recently participated in a panel discussion on his paper "[**Consciousness in Artificial Intelligence: Insights from the Science of Consciousness**](https://arxiv.org/abs/2308.08708)" ([pdf](https://arxiv.org/pdf/2308.08708.pdf)) with co-authors Patrick Butlin, Yoshua Bengio, and Grace Lindsay and moderator Jeff Sebo ([@jeffsebo](https://forum.effectivealtruism.org/users/jeffsebo?mention=user)).  You can watch it on Youtube (below), [watch/listen as a podcast on Spotify](https://t.co/bzqo5GTKIw) [[1]](#fnf1i8seo95u9), or read the transcript[[2]](#fns3300mxs5q) below. Paper abstract -------------- > Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness. We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. From these theories we derive "indicator properties" of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties. We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them. Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators. > > Youtube description ------------------- This event took place on Tuesday September 5, 2023 and was hosted by the NYU Mind, Ethics, and Policy Program.  ### About the event This panel discussion featured four authors from the recently released and widely discussed AI consciousness report. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of the best-supported neuroscientific theories of consciousness. The paper surveys several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. From these theories the authors derive "indicator properties" of consciousness, elucidated in computational terms that allow them to assess AI systems for these properties. They use these indicator properties to assess several recent AI systems, and discuss how future systems might implement them. In this event, the authors summarized the report, offered perspectives from philosophy, cognitive science, and computer science, and responded to questions and comments.  ### About the panelists **Patrick Butlin** is a philosopher of mind and cognitive science and a Research Fellow at the Future of Humanity Institute at the University of Oxford. His current research is on consciousness, agency and other mental capacities and attributes in AI.  **Robert Long** is a Research Affiliate at the Center for AI Safety. 
He recently completed his PhD in philosophy at New York University, during which he also worked as a Research Fellow at the Future of Humanity Institute. He works on issues related to possible AI consciousness and sentience. **Yoshua Bengio** is recognized worldwide as one of the leading experts in artificial intelligence, known for his conceptual and engineering breakthroughs in artificial neural networks and deep learning. He is a Full Professor in the Department of Computer Science and Operations Research at Université de Montréal and the Founder and Scientific Director of Mila – Quebec AI Institute, one of the world’s largest academic institutes in deep learning. He is also the Scientific Director of IVADO. His scientific contributions have earned him numerous awards. He is the 2018 laureate of the A.M. Turing Award, “the Nobel Prize of Computing,” alongside Geoffrey Hinton and Yann LeCun for their important contributions and advances in deep learning. In 2022, he was appointed Knight of the Legion of Honor by France and named co-laureate of Spain’s Princess of Asturias Award for technical and scientific research. Later that year, Professor Bengio became the most cited computer scientist in the world in terms of h-index. He is a Fellow of the Royal Society of London and of Canada, an Officer of the Order of Canada and a Canadian CIFAR AI Chair. **Grace Lindsay** is currently an Assistant Professor of Psychology and Data Science at New York University. After a BS in neuroscience from the University of Pittsburgh and a year at the Bernstein Center for Computational Neuroscience in Freiburg, Germany, Grace got her PhD at the Center for Theoretical Neuroscience at Columbia University in the lab of Ken Miller. Following that, she was a Sainsbury Wellcome Centre/Gatsby Computational Neuroscience Unit Research Fellow at University College London. ### About the host The NYU Mind, Ethics, and Policy Program is dedicated to advancing understanding of the consciousness, sentience, sapience, and moral, legal, and political status of nonhumans, including animals and artificial intelligences. We believe that all beings with the capacity to suffer deserve to be treated with respect and compassion, and that our policies and practices should reflect this. Our goal is to promote research and scholarship that will help to shape a more just and humane world for all. \* \* As it happens, this program description was co-authored by GPT-3, a large language model! Thank you to our co-sponsors for your generous support of this event: * NYU Center for Mind, Brain, and Consciousness * NYU Center for Bioethics Transcript ========== JEFF Hi everybody. My name is Jeff Sebo. Welcome to this event. Thank you for being here during what I know is a very busy week and day for everybody. This is the first day of class at NYU, and the fact that we have an event today with this many people interested, I think is a testament to how interesting the topic and the report that will be the basis for our conversation are. So this is an event hosted by the NYU Mind Ethics and Policy Program, and we are also grateful to our co-sponsors, the NYU Center for Bioethics and the NYU Center for Mind, Brain and Consciousness. Thank you for co-sponsoring this event. The NYU Mind Ethics and Policy Program examines the nature and intrinsic value of non-human minds, including biological and nonbiological minds, with special focus on invertebrates and artificial intelligence systems. 
We are interested in examining what kinds of beings can be conscious and sentient and sapient, and what kinds of moral and legal and political status they should have. And so we were very interested when this team released this report about a week ago, a report on what leading scientific theories of consciousness have to say about possible AI consciousness. And so we thought it would be useful, especially since the report, as soon as it was released, was generating a lot of discussion. It received coverage in Nature and Science and a lot of conversation on the Internet. We thought it would be valuable to bring some of the authors of the report together. There were 19 authors from different fields and different institutes, and we thought we could bring four of them together, including the two lead authors and representatives of other fields as well, to talk about the report, summarize some of the main points, and then offer their own individual perspectives about some of the implications. So I will now briefly introduce the speakers, and then we can hear some remarks from them, and then we can open up the discussion and hear questions and comments from you and have a conversation. So here are the speakers. Patrick Butlin is a philosopher of mind and cognitive science and a Research Fellow at the Future of Humanity Institute at the University of Oxford. His current research is on consciousness, agency and other mental capacities and attributes in AI. Robert Long is a research affiliate at the center for AI Safety. He recently completed his PhD in philosophy at New York University, during which he also worked as a research fellow at the Future of Humanity Institute. He works on issues related to possible AI consciousness and sentience. Yoshua Bengio is recognized worldwide as one of the leading experts in artificial intelligence, known for his conceptual and engineering breakthroughs in artificial neural networks and deep learning. He is a full professor in the Department of Computer Science and Operations Research at the University de Montreal, and the founder and Scientific Director of Mila Quebec AI Institute one of the world's largest academic institutes in deep learning. He is also the winner of many awards and has many other accomplishments that you can easily find online. And finally, Grace Lindsay is currently an assistant professor of Psychology and Data Science at New York University. After a BS in neuroscience from the University of Pittsburgh and a year at the Bernstein Center for Computational Neuroscience in Freiburg, Germany, Grace got her PhD at the center for Theoretical Neuroscience at Columbia University in the lab of Ken Miller. Following that, she was a Sainsbury Wellcome Center Gatsby Computational Neuroscience Unit Research Fellow at University College London. So thank you all so much for, first of all, writing this very interesting report, and second of all, being here to talk with us about it. I know a lot of people here are interested in hearing from you and discussing this with you. Just a note to everybody that the format and structure of this event will be that we will hear brief remarks from each of the four authors and speakers. Patrick and Rob will speak together about a summary of the report, followed by a philosophical perspective. And then we can hear cognitive science and computer science perspectives from Grayson and Yoshua, respectively. 
And all along the way, people in the audience can submit questions and comments, objections, whatever you like in the Q/A tab on zoom. And then when we reach the discussion portion of this event, I can read questions, comments, objections that have been entered and upvoted from the Q/A tab. So please go ahead and open that up and add your questions and comments whenever you like throughout the session. Okay, so with that in mind, without further ado, we can start the show. So, Patrick and Rob, whenever you are ready, feel free to share your screen and tell us about your report. ROBERT All right, so here we go, Jeff. And if you can, let me know that everything is in working order. JEFF Everything is in working order. Yeah, go for it. ROBERT Excellent. So let's begin. PATRICK Yeah. ROBERT First, I just want to say thank you so much to the audience for turning out. We're extremely excited to discuss these issues with you. A big part of why we wrote this report is to discuss these topics more widely and hear from different people. And thanks so much to you, Jeff, and to the NYU Mind Ethics and Policy Program for hosting this. So, my name is Robert Long, and along with Patrick Butlin, I'm one of the lead authors on this report. As Jeff mentioned, there are also 17 esteemed colleagues from various disciplines, and I'm going to briefly say what we did in the report and why we did it and why it matters. And, I mean, Patrick will be covering some of that. So you've all probably read about the case of Blake Lemoyne, a Google engineer who was fired from Google after he became convinced that an AI system he was interacting with was conscious. You've probably seen the increased deployment of chatbots, including romantic chatbots, that interact with people in very human-like, compelling ways. And you may have seen a tweet from OpenAI's chief scientist Ilya Sutzkiver saying that it may be that today's large neural networks are slightly conscious. So AI consciousness is obviously a topic of increasing public concern and public interest, and it's also a perennially fascinating scientific question. Speaking for myself, as AI consciousness has become a bigger topic recently, I've often been frustrated by some of the characteristics that conversation about it has. I think when people talk about AI consciousness, there's often a lack of specific scientific evidence. There's very little detailed analysis of existing systems. The conversation is often kind of emotive and highly charged and often in dichotomies. Like how could AI systems possibly be conscious? We know that that's absurd. Or on the other hand, people being certain that they definitely are conscious. And what we wanted to provide in this report is kind of a better footing for these kinds of conversations. We want to bring a more nuanced and a more evidence based approach to the report. So in addition to the actual findings that we have, we really want to spur a conversation and spur further research along these lines. So one aspect I'm going to talk about briefly is also conceptual clarity, something that philosophers love about what exactly we're talking about when we talk about AI consciousness. And before I do that again, just an advertisement for the paper. We put it up on Archive about two weeks ago, I think, and you can find it there. And there is a list of this really wonderful team that helped us out. We have people from neuroscience, AI and philosophy. So there it is. So what are we talking about in this report? 
It's called consciousness and AI. Insights from the science of consciousness. It's very important both in this talk and we do this in the paper to pin down what we're talking about. So what I'll be talking about and what we discuss in the paper is what philosophers call phenomenal consciousness. And this means subjective experience. Thomas Nagel famously said that consciousness is what it is like to be an entity. So, for example, if you've tuned in, you're now currently listening to my voice, you're seeing my face and this wall on the screen, and there's a subjective aspect to your experience. There's something that it's like for you to be seeing a white wall or hearing my voice. And what we're asking about in this report is whether AI systems now or in the near future could have subjective experiences in this way, if they could be conscious in this sense. And it's very important in conversations about this to be clear about what we're not talking about. We're not talking about whether AI systems are intelligent in a certain way. We're certainly not talking about AGI or artificial general intelligence. We're not talking about language, understanding or rationality. And also we're not talking about whether AI systems experience the world in exactly the way that humans do. So Thomas Nagel, when he introduced this phrase, famously asked what it's like to be a bat? It's very plausible that bats might be conscious, that they might have subjective experiences of the world. But if they are, the way that they experience the world is presumably very different than the way we experience the world. And that also comes along with the fact that the way they think about the world is presumably very different and they don't have the full suite of human cognitive capacities. So when we're talking about AI, we're wondering about consciousness in this sense and not necessarily any further assumptions about human like intelligence or experiences. So how do we go about this in this report? Here's what I think are kind of the main foundations of what we do. First of all, we want the conversation to be empirical. So that means that we're looking at scientific theories of consciousness. So there's a broad field of consciousness science where scientists examine the human brain and animal brains and ask what sort of processes are associated with having conscious experiences. And this is somewhat distinct from philosophical questions about consciousness. We're going to look at these empirical theories and see what they can tell us about potential AI consciousness. Another key aspect is that we take this working assumption of this very broad thesis about the nature of consciousness, which we call computational functionalism. And very broadly, this is saying that what matters for consciousness is the computations that a system performs and not necessarily what it's made of. So this is compatible with consciousness coming from computations done by biological neurons or computations happening in silicon, transistors and chips. Patrick's going to talk a little bit more about exactly why we made this assumption and what it amounts to. And then we try to get precise. We try to derive indicator properties from the science of consciousness that we can use to examine different AI systems, and then we apply these to a broad class of AI systems from several different kinds of research programs, several different kinds of systems. And lastly, we're not aiming for absolute certainty. 
It's my opinion, and I think most of my co-authors, if you're asking for absolute certainty about AI consciousness, you are going to be disappointed because there still is a lot we don't know about consciousness. That's in spite of the fact that we do know some things, which is what we try to show in this report. So it's a long report. So I would like to just talk about one specific example of this methodology and how it works in the paper. So one of the most prominent scientific theories of consciousness is called the global Workspace theory. And at a broad level, the global workspace theory says that consciousness is about global broadcast throughout a system. So it has this picture of the mind where there are these specialized independent modules that are responsible for different kinds of information processing. So for example, as you can see in that diagram, one such module might be Perceptual processing. So Vision is responsible for visual information processing. Or maybe there's a module dedicated to making decisions and choosing actions. The global workspace theory says that there's also a global workspace that will select the output from one of these usually independent modules and then broadcast that to all of the modules. This is a way of coordinating all of the processing that they are each doing individually. So consciousness is related to global broadcasts of information throughout a system, and a system having a certain architectural or information processing property that's related to the computational functionalist assumption that we have. And importantly, as we'll see, there needs to be a certain kind of feedback loop between modules and a global workspace. The global workspace taking input from the modules and then also broadcasting it back to them. So from that we get some indicator properties to say what kind of features does an AI system need to have if it's going to satisfy, at least according to global workspace theory, the indicators that might be associated with consciousness. So one of the top questions you might have is, well, what do we say about large language models? So briefly, large language models, you're probably all familiar with them, these are some of the most prominent systems today. The GPT systems are an example. These are often built with what's called the transformer architecture. And now is certainly not a time for a full explanation of what that amounts to. But one key aspect for our analysis is that the transformer architecture is feed forward and that is to say when the system is taking in an input, say, a certain amount of text and it's going to produce an output, which is the text that it will continue to produce. For example, when you're chatting with it as it's processing that information, the information just goes from the input layer to the output layer kind of continuously. And when we look at this architecture and ask whether the way that information is flowing through, it kind of fits with the picture from global Workspace theory. Our provisional analysis in the paper is that as far as we can tell, there's nothing like a global workspace that is both receiving and broadcasting information to and from anything like individual modules. To put it another way, there don't seem to be the right kind of feedback loops in the transformer architecture for it to be satisfying these particular indicators of consciousness. So our answer in the paper is probably not according to global workspace theory, they do not satisfy these indicators. 
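To make the architectural contrast Rob is describing concrete, here is a toy sketch (an illustration of the idea only, not code from the report): a plain feed-forward pass versus a loop in which a shared workspace selects one module's output and broadcasts it back to every module on the next step.

```python
# Toy contrast between a feed-forward pass and a global-workspace-style loop.
# This illustrates only the structural difference being described above.

def feed_forward(x, layers):
    # Information flows once from input to output; no layer ever receives
    # feedback from a shared workspace.
    for layer in layers:
        x = layer(x)
    return x

def workspace_loop(inputs, modules, select, steps=3):
    # Each module computes from its own input plus the last broadcast;
    # the workspace then selects one output and broadcasts it back to
    # all modules on the next step (the feedback loop at issue above).
    workspace = None
    for _ in range(steps):
        outputs = [m(inp, workspace) for m, inp in zip(modules, inputs)]
        workspace = select(outputs)
    return workspace

# Tiny usage with stand-in "modules":
modules = [lambda x, w: x + (w or 0), lambda x, w: 2 * x + (w or 0)]
print(feed_forward(1, [lambda x: x + 1, lambda x: 3 * x]))   # 6
print(workspace_loop([1, 2], modules, select=max))           # 12
```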
So that's the kind of thing we do in the paper. Take a scientific theory, ask more precisely what kind of architectural and informational processing features it says are associated with consciousness. Look at how an AI system works and see how much it matches that. As I said, we do this with a lot of theories. We have, like, five very prominent scientific theories of consciousness that we look at. One thing that's worth remarking on is, why do we look at five different theories? Why not just, for example, go with global work theory? And that's kind of related to our points about nuance and uncertainty. We do think that the science of consciousness is making progress on discovering the processes associated with consciousness, but we think it's premature to commit to one particular theory. There's nothing like a consensus in consciousness science about which one of these stories is the right one for consciousness. So we kind of want to cast a broad net and say, here are a variety of things that might be associated with consciousness according to different theories, and let's look at all of them. Jeff, I saw that my Internet said it was unstable. Did I skip at all? Are we good? JEFF Briefly. But I heard everything you were saying. PATRICK Cool. ROBERT And yeah, as mentioned, one thing that we think it's important to do is look beyond just large language models. They have understandably received a lot of the attention. They're very compelling. Their capabilities are extremely impressive and most saliently. They sometimes say that they're conscious, so it's understandable that they're one of the main systems that people wonder about. But at least speaking for myself, I think when you look at the way that they're built, they're not necessarily even the best current candidates for consciousness. Among the AI systems that exist today. For example, many people point out that large language models don't seem like they're agents, or they don't seem embodied. They don't seem to interact continuously with an environment which you might think is important for consciousness. But many systems are plausibly much closer to doing that sort of thing, which is why we look at systems that are from robotics or that navigate virtual environments or that work in different ways than large language models do. And I will now turn things over to my colleague Patrick. PATRICK Thanks, Rob. Hi, everyone. So Rob's talked a bit about what we did in this project. I'm going to talk briefly about why we did things in the way that we did, and then I'm also going to comment very briefly on what's next for research in this area. So next slide, please, Rob. Okay, so on the topic of why we did things the way we did, the first question I want to say something about is why did we make the assumption of computational functionalism? And this is an important question, because computational functionalism is quite a contentious hypothesis. So just to remind you about what this claim is, computational functionalism, as Rob said, is the claim that what matters for consciousness is computations. So according to this view, the reason why humans are conscious is that the right kinds of computations are going on in our brains. And what it would take for an AI system to be conscious is for it to perform the right kind of computations. Now, this is, as I just said, quite a controversial hypothesis. Many researchers in this area doubt it. 
For example, Anil Seth suggests that consciousness may require living cells, and there's many other researchers who take similar views to that position. So why do we make this assumption? Well, there's a few factors that play into this. One is that we'd think that computational functionalism is a plausible view. That doesn't mean to say that we're confident that it's true. I think there's probably a range of views about computational functionalism among the authors, but as a group, we're certainly not confident about this. It may even be too strong to say that we think it's more likely to be true than not true. But crucially, we think it's plausible enough that it's worth exploring its implications. Another consideration is that many scientific theories of consciousness are expressed in computational terms. And also, if computational functionalism is true, then there's an important further question about which AI systems perform the kinds of computations which are associated with consciousness. So ultimately, the reason why we make this assumption is that we think it's a productive one in the sense that by working starting from this assumption, we can reduce our overall uncertainty about consciousness in AI to a significant degree. So we're uncertain about whether computational functionalism is true. We're uncertain about which specific scientific theories of consciousness are true. We don't know which AI systems are conscious. But if we kind of explore this area of the space of possibilities, we think we can make progress, so we think we can reduce our overall uncertainty. Next slide. Another question about why we did what we did is why did we focus on the internal processes that AI systems go through rather than on their behavior or on their capabilities? Why do we think that thinking about internal processes is a better way of determining whether they're likely to be conscious? Well, this is also an interesting question. And it's interesting because there's this kind of important challenge to the approach that we took, which is that the science of consciousness is relatively immature. There are still lots of competing theories in this science. And also, crucially, these theories are based mostly, although not entirely, on data from humans. So one might very reasonably have doubts about whether these descriptions of what's going on in humans when humans have conscious experiences can be extended to provide a guide to consciousness in non humans. And in particular, that kind of argument has been made in debates about non-human animal consciousness. Our colleague Jonathan Birch, for example, who's a leading researcher in that area, argues that in the case of non-human animals, when you're trying to work out which are conscious, it's more productive with science in its current state. To concentrate on examining the capabilities of non-human animals rather than on trying to work out whether the internal processes in their brains are similar or not to the internal processes in human brains. But in fact, we think that different methods make sense in these two cases, that is, the case of non-human animals versus the case of artificial intelligence. So, next slide, please, Rob. And the key difference which makes it the case that different methods are appropriate in these two cases is that we know that on the whole, animal brains work in relatively similar ways to human brains. And we can know that because of our shared evolutionary heritage even prior to doing detailed animal neuroscience. 
On the other hand, there's a relatively wide space of possible designs for AI systems. There are lots of different possible architectures and methods that you can use in AI in principle. So this kind of space of possible internal processes in AI is much wider than the space of processes which we're likely to find in animal brains. Now, there are a couple of different consequences of this. One is that in the case of AI, it's relatively informative to find internal processes going on in a system which are similar to the processes that seem to be associated with consciousness in humans. That's just because AIs could relatively easily be different from humans. In the case of animals, finding similar processes is not so informative. But then on the flip side, finding similar capabilities and similar behaviors to those exhibited by humans in AI is less informative than finding similar capabilities in the case of animals. And that's because it seems that in AI it's possible for very different underlying processes to give rise to superficially similar behaviors and capabilities. So there's a specific version of this problem which clearly arises in AI that exists today, which is that AI systems are sometimes designed specifically to imitate human behavior. So we know that there are AI systems which have superficially, but also in a very compelling way, human-like capabilities and human-like behaviors, but which also work in quite unhuman-like ways. For that reason, looking at capabilities looks like, on the whole, an unreliable method, unless it can be very substantially refined in the case of AI. Next slide. Okay, now turning to the question of what's next for AI consciousness research. Well, I'm sure there are lots of exciting directions which could be pursued at this point. But one which I'm interested in, and I think Rob's interested in as well, is understanding what's called valenced conscious experience. So of course, as we're all familiar with, some conscious experiences feel good, like feeling a cool breeze on a hot day. Other conscious experiences feel bad, like feeling pain or fear. These are familiar conscious experiences. What we say is that the ones that feel good have positive valence and the ones that feel bad have negative valence. But it seems as though, in principle, there can also be neutral conscious experiences which don't have a valence one way or the other. So the question is, could we find indicators for specifically valenced consciousness in AI? That seems to be a question which goes beyond the general one that we were asking in this report. And we think that this question is important because it seems that pleasure and suffering have special moral significance. Next slide. Now, that brings us to the topic which is one of the major motives for our report and which we're certainly going to be talking about further today, which is the point that a big part of the reason why thinking about AI consciousness matters, why working out how to assess whether AI systems are likely to be conscious matters, is because our society is soon going to have to decide whether to build systems which could plausibly be conscious. So there's a huge, very difficult philosophical question, which is: what moral principles should we use to make this momentous choice? I certainly don't know the answer to that question. It's possible we'll explore it a little bit in a few minutes.
But we do think that there's a simple step which could be taken now, which is to continue the kinds of work that we've been doing to try to understand what might be good indicators of consciousness in AI. And then in particular, for groups engaged in building AI systems to recognize the possibility that they might build conscious systems, that they might do so even without trying to do just that. And for those groups to develop methods for assessing whether the systems that they're working on are likely to be conscious. And we certainly strongly recommend that they proceed with great caution if they get a positive result when they're doing that kind of assessment. So thanks very much for listening, everyone. Thanks again to our collaborators and I'm looking forward to the discussion. Thanks. JEFF Great. Thank you so much, Patrick and Rob. Now, Grace, do you have any thoughts from your perspective to share? GRACE Yeah, I just wanted to talk a little bit about kind of the state of consciousness research in neuroscience to give context to the theories that are discussed in the report and kind of, I guess to situate ourselves maybe in the history of science here. So I should say I'm not directly a consciousness researcher myself, but I do study attention which interweaves with consciousness studies in various ways, but so I don't have a dog in the fight of these different theories, which is possibly a good position to be in to discuss the overall state of things. But yeah, it is the case that the neuroscientific study of consciousness in terms of being a proper scientific field of research is pretty new. I mean, you could argue neuroscience itself is pretty new in the whole scheme of history of science, but certainly people taking the scientific study of consciousness seriously is definitely new. Because there was a joke that you had to be tenured to study consciousness. And so the idea that now there are actually full labs devoted to this and people are really trying to make it rigorous and thorough and you have these theories and everything that's definitely progress, but it points to the fact that this research is still in its infancy. And so the theories that are laid out here are the major theories that are discussed amongst these researchers. But from my perspective, I don't think there's a sense that these are anywhere close to the final drafts of these theories. And that's important for when you then step through the conclusions. And just the fact that Rob said the report doesn't choose a specific theory to go with because there isn't consensus. There's these multiple theories that in many ways have conflicts with each other. And so it really is still a young area. And also the way that the scientific study is framed in order to be precise and rigorous. It's usually framed more as studying the neural correlates of consciousness. So not even trying to make a strong claim necessarily about causal connection, but just what do you observe the patterns of neural activity to be when a subject reports a conscious experience? And that's another detail. It's really a lot of times the neural correlates of conscious report. What are people saying that they experienced as their conscious experience? There is a detail of kind of these no report paradigms, but they still ultimately rely on the fact that at some point a human was able to say that they experienced something. 
And so those are also caveats that bound the scientific study of it and are necessary to make it a scientifically rigorous thing to study. But obviously that's going to from a philosophical perspective, that's going to have implications as well. And yeah, the other thing that I kind of wanted to talk about in kind of situating this sort of thing so the scientists are going into the brain and it's messy and there's a lot of stuff going on. And the hope is to find the simplified principles that correlate with this conscious report and correlate with people saying something is conscious versus not. And so there the work is to take the big, messy, complex thing and try to come up with the simplest description of what you need in order to get conscious report. When you then actually kind of look at that in isolation, sometimes those indicators as the report kind of turns them into seem really simple. And I think we have to keep in mind that these theories were not developed for the purposes of trying to see if an AI system is conscious. They're developed in the context of assuming that a human is conscious and looking at their neural activity or even a lot of at least the kind of background knowledge for these theories comes from non human animal work and so they're understanding where they're coming from in that sense. The fact that they're not designed to be directly translated into these sort of easy to identify computational principles that could be in artificial systems, I think is important. I think it's important for this work of trying to take a theory and assess an artificial system. But I also think that there's a lesson for the people, the neuroscientists who study consciousness in this as well. Because as this happens a lot, when you do mathematical modeling, you can be working with a topic area and kind of think you have a mental model of how it works. And then when you actually go to write it down, you realize some aspects are lacking, maybe or the pieces don't fit together the way that you thought they did. And it's the turning of the kind of mental model and the pile of experimental data and the word models that people use to describe how they think something functions. When you actually have to turn that into an equation or code or actually try to build it, you can kind of see where you might be missing things. And I think that this is a nice opportunity for the neuroscientists who study consciousness to see their theories in a new light when they're kind of put into these cold, stark indicators and really reflect on if that is summarizing everything that they think is important or that there are things about the brain that are kind of going unspoken that they think are actually really important as well or things about the abilities of humans or animals that are important as well. So, yeah, I think that's important to keep in mind that these theories were not designed to lead to a description that is used for AI. But it's still a very helpful exercise in my mind to go through this and see what they look like in the end, when they're kind of pared down to the simplest form that can be translated into an AI system. So overall, I think the report has benefits to be able to take this neuroscience literature and bring it to an AI audience and put it in those terms, but also should have benefits for the neuroscience side itself in terms of thinking about how these theories really pan out, how they relate to each other, what they could be missing. 
Whether the scientists who created and worked on these theories would agree with the conclusions of the report, or would even agree that an artificial system with these properties was conscious: in the end, I think that's an important thing for those scientists to reflect on. JEFF Great. Thank you so much, Grace. And finally, Yoshua, do you have thoughts to share? YOSHUA Sure. Maybe I'll start by talking about computational functionalism, the computational basis of consciousness and subjective experience. We've been using the word consciousness, but really I think it's important to clarify that the word consciousness can be associated with all sorts of things, and we're trying to focus on subjective experience, which is the part that may seem very mysterious to many people, including researchers. My personal view, so maybe not the unanimous view of the others, is that physics is computation, and many physicists share that view. It's just a bunch of equations that could be implemented in any way you want; at the end of the day, you get the same changes in the state of the world, and your brain is physics too. Now, I think some of the questions about how this could be turned into computations in a computer may have nothing to do with something non-material that could possibly be happening in biology. Maybe there is something about the physics, like it requires quantum computations that maybe we don't know yet how to do. But actually, if we look at the progress of AI in the last few decades, we're moving forward quite rapidly towards very strong capabilities, and we never seem to require any kind of quantum computation in order to get that power, which of course doesn't guarantee that this will continue to be the case. But it suggests that the level of abstraction that, say, the neural networks used in machine learning have is already doing a good job of providing a lot of the explanations and neural correlates of our abilities. So another interesting question that has to do with AI research is: why are we conscious in the first place? And the perspective that would come naturally to machine learning researchers or AI researchers is that evolution arrived at these forms of computation because they give us an advantage, either individually or collectively; there's a social aspect to consciousness as well. And so if there is an advantage, then it's something worth investigating from an AI perspective. It may be something that AI researchers want to put into their AI systems, which is a question Patrick talked about and that I'll come back to: do we want to have conscious machines or not? So one of the things my group has been working on is precisely this. Some of these theories of consciousness, in particular the global workspace theory and attention schema theory and others, really can be interpreted as providing an advantage in terms of our ability to learn and manipulate abstractions. This is connected to the property of thoughts and attention selecting very few bits of information that go through working memory at any moment. And then we sequentially go through a very small number of bits in this way, which helps us take decisions and organize our understanding of the world at a very abstract level, one that compresses perceptual information in a way that helps us better understand the world, take better decisions, better model it and so on. Now let me go back to this question of whether AIs are conscious or will be in the future.
So our report suggests that none of the current AI systems have enough of the characteristics that those theories suggest. First of all, the different theories we chose are not the end of it — this is a continuously moving field. There are, you know, new papers coming out regularly, suggesting other variants often related to existing theories. So we shouldn't take these as the end of the story of how consciousness works in the brain. And what the report also suggests is that those properties — the ones in these theories, or maybe related ones that may come up in the near future — are not impossible to put in AI. So it's very plausible that in coming years we would be able to build machines that compute in ways that are at least suggestive of consciousness in the human sense.

And I think this raises a lot of important questions. My take on this is that we should not build conscious machines until we know better. There are many reasons for this. In particular, whether or not we succeed in building machines that are actually conscious, if humans perceive those AI systems as conscious, that has a lot of implications that could be extremely destabilizing for society. We associate moral status with other conscious beings, and that's connected with a very strong social contract which works for humans. We have all kinds of characteristics: we have a finite lifetime, we have bounded intellectual capabilities. And these properties may not apply to AI that can reproduce — there's no limit on how many times you could copy an AI system over and over. So they might be, like, immortal; they might be much smarter than us — all sorts of things that I think would make the current interpretation of consciousness, as a moral and social notion, questionable. I think until we know better, we shouldn't do that.

There's another, more pragmatic reason why I think we shouldn't build conscious machines: because with consciousness also comes a notion of self, and even self-preservation objectives — agency. This was one of the theories that was described. And if we go down that route, this could be very dangerous from an AI safety point of view. In other words, we might be building machines that have their own goals that are not well aligned with human norms and values, in ways that could be extremely dangerous for humanity. And of course, this is a subject that's been intensely debated in the last few months, which makes this report particularly interesting. But this is one point I want to emphasize.

The last point I want to make is not really in the report, but it's connected to what we talk about in the report. It comes from some recent work out of my group just in the last few months. It's another theory of consciousness that is completely computational, related to several existing theories. But what's interesting about this one is that subjective experience, with all the attributes that we associate with it — like ineffability and subjectivity and richness and fleetingness — these properties emerge in this model as side effects of the need to perform a particular kind of computation that is important from a learning perspective. But you could potentially obtain the same computations with a different implementation that wouldn't have these side effects.
So evolution has sort of converged on this particular way of achieving particular useful computations that may give rise to our sensation of being conscious and having free will and all these things to which we attribute a little bit of a sort of magical property. And we should be careful about that instinct we have about our own perception of being conscious, in light of those results from neuroscience and AI. I'll stop here.

GRACE Great.

JEFF Thank you so much, Yoshua — and everybody, for those remarks. Very interesting. It gives us a lot to talk about. I want to flag, both for the panelists and for everybody in the audience, that a very lively discussion is already happening in the Q/A tab. And so I encourage everybody to go check out the Q/A tab and read through some of the questions and comments that have already been entered, and feel free to reply directly to each other and have those conversations. We can have one conversation here and another in the Q/A tab. I will jump right into questions that attempt to synthesize and present to you what people are asking about in the Q/A tab. I will not be able to get to everything, but please know, everybody, that we will send all the questions to the panelists after the talk, whether or not we can get to them during the presentation.

So some of the questions are descriptive; others are moral or legal or political in nature. I can start with a general descriptive question for you. As you noted in the initial presentation and some of your remarks, and as several people have noted in the comments, you focus on a particular perspective about consciousness, according to which consciousness is about computations. And so you look at scientific theories of consciousness that identify different computations, and then you search for markers related to those theories, and then you look for those markers in particular kinds of AI systems. And as some people note, that does not exhaust the space of theories and perspectives about consciousness that are plausible and popular right now. So on the more permissive end of the spectrum, as one person notes, there are, for example, panpsychist theories of consciousness and other theories that are relatively undemanding, which allow for the possibility that even quite simple systems could be conscious. Those theories might imply that lots of systems can be conscious, whether or not they have your markers. And then at the other, more restrictive end of the spectrum, you have biological theories according to which, for various reasons, you really do need to be made out of, for example, carbon-based cells and neurons in order to realize consciousness. And according to those theories, a system can hit all of your markers but still be non-conscious if that system is made out of silicon-based transistors and chips. So I wonder, on a personal level, to the panelists: what kind of credence do you have in these more permissive or more restrictive theories? Do you find them plausible? Do you find them good candidates? And how would you incorporate them into your search for AI consciousness?

ROBERT I can hop in first. Yeah. I think, of the people in the report, I'm probably on the higher end in my credence in computational functionalism of some kind — I'm maybe like 70%. But I am very compelled by arguments by people like Anil Seth and Peter Godfrey-Smith, a philosopher of biology who has written extensively on consciousness.
So I do sometimes wake up in the middle of the night wondering if computational functionalism is off on the wrong track. And I'll also just take this opportunity to say, one thing we call for in the report is more detailed work investigating the assumption of computational functionalism. We think it's, again, sufficiently plausible that it's really very important to explore its implications. But we could also get a lot more clarity on these issues if arguments for and against computational functionalism got hashed out in more detail. And I'll lastly just say I wanted to plug a really nice remark by Anil Seth that I think is exactly the kind of response we wanted, where he said: I disagree with some of the assumptions — and I'm guessing that's computational functionalism — but that's totally fine, because I might well be wrong. So we're very excited to see people exploring different parts of the space of possibilities that we could be facing with AI consciousness.

PATRICK Great.

JEFF Thank you, Rob. Yoshua?

YOSHUA Yeah, so my credence in computational functionalism is 99.99%, maybe because I'm a computer scientist, right? The whole field of computer science is founded on the idea that computations can be done on any substrate, and there's not been any counterexample to that, as far as I know. It's not just AI — it dates back to Turing and the Turing machine around the Second World War. And it's also connected to, as I said in my little pitch, everything we know from physics. So I don't see how having carbon atoms prevents computations from explaining what's going on. It's just a different kind of computation. It may not be the computations going on at the level of these artificial neurons that you typically find in deep learning — that's very possible. But it's still computation; it's just computation happening at the atomic level. What's the level of abstraction that's needed to replicate human intelligence and consciousness? Well, nobody really knows, and that's open. But as for the question of whether you need something that's non-computational — for me, if it's not computational, it's not even physical, so it's not even materialistic. And I don't see how you could buy that unless you believe in some spiritual beings explaining our consciousness.

And I also have a comment on panpsychism. How could I say it — it feels like it's completely overgeneralizing. The things we know that are conscious are human beings. And because of many similarities, we have some maybe good reasons to think that other animals may be conscious. But everything we know about human consciousness has the kinds of properties that we discuss in the report, for example, that are completely disjoint from and not applicable to just arbitrary groups of atoms, or even single electrons, or whatever crazy things you could come up with in these theories. So I'm not saying these theories are false, but they seem so far removed from the biological reality — like what happens when a person is conscious or not conscious — that, for me, they don't rate very high as scientific theories that are supposed to explain what we know about consciousness. They may feel good, again, because I think they may fit with our intuitive religious understanding of the world, but in terms of matching what we observe — the correlates of consciousness — they seem to bring pretty much zero information.

JEFF All right, great. Thank you. Grace or Patrick, did you have anything you wanted to add?
PATRICK I mean, so when you asked about credences in computational functionalism, I guess the thought is that to the extent that we're doubtful about computational functionalism, maybe we're doubtful about the value of this project to reveal whether AI systems are likely to be conscious or not, and therefore whether they're likely to have a certain special kind of moral significance or not. And for me, I think how my credences fit together is something like this: if there's such a thing as consciousness — if the concept of consciousness makes sense and is a useful one to apply beyond the human case — then I think most likely it's a computational phenomenon. I think I'd give more credence to the computational view in that situation than to non-computational views, because I think the computational views have got more promise in explaining the properties of consciousness. But what keeps me awake at night, to go back to what Rob said, is the possibility that the concept of consciousness is somehow confused, or that it doesn't make sense to apply it beyond the human case — that it's unproductive for moral theorizing, or conceptually confused in some way, to ask the question whether AI systems are conscious or not.

YOSHUA But, Patrick, I don't think these two views are incompatible. So I actually think that consciousness is computational in nature, and that it is confusing, and kind of not clear, that it's meaningful to extend that concept very far from human beings, especially regarding the moral aspects of things. That's what I mean — the social and moral aspects. I don't think these are incompatible points of view.

PATRICK Yeah, I think our views are pretty similar.

JEFF Grace, did you have anything you wanted to add?

GRACE I mean, in terms of computational functionalism? My gut is that it's largely correct — or certainly there will at least need to be a common set of computations, and then maybe there also needs to be other stuff. But on the whole, I just feel like we're several paradigm shifts away from really understanding all of this. So it's hard for me to say anything with any confidence or vigor.

JEFF Yeah, that seems like the answer about which we can be most confident. Okay, great. Thank you, everybody. I can now ask a question on the moral, legal, political side — and again, several people have asked questions along those lines as well. So I think, as you yourself said, part of why so many people are so interested in this topic is because we do associate consciousness, and then related capacities like sentience — the ability to consciously experience positively and negatively valenced states like pleasure and pain and happiness and suffering — many people associate that, reasonably in my view, with a certain kind of intrinsic moral and legal and political significance. The idea being that if you have consciousness and/or sentience, if there is something it is like to be you, and especially if it can feel good or bad to be you, then you matter for your own sake, and I should consider your interests and needs and vulnerabilities when making decisions that affect you. And that might extend to a decision about whether to create you in the first place, as well as a decision about how to treat you if and when I do create you. And so I would like to ask all of you, if you care to respond: first of all, do you associate consciousness or sentience with that kind of intrinsic moral or legal or political significance?
Do you think that when a being is conscious or sentient — or sufficiently likely to be conscious or sentient, by our lights — we should extend them a certain kind of intrinsic value and consider their potential interests, needs, and vulnerabilities when making decisions that affect them? And since we are making these decisions under uncertainty, I also wanted to ask a little bit about how you think about the risks associated with false positives and false negatives — with potential over- or under-attribution of consciousness and moral status. What are the risks associated with accidentally seeing an object as a subject, and what are the risks associated with accidentally seeing a subject as an object? And how do you weigh those when deciding how to calibrate this kind of test in practice? So, yeah: do you associate this with moral status, and how do you deal with this under uncertainty? Yoshua?

YOSHUA I'm not a philosopher, so take that from the angle of a computer scientist. But my interpretation of this question is that we are asking the wrong question. It's not whether we should attribute moral status to entities that have particular properties, like being conscious or something like that. It's that this is how we are: humans are compelled to have empathy and compassion for other types of beings, in particular other human beings, because that's something that evolution put in us, because it helped humanity to succeed and maybe become a dominant species. There are exceptions — you have sociopaths and so on — but for the most part, humans have those innate feelings. And by association, because our brain works by association, we often generalize that to other entities that look like us, mammals in particular. We also have very strong empathy for babies of other species, right? My partner would not eat meat, especially if it's coming from the baby of a species — and it's not coming from a philosophical kind of argument, it's just an innate thing that we have. I can share that feeling, maybe not as strongly as she does — I think females in general feel it more strongly.

So I think we're just asking the wrong question. And when we come to this for machines, I think it would be a huge mistake to build things that would play into our innate response mechanisms towards entities that look like us. So there was this Black Mirror episode where there are AI clones of a person in a virtual world that we feel for because they are so humanlike, even though it's just a simulation, right? We can't prevent ourselves from attributing a moral status to those virtual agents because they look human. And I think that's the reality. What we do with that, I think, is then a matter of social norms — of not breaking the way our society works with the introduction of entities that don't correspond to something we evolved for. And that's not going to hold anymore with machines that could potentially be imitating us in many ways, and maybe even have some of the attributes we put in the report. But is that really what society needs? I think that's a big question mark.

PATRICK Thank you.

JEFF Very interesting. Rob?

ROBERT Yeah, so I think my views on this are similar to Yoshua's in many ways, on the question of what the grounds of moral status are, as philosophers would say. For my own part, I'm quite confident that if an entity is sentient — that is, if it has valenced conscious experiences like pleasure and pain — that alone is sufficient for us to care about it and show concern for it. Which is why I would be excited for the project that Patrick mentioned.
I'm less clear on how to think about entities that are merely conscious, that maybe only have neutral experiences. You could imagine a future large language model, maybe, that fits more of our indicators but only had experiences of understanding, or maybe even some very abstract conscious experience that we can't even comprehend. I would obviously be extremely careful in how I treated that thing, but I'm a little bit less sure how to think about that case. And then lastly, I just wanted to flag that one very characteristic element of how we like to think about this in the report is about uncertainty and managing all of the different cases that might come up. And so I also wanted to flag that consciousness itself could be too narrow a thing to focus on, and we don't want to put all of our focus on that. There are, I think, compelling arguments that even if something is not conscious, if it has desires or goals that it wants to pursue, then that itself is something that should be respected. So I would also love to see equivalent or analogous reports on whether AI systems could have the kinds of preferences or desires that might merit consideration.

YOSHUA It's already the case, I mean, that lots of reinforcement learning agents have valence and have goals. It's no rocket science here — it already exists.

ROBERT So just very quickly on that, I'll direct you to Patrick's work — Patrick is an expert on that sort of thing. And then just one very last point, which is to reiterate what Yoshua was saying. Yoshua has recently been writing very eloquently and forcefully about risks from AI to humans, in terms of their behavior being aligned with our interests and things like that. And yeah, adding consciousness or sentience into that mix is potentially extremely dangerous, because it could morally constrain that project and also just lead us to act in certain ways that are dangerous to ourselves. So there are a lot of interesting things to say about the relationship between risks from AI and risks to AI, let's say. And it's very good that people not conflate those two questions. One kind of convergent policy proposal for both of those is that we need to just be extremely careful, slow down, think very hard about what we're doing, and have more transparency and reflection about what we're doing. I think that's something that's very important for both of those issues.

YOSHUA If I may, I'd like to articulate in two sentences the concern, from a safety point of view, that Robert just talked about. If we build machines and we start seeing attributes of consciousness, and then we just complete the picture to give them essentially all of our attributes of consciousness — in other words, they have their own goals, and in particular they have a self-preservation goal — and if those machines are smarter than us in ways sufficient to be dangerous to us, then we are in a very risky situation from the point of view of humanity losing control of its future, because there would be something like a new species of entities that may have goals that don't match ours and that may lead to harm to humans. We don't want to do that, obviously.

GRACE Great.

JEFF Thank you. Grace?

GRACE Yeah, I think there's a pragmatic answer that allows for the current high level of uncertainty, and that's: if these systems seem conscious to us, then we need to follow that logic through, even if we don't know the truth of the matter. So, yeah, if it's a very human-like system, it's natural that people are going to feel that it is conscious.
I mean, I think you could have a system that is conscious and doesn't have some of the things that you were listing, Yoshua — like the, you know, self-preservation angle or anything like that. So I don't think that necessarily, if it's a conscious system, it has those things, or that if it has those things it's conscious, or anything like that — but certainly we would feel like it is. And the question is, what are the benefits or risks to society if you tell people they have to treat this thing like it's conscious, or that they don't? So if you have something that feels conscious, looks and behaves like a human, and we tell people you can do whatever you want to this thing, it has no moral status — what is that going to lead people to do? Some people make the argument that you can use that as a kind of catharsis, where people could treat the non-sentient robot terribly and then they won't do that to humans. Other people think you might start to devalue actual conscious life if you give people things that seem conscious and tell them that they can treat them poorly. So I think that's the pragmatic answer, given the level of uncertainty.

If there was a world where we could be certain that something is conscious, even if it doesn't feel like it is to us or look like us in any way, I think then the next steps are more complicated, because it doesn't just slot into "okay, it has moral status the same as a human now." Because a lot of the things that we associate with something having moral status, and with how we treat a being with moral status — it's about being humane to them, it's about treating them the way that humans would want to be treated. And they might have completely different things that need to be done or not done to them to be treated morally — a completely different type of consciousness and intelligence, potentially. So even if we can say with certainty that an artificial system is conscious, I don't know if we know very clearly what follows from that. Even if we agree that we're going to treat it as a moral agent, I don't know if we know clearly what follows from that.

JEFF Yeah, great point. This is a lesson that we have learned, often the hard way, over the past several decades on the animal minds and animal ethics side of things. And I think we need to relearn or remember those lessons on the AI minds and AI ethics side of things. And I appreciate everybody for articulating that here: there might be broad similarities between the minds of biological and non-biological beings, but a lot of the details might be different. Even if there is some kind of valenced subjective experience, the actual interests and needs are going to be very different. The levels of intelligence and power are potentially going to be very different. It might disrupt expectations we have about what it takes to have a moral relationship or a legal or political relationship with someone. So it might be that in some broad, thin sense the concepts extend, but in any kind of more detailed or thicker sense, we have to rethink everything.

PATRICK Right.

JEFF Okay, we have about five minutes left, and again, tons of questions and comments — we will not be able to even scratch the surface. But I can ask one more detailed question about your discussion of global workspace theory, which came up several times in the comments. Several people asked a question of the form: you mentioned that large language models do not have the relevant kind of feedback loop at the kind of transistor level. But what about at other levels of explanation?
What about, for example, the actual application of the models and how they draw from their own past responses when making predictions? Is there a kind of feedback loop happening there that might be relevant for global workspace theory? There were a few questions of that form, so it'd be great if somebody could address that.

ROBERT Yeah, I'll say something very quick and then I'm going to pass the baton to Patrick, just as a heads up. So, a quick clarification: it's not actually about the transistor level, it's about the level of the virtual neurons. It's in that sense that it's feedforward. And then one thing — I haven't actually looked at those questions, but there was an interesting discussion on Twitter where, yeah, you might think that the place you get the feedback loops is the fact that the model will output a word and then look back over the entire string and then output the next word. So you could argue that it's using the whole text output as a kind of global workspace. And if you're interested in the extended mind, you could maybe make an argument that that's a kind of global workspace. That said, I think there are challenges to that view, which I will punt to Patrick.

PATRICK Yeah, it just seems to me that any system which interacts with an environment — in the sense that its outputs influence the environment and therefore have a knock-on effect on its subsequent inputs — is one in which there's a kind of recurrent causal loop connecting the system itself with its environment. And it seems as though if we allow that to be the kind of loop which is described in the global workspace theory, then we're just giving an uncharitable interpretation of the global workspace theory, because that's not what they intend. Instead, the thought in the global workspace theory is that there's an internal recurrent loop within the system, between the modules and the global workspace. But although Rob has suggested that I'm the best person to answer this question, really the most qualified person here to answer it by far is Yoshua, because he understands both the global workspace theory and the AI systems much better than Rob or me.

YOSHUA I agree with your answers. So, I have a machine learning interpretation of the bottleneck in the global workspace theory, and it allows for forcing particular kinds of dependencies and abstractions to emerge — very sparse dependencies that involve very few variables. It forces that to emerge because of the bottleneck at the internal level. If you were to consider the output words of a transformer as the bottleneck, it doesn't really work, for a number of reasons, because this is what it's outputting. It's as if you were forced to say everything that comes into your working memory, and also to assume that it could be expressed as words, which is not completely obvious. So it's really a different schema, as you say. The fact that it's an internal bottleneck makes a whole difference, and the actions that are taken are not just a copy of that bottleneck — they might be whatever is appropriate in the context. You might be lying, for example, or you might realize that your thought has something incoherent in it and you might want to say something different.
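To make the distinction being drawn here concrete, a rough sketch — with entirely made-up stand-in functions, not any real model — of the two kinds of loop: re-reading your own output text is a loop through the environment, whereas a workspace-style loop carries an internal state that is never forced out as words.

```python
# Illustrative contrast between an external loop (re-reading emitted text) and an
# internal recurrent state. Both "models" are toy stand-ins, not real systems.
import numpy as np

rng = np.random.default_rng(1)
DIM = 8

def feedforward_step(tokens):
    """Stateless pass: anything 'remembered' must already be in the visible text."""
    return int(sum(tokens)) % 50                 # fake next-token choice

def external_loop(prompt, n_steps=5):
    tokens = list(prompt)
    for _ in range(n_steps):
        tokens.append(feedforward_step(tokens))  # output goes into the environment,
    return tokens                                # and is simply read back next step

def internal_loop(prompt, n_steps=5):
    tokens = list(prompt)
    workspace = np.zeros(DIM)                    # hidden state, never emitted as text
    for _ in range(n_steps):
        workspace = np.tanh(workspace + rng.normal(size=DIM))  # internal recurrence
        tokens.append(int(abs(workspace.sum()) * 10) % 50)
    return tokens

print(external_loop([1, 2, 3]))
print(internal_loop([1, 2, 3]))
```

In the external version the entire carried-over "state" is the text itself — exactly the forced-verbalization worry raised above — while in the internal version the state can be richer than anything that gets said.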
You wouldn't have that if you interact with ChatGPT — although people are actually trying to design things like this, that are closer to an internal train of thought, with transformers and with ChatGPT in particular, to try to emulate some of the properties of the workspace to help with reasoning. So there is movement in that direction, but it's not really the same.

PATRICK Okay, great. So I was just going to say, one response which we sometimes get when we say a system like a transformer-based large language model doesn't meet the criteria is that people are often quick to respond by saying, well, you could change the system in such and such a way and then maybe it would meet the criteria. And we don't disagree with that at all. We think that there are relatively clear steps which could be taken, using existing AI techniques, to build systems which would meet more of the indicator properties than the ones which exist at the moment.

YOSHUA Yeah, I would add that there are other properties that are not really discussed in the global workspace theory which would be missing, in my opinion, especially about subjective experience. So the global workspace theory doesn't explain subjective experience, at least not all the properties that are associated with it. I mentioned earlier things like ineffability — the fact that we are conscious of something richer than what we are able to express with words, at least in a limited number of words. And that's something you don't get with transformers, especially if you put the bottleneck at the output. You might get something like this if you suddenly had a huge hidden layer in the middle somewhere that could play that role — that is possible.

ROBERT Yeah.

YOSHUA There are also other properties of conscious thought that are not expressed in transformers as they are now. For example, attention in transformers is what we call soft attention — actually something invented in my lab in 2014. And it's not at all like the kind of attention that makes a hard decision, usually somewhat stochastic, about what we're going to attend to next, either in the perceptual domain or in something about our interpretation, our thoughts, our memories. And that is very different in nature from the kind of attention that is currently working well in AI — which doesn't mean it won't be in future systems, right? I'm just saying it's not present currently.

JEFF Thank you for that exchange. We are a little bit over time now. So, Grace, I'm going to give you the last word, and then we can wrap up, if you still have a comment.

GRACE Yeah, I just wanted to make a quick point about this idea of there kind of being this external recurrence because you can resample the environment that you impacted. I think if you're looking at the architecture of the model, that is a pretty big difference from there being internal recurrence. But if you take the perspective of, like, a naive neuroscientist who was trying to understand this system, and they only had access to the activity of the neurons over time — which is what happens a lot in neuroscience — you might think that there is internal recurrence, because there would be correlations between the activity of neurons over time and that kind of thing, or at least in the information represented in the system over time. And so on some fuzzier, more abstract level, maybe it does look like there's recurrence — but we actually know the architecture that generates it.
And if you're subscribing to theories of consciousness where the architecture that generates it matters, then it's a different outcome.

JEFF Okay, great. Thank you very much, Grace. Okay, this is the time now to thank everybody again for taking the time to join us, tell us about your report, and answer some of our questions. Thanks also, again, to everybody in the audience for showing up. We had really amazing attendance and a fantastic conversation happening, and apologies for not being able to get to even all of the general topics of the questions, to say nothing of the specific questions. But it really was a great conversation, and we will share all the questions and comments and exchanges with the panelists following the talk. Again, yes, thank you to everybody. This is obviously the beginning of a much longer conversation about various tests for conscious and sentient AI systems and what follows for their moral, legal, and political significance. So really looking forward to having those conversations, and grateful to everybody for participating in them.

Just a note to everybody that you can find a link to the report in the chat, so please do check out the report. You can also find a link to the Mind, Ethics, and Policy program. You can sign up to our email list for future events. We will be having, in early October, a talk by Peter Godfrey-Smith, a philosopher who is more skeptical about AI consciousness and will explain his skepticism to us. So do sign up for that email list if you want to keep having this conversation with us. And with that, I hope everybody has had a great start to your fall, a great start to your semester. I have to go teach class now, so I will sign off. But thank you again to everybody, and have a great rest of the night. And thanks again to our co-sponsors as well — Bioethics, and Mind, Brain, and Consciousness. Have a good night, everybody.

ROBERT Thanks so much, Jeff. All right, I think I'm going to go. Yeah. Goodbye, everyone.

GRACE Bye, Rob. It's just us. Bye.

1. **[^](#fnreff1i8seo95u9)** And either now or soon, other podcasting platforms (need to figure out the Spotify video podcast system).
2. **[^](#fnrefs3300mxs5q)** Made with [assembly.ai](https://www.assemblyai.com/playground) + a few manual tweaks and paragraph breaks added by me. Surely imperfect. Feel free to comment/message with mistakes.