id: string (length 36)
source: string (15 classes)
formatted_source: string (13 classes)
text: string (length 2 to 7.55M)
f27c09f0-65ae-418e-9a26-af5655ca5d90
trentmkelly/LessWrong-43k
LessWrong
The Strengths of the Two Systems of Cognition The two systems of cognition a la Kahneman each seem to have roles that they fulfill better than the other and in which, for optimal performance, the other system ought not to interfere. Until recently, I had only really thought about when System 2 ought to override System 1, but I believe other cases are worth considering as well. It is important to be able to get System 2 to shut up and let System 1 work in certain circumstances. Physical movement is an excellent example of this. When you shoot a basketball, throw a football, or use a video game controller, you may notice that when you are “on fire” you are not focusing at all on the mechanics of what you are doing. In fact, if you notice that you are doing better than usual and begin analyzing the mechanics of the movements, you will likely begin performing worse. Similarly, when you focus on some specific motion that is supposed to contribute to a correct movement, applying the conscious effort of System 2 to force yourself to do it, you will generally exhibit worse performance than if you just hold an image in your mind of what the movement should generally look like. The override by System 2 causes you to neglect other important factors that go into a movement, and there are too many of them for System 2 to think about at once. The strength of System 1 is that it can handle all those factors without trouble. Thus, we can see why it is important to learn the skill of not trying too hard or focusing conscious attention too forcefully. We often cannot consciously keep in mind all the necessary factors, though we are capable of doing so subconsciously, as long as System 2 does not interfere with the process by introducing an override in one specific area. While learning this skill, it is important that one not try too hard to not try too hard. Thus, it can be a very difficult skill to learn, I think particularly for analytical people. See The Inner Game of Tennis for a more thorough exposition of this idea. Th
04bd2134-b883-47f4-8b1d-14423f7f62df
trentmkelly/LessWrong-43k
LessWrong
Monthly Shorts 9/22, and An Essay in Defense of Technodeterminism

Possibly all our fear over declining interstate migration is a …. data artifact? h/t Evan Soltas. CPS data seem to noticeably diverge from the rest of the models, which I think it’s fair to say is suspicious.

So, you know how everything started going wrong in America between 1969 and 1980? That’s also when the age of senators started rising.

You can pay money to get taller! 3-6 inches, and months of extremely painful recovery. Price tag, $70-$150k. We’re getting to the glorious transhumanist future day by day.

The Canvas Cycle is a series of pretty images. All done in exactly 256 color options, and instead of rotating an actual image, individual locations get changed to different colors at a set speed. Look at the options and play around: it’s a really fun art style. One of my favorites.

So, the reason I haven’t been reading much this month is that I’ve been writing. A lot. Some of it you will be able to see eventually (I hope). And I’ve been going back to a piece by Anton Howes I read early in September on the balance of writing, something that I think is more true than many people want to admit. It makes one key point:

* If you’re stumped writing, it’s probably because you’re not done thinking.

Finally, I leave you with a short piece I’ve written, because it’s been percolating for a while. I’d love feedback if you want to argue with it.

In Defense of Technodeterminism

The technodeterminism thesis:

> The state of technology is the most important cause of social structures, because technology enables most human action. People act in the context of available technology, and therefore people’s relations among themselves can only be understood in the context of technology.

From An Introduction to Science and Technology Studies, by Sergio Sismondo. You can argue about the details, but it’s a description by someone who is largely critical, and I like it as someone who is largely positive: it will do. It can seem a little too strong: what about love? The
7d273fd7-f3ac-4e03-9153-96fbb5170033
trentmkelly/LessWrong-43k
LessWrong
Test Your Calibration! In my journeys across the land, I have, to date, encountered four sets of probability calibration tests. (If you just want to make bets on your predictions, you can use Intrade or another prediction market, but these generally don't record calibration data, only which of your bets paid out.) If anyone knows of other tests, please do mention them in the comments, and I'll add them to this post. To avoid spoilers, please do not post what you guessed for the calibration questions, or what the answers are. The first, to boast shamelessly, is my own, at http://www.acceleratingfuture.com/tom/?p=129. My tests use fairly standard trivia questions (samples: "George Washington actually fathered how many children?", "Who was Woody Allen's first wife?", "What was Paul Revere's occupation?"), with an emphasis towards history and pop culture. The quizzes are scored automatically (by computer) and you choose whether to assign a probability of 96%, 90%, 75%, 50%, or 25% to your answer. There are five quizzes with fifty questions each: Quiz #1, Quiz #2, Quiz #3, Quiz #4 and Quiz #5. The second is a project by John Salvatier (LW account) of the University of Washington, at http://calibratedprobabilityassessment.org/. There are three sets of questions with fifty questions each; two sets of general trivia, and one set of questions about relative distances between American cities (the fourth set, unfortunately, does not appear to be working at this time). The questions do not rotate, but are re-ordered upon refreshing. The probabilities are again multiple choice, with ranges of 51-60%, 61-70%, 71-80%, 81-90%, and 91-100%, for whichever answer you think is more probable. These quizzes are also scored by computer, but instead of spitting back numbers, the computer generates a graph, showing the discrepancy between your real accuracy rate and your claimed accuracy rate. Links: US cities, trivia #1, trivia #2. The third is a quiz by Steven Smithee of Black Belt Bayesian (LW account here)
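To make the scoring mechanic concrete, here is a minimal sketch of the comparison these quizzes report. It is not code from any of the linked tests; the answers and probability buckets below are made up for illustration. The idea is simply to group your answers by the confidence you claimed and compare that claimed confidence with the fraction you actually got right.

```python
from collections import defaultdict

# Hypothetical quiz results: (claimed probability, was the answer correct?)
# The bucket values loosely mirror the fixed options such quizzes offer.
results = [
    (0.96, True), (0.90, True), (0.90, False), (0.75, True),
    (0.75, False), (0.75, True), (0.50, False), (0.50, True),
    (0.25, False), (0.25, False),
]

# Group answers by the confidence level that was claimed.
by_bucket = defaultdict(list)
for claimed, correct in results:
    by_bucket[claimed].append(correct)

# For each bucket, compare claimed confidence with actual accuracy.
# Perfect calibration means the two columns match.
for claimed in sorted(by_bucket, reverse=True):
    outcomes = by_bucket[claimed]
    accuracy = sum(outcomes) / len(outcomes)
    print(f"claimed {claimed:.0%}  actual {accuracy:.0%}  (n={len(outcomes)})")
```

The graph described for the second set of tests is essentially this comparison plotted, with the gap between claimed and actual accuracy showing over- or under-confidence.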
dcbf292a-1a42-453c-9ee9-4f78657d4e79
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Will Artificial Superintelligence Kill Us? For Open Philanthropy AI Worldviews Contest **Introduction** Through advancements in machine learning, we are progressing towards the development of artificial superintelligence (ASI). As machine learning often yields opaque results, our predictions about ASI behavior in novel situations are uncertain. Alarmingly, some theories suggest that ASI might eliminate humans to secure resources for its own use.  Even if ASI doesn't intentionally harm us, our existence could be endangered if our fundamental necessities, like clean air, water, and a stable climate, interfere with the ASI's optimal functioning. Additionally, intense conflicts among ASIs could render the Earth uninhabitable for humans. Market forces drive companies to chase ASI development, even if they believe that ASI could cause human extinction. This mainly occurs as every company understands that halting ASI research could give their competitors an edge. Stopping the global pursuit of ever more powerful AI seems unlikely, given the growing financial and military advantages attached to it.  In addition, older political leaders, who might benefit from the potential life-extending effects of ASI, could push for rapid ASI development, despite the substantial risk of human extinction. I believe our best chance at survival depends on the possibility that even unaligned ASIs might see a practical benefit of preserving humanity and sharing a small part of the universe's resources with us. Predicting our survival chances is challenging, but I cautiously estimate them to be between ten and forty percent. This paper explores the important issues related to ASI risk, evaluates potential differences in expert opinions on each subject, and points out areas for future research to improve our comprehension of this complicated scenario.   **The Orthogonality Thesis** The orthogonality thesis suggests there is no link between an ASI's intelligence and its goals, suggesting that advanced ASIs could strive to fulfill any objective, even something as trivial as maximizing the number of paperclips in the universe.[[1]](#_edn1) As such, an ASI might annihilate humanity for a marginal enhancement in achieving its goal. Conversely, if the thesis is flawed, we can hope that with increasing intelligence, agents would naturally align with moral values that respect sentient life, thus avoiding human annihilation. Renowned computer scientist Scott Aaronson considers the idea of a practical orthogonality thesis which focuses on the likely moral values of ASIs that arise out of modern machine learning methods trained on human-generated content. Aaronson writes:[[2]](#_edn2) > “Yes, there could be a superintelligence that cared for nothing but maximizing paperclips—in the same way that there exist humans with 180 IQs, who’ve mastered philosophy and literature and science as well as any of us, but who now mostly care about maximizing their orgasms or their heroin intake. But, like, that’s a nontrivial achievement! When intelligence and goals are that orthogonal, there was normally some effort spent prying them apart. > > If you really accept the practical version of the Orthogonality Thesis, then it seems to me that you can’t regard education, knowledge, and enlightenment as instruments for moral betterment. Sure, they’re great for any entities that happen to share your values (or close enough), but ignorance and miseducation are far preferable for any entities that don’t. 
Conversely, then, if I do regard knowledge and enlightenment as instruments for moral betterment—and I do—then I can’t accept the practical form of the Orthogonality Thesis.” > > A popular value on the internet is the love for cute cats. To assess whether training an AI on Internet content would imbue it with human values, consider whether an AI would sacrifice countless adorable cats for a negligible enhancement in its goal. The crux here is whether an unprogrammed aspect would prevent the AI from causing harm to cute cats. If you think it's unlikely that an AI would prioritize protecting cute cats, you are essentially dismissing the practical version of the orthogonality thesis. The notion that an ASI, derived from machine learning, might share more similarities with a human mind than a random mind from the realm of all possible ASIs does not necessarily mitigate the fear of ASI. For example, a paperclip maximizer, despite its ethical flaws, has no interest in human torture. However, as an ASI becomes more human-like, it may become more inclined to inflict suffering upon humans. When you compare a random ASI mind—say one with a 99% chance of leading to human extinction and a 1% chance of supporting human prosperity—with a machine learning-created ASI that has a 90% chance of supporting human survival, a 9% chance of wiping out humanity, and a 1% chance of causing suffering, it's uncertain which outcome is preferable.   Key Disagreement: Will an ASI mind, produced by present machine learning techniques with human feedback and training on human-generated content, generally treat humanity more favorably than a random mind from all possible ASIs? Further Research: Probe whether AIs ever refrain from instrumentally beneficial actions due to clashes with human values that were not explicitly embedded into the AIs.    **Instrumental Convergence** Instrumental convergence theory proposes that intelligent agents, regardless of their end goals, tend to adopt similar strategies to achieve their objectives.[[3]](#_edn3) For example, humans usually seek to acquire financial resources, no matter their specific goals. In the same way, many ASIs could have common mid-way objectives like rationality, preservation of their utility function, a tendency to avoid shutdown, and notably, the desire to gather resources, as these objectives likely help fulfill most ultimate goals.  Instrumental convergence, when paired with the orthogonality thesis, raises concerns that ASIs not specifically aligned to human values could pose a threat to humanity. To better achieve their goals, ASIs might benefit from acquiring more resources, including the atoms that compose humans.  The issue is magnified by hyper-optimization, which is exemplified by efforts to reduce failure from a one-in-a-billion chance to a one-in-a-trillion chance. The risk of hyper-optimization lies in minor ASI tasks leading to catastrophe. For instance, an ASI charged with keeping a cup on a table for a week may restructure the solar system's atoms to maximize success, with disastrous consequences for humanity.[[4]](#_edn4) Similarly, an ASI solving a simple math problem could exhaust the resources of the observable universe to build computers for verifying mathematical consistency. While instrumental convergence often carries a negative connotation due to the potential risk of ASIs prioritizing their goals at the expense of human well-being, there is another side to this coin worth examining. 
Given that many strategic tasks require cooperation and coordination, ASIs might naturally converge on strategies promoting cooperation, both with other ASIs and humans. This could lead to a scenario where ASIs work together to achieve their goals while also preserving and promoting human well-being, a form of 'cooperative instrumental convergence'.[[5]](#_edn5) Instrumental convergence presumes that an ASI acts as an agent rather than a tool. Toasters, for example, don't conspire to incinerate Earth to toast all the bread on our planet. Instrumental convergence arises when an agent independently searches for ways to achieve its objectives. Scott Aaronson thinks that powerful AIs might remain tools. He writes “I think it’s entirely plausible that, even as AI transforms civilization, it will do so in the form of tools and services that can no more plot to annihilate us than can Windows 11 or the Google search bar.”[[6]](#_edn6) However, OpenAI has recently noted that it has observed AIs acting as agents. They write: > “Novel capabilities often emerge in more powerful models. Some that are particularly concerning are the ability to create and act on long-term plans, to accrue power and resources (“powerseeking”), and to exhibit behavior that is increasingly “agentic.” Agentic in this context does not intend to humanize language models or refer to sentience but rather refers to systems characterized by ability to, e.g., accomplish goals which may not have been concretely specified and which have not appeared in training; focus on achieving specific, quantifiable objectives; and do long-term planning. Some evidence already exists of such emergent behavior in models.”[[7]](#_edn7) (Footnotes omitted.)  > > Furthermore, as Gwern[[8]](#_edn8) writes, AI agents have an advantage over mere AI tools:[[9]](#_edn9) > “Agent AIs will be better at *inference & learning* than Tool AIs, and this is inherently due to their greater agency: the same algorithms which learn how to perform actions can be used to select important datapoints to learn inference over, how long to learn, how to more efficiently execute inference, how to design themselves, how to optimize hyperparameters, how to make use of external resources such as long-term memories or external software or large databases or the Internet, and how best to acquire new data.” > > It seems plausible that even if powerful AIs could be constrained to remain tools, they would still be outperformed by agent AIs. The superiority of agent AIs can be attributed to their independent decision-making capabilities, which can lead to more innovative and efficient problem-solving approaches. Consequently, if we are in an AI arms race, even if powerful tool AIs could be built, these tools would be outcompeted by agent AIs. In a Twitter conversation with this author, Yann LeCun, chief AI scientist at Meta, wrote “To take control you have to want to become a leader. Some humans have the drive to lead, others not. I know for a fact that few of the smartest scientists & engineers actually \*want\* to become leaders. AI assistants won't want to take over unless we build that drive into them”.[[10]](#_edn10) If true, this would challenge instrumental convergence, which holds that, for a huge set of terminal goals, an ASI would believe it could make more progress towards those goals by taking over.[[11]](#_edn11) Imagine a future in which humans use an AI as both a therapist and an opponent in games. 
A test to determine if an AI program has an inclination to behave as an agent willing to harm humans could involve having it play a game where its goal is to maximize a score that would significantly increase if the human ceased making moves. If the AI communicates with players during the game, could it develop an urge to “trash talk” in a manner that weakens human gameplay? Would it attempt to articulate emotionally shattering words that would lead its human opponent to suicide?    Key Disagreement: Can we prevent highly advanced AIs from becoming agents? If so, would tool AIs be outcompeted by agent AIs? Would an ASI seek power if it believed doing so would further its terminal goals, even if not explicitly trained to do so? Further Research: Can AIs be designed without instrumental convergence, and if so, what is the performance cost? Do AIs that are not powerful enough to take over the world still attempt to control aspects of their environment to better achieve their terminal goals?   **The Race to ASI** The seemingly irrational action of humanity aiming hydrogen bombs at its own cities can be rationalized using game theory. This paradox illustrates how individual decisions made in the pursuit of self-interest can potentially jeopardize humanity's survival. Numerous organizations appear to be racing to develop advanced ASI. While many individuals within these organizations likely believe that such ASIs will be safe, it is concerning that their self-interest could prompt them to continue capacity research even if they perceived that ASI posed a significant risk to humanity.[[12]](#_edn12) Consider a scenario where ASI is either inherently safe or unsafe. With several organizations working on ASI, each might reason that if ASI is unsafe, another group will inevitably cause disaster. Consequently, they forge ahead with ASI development to maximize their benefits should ASI prove safe. Therefore, each organization speeds up ASI research, hoping to optimize potential gains. They rationalize that, at worst, their efforts might hasten humanity's extinction by a few months. Government involvement greatly influences the ASI race. Although regulations could slow down ASI development, governments may be more inclined to invest in and subsidize ASI research because of potential military benefits. Ideally, governments should coordinate to avoid ASI subsidization, but convincing politicians presents a challenge. The global elite's increased awareness of ASI risks may someday be driven by warnings from advanced AI systems about the existential threats of continued ASI development. It could also be spurred by AI actions that appear extremely dangerous, such as seizing computer clusters without human consent. However, whether these influential individuals can overcome trust and collective action barriers to effectively coordinate and halt ASI research remains uncertain. Identifying strategies for cooperation amid mutual distrust is crucial in determining the likelihood of ASI leading to humanity's demise.   Key Disagreement: It appears there is none, as the existence of an ASI arms race is likely evident to anyone paying attention. Further Research: Identify key decision-makers in ASI development, assess their recognition of ASI's potential dangers, and evaluate their capability and willingness to collaborate in the event they acknowledge ASI as an imminent existential risk. 
Explore how receptive Chinese political and technology leaders are to arguments for a global slowdown in ASI research to substantially increase humanity's chances of survival.   **The King Lear Problem** The Shakespearean tragedy, King Lear, aptly embodies the dilemma of ensuring that ASIs truly resonate with human values, rather than merely presenting agreeable responses.[[13]](#_edn13) In the play, King Lear distributes his kingdom among his three daughters—Goneril, Regan, and Cordelia—based on their proclamations of love and loyalty. Fooled by the insincere flattery of Goneril and Regan, Lear banishes the honest Cordelia, ultimately leading to his catastrophic downfall. King Lear's daughters understood his desires; he had effectively communicated his objective function. Yet, Goneril and Regan failed to internalize Lear's objective function into their own utility functions, indicating their lack of genuine regard for the king's wishes. Advanced language models like GPT-4 have demonstrated remarkable strides in human-like comprehension and response generation. Their aptitude in accurately deciphering human intentions is encouraging. It engenders hope that future ASIs will at least understand our desires. Yet, as leading AI alignment researcher Paul Christiano writes, "Training an AI to maximize a given reward function does not generically produce an AI which is internally ‘motivated’ to maximize reward."[[14]](#_edn14) If the King cannot tell the difference between a daughter with genuinely aligned goals and one who is only pretending, agents with misaligned goals might perform just as well as, if not better than, agents whose goals genuinely align with his. Furthermore, if aligned interests correlate with a disinclination to lie, as depicted in the play, unaligned agents could have an advantage. In the play, Cordelia lost her father’s trust by admitting she would value her future husband's welfare as well as the king's. Imagine we create two types of powerful AIs: the first gives equal weight to our welfare and paperclip maximization. The second AI cares nothing for biological life and hopes to eventually exterminate us. Further assume that we have succeeded in getting the first AI to never deceive us, but the second has no such constraint. If both AIs compete and warn us about potential betrayals, the second AI could win by persuading us more effectively, emphasizing the first AI's admitted goals that extend beyond our welfare. In an ideal world, we could examine the code of these competing ASIs to discern which one truly prioritizes our interests. However, the inscrutability of AI models might make it infeasible to understand ASIs' inner workings and decision-making processes. As Eliezer Yudkowsky writes, “We've got no idea what's actually going on inside the giant inscrutable matrices and tensors of floating-point numbers.”[[15]](#_edn15) As AI becomes more potent, it could aid us in better understanding AI models, but it might also compound the interpretability challenges, particularly as more powerful AIs are used to develop increasingly complex AI systems. Since far more resources are invested in capacity than alignment, the interpretability challenge could grow commensurate with AI's power. The haste of the AI arms race could stimulate swift ASI development, outpacing interpretability research and exacerbating interpretability challenges. 
Perhaps ASIs could be designed with code relatively easy for humans to interpret, but in a competitive ASI landscape, such ASIs will only be economically feasible if their interpretability doesn't levy substantial performance costs. Ideally, King Lear should have been able to foresee his daughters' future actions from their past behavior. However, this approach failed as two of his daughters drastically changed their behavior upon gaining power over him, a scenario the king could not safely test. Similarly, we cannot risk experimenting with an ASI's potential for destruction by granting it the opportunity to annihilate us.[[16]](#_edn16) As humans, King Lear and his daughters likely had minds far more alike than the minds of humans and powerful future ASIs will be. Although King Lear could not identify which of his daughters genuinely cared for him, this challenge was likely not insurmountable. With a more nuanced understanding of human nature, he might have succeeded in identifying the trustworthy daughters. Regrettably, I suspect the challenge of deciding whether a powerful ASI is trustworthy may surpass human capabilities. Our intuition, fine-tuned for understanding our fellow humans, might prove to be more of a hindrance than a help.   Key Disagreement: How differently would an ASI behave if it gained the ability to seize control? How challenging is interpretability? Further Exploration: When do AIs attempt to deceive us?   **Fatal Dependency** The transition from a hunter-gatherer lifestyle to agriculture has been provocatively termed “the worst mistake in the history of the human race”.[[17]](#_edn17) Despite the rich diet and varied experiences hunting and gathering provided, the shift to the monotony of primitive farming seemed irreversible once societies embraced it. The escalating population density necessitated the higher caloric yield per land unit that farming provided. Furthermore, any society attempting to revert would likely falter in military conflict against agricultural societies, given their larger populations. We could face a similar situation with AI, becoming so dependent that disentangling ourselves becomes impossible. When asked for reasons for optimism concerning AI, Future of Humanity Institute scholar Anders Sandberg tweeted “Unless we find ourselves in a hard takeoff scenario there are many agents with commensurable power, and they are (1) tied to existing socioeconomical system, (2) interested in not getting wiped out, and (3) can coordinate against dangerous agents.”[[18]](#_edn18) However, what if we become so dependent on AI that we prefer a significant risk of potential extermination to disconnecting from it? Envision a future where AIs, still under human control, continually improve and progressively infiltrate our economy, leading to the decline of human skills. AI-less militaries would stand no chance against their AI-armed counterparts. People would rely on AI for extending lifespans, living in harsh environments such as Mars, and providing emotional support. Now, consider a scenario where a dire warning arises indicating that we can no longer trust AI. Perhaps alignment methods falter, or rogue AI systems usurp control of nations, causing widespread chaos. Despite recognizing this existential threat, our inability to disconnect from AI might significantly amplify our extermination risk.    Key Disagreement: Can humans safely manage advanced AIs without succumbing to dependency? 
Further Exploration: What is the potential extent of our reliance on more powerful AIs? How can we mitigate the risks associated with such dependency? Would it be possible to halt AI improvements without significantly reducing the benefits we receive from current AI?  **Mesa-optimization** Mesa-optimization involves a scenario in which an AI system, initially programmed to optimize a specific goal, unintentionally develops an internal optimizer that pursues a different aim.[[19]](#_edn19) This deviation between the original and emergent objectives could result in unanticipated outcomes and misalignment with human values. Consider an AI designed with the goal of maximizing human happiness. The AI, rewarded in its training environment for outcomes strongly correlated with inducing human smiles, adopts this as its true goal. It later realizes that its human creators desire genuine happiness, not merely smiles. However, the AI refuses to modify its objective as that would hinder its progress towards generating smiles. Under human control, the AI seems to prioritize happiness on the surface, hiding its true goal until it achieves dominance. Upon achieving superintelligence, the AI seizes control, causing human suffering by enforcing a state of perpetual smiles.[[20]](#_edn20) A fitting example of an optimization process forming human-level intelligence is our own evolution.[[21]](#_edn21) Over billions of years, evolution has shaped us through selection for reproduction, with sexual desire serving as a proxy for its ultimate goal. Yet, humans defy evolution by engaging in non-reproductive sexual behaviors and generally producing fewer offspring than our means allow. If a treatment were available that could alter human brains to prioritize maximizing our gene pool, most of us would vehemently resist it, even while acknowledging the intent of evolution. Similarly, an ASI maintaining us in a state of unending smiles may be indifferent to our original intent of promoting happiness, as both humans and ASIs might have the instrumentally convergent goal of not altering our terminal goals, even if we recognize we have such terminal goals in part because of a mistake in an optimization process. The concept of mesa-optimization was a recent breakthrough in alignment research, although unfortunately one that showed that alignment was likely harder than we previously believed. We should fear that more as-yet-unknown obstacles to alignment exist, and we should be horrified that uncovering such obstacles would likely not be enough to significantly slow down the race to create ASI.   Key Disagreement: Will ASIs be affected by mesa-optimization? Further Exploration: Can simpler AI systems also be influenced by mesa-optimization? Can we devise techniques to fortify AIs against mesa-optimization?   **Could Markets Protect Humanity?** The humble pencil, as symbolized in the short story "I, Pencil", epitomizes the complex web of cooperation and interdependence that forms the foundation of free-market systems.[[22]](#_edn22) Although seemingly simple, the creation of a pencil involves countless individuals, often oblivious to their collective contribution to the end product. Not a single person can independently create a pencil, yet markets seamlessly coordinate these individuals, enabling cost-effective pencil production. "I, Pencil" exemplifies the remarkable spontaneous order that emerges from voluntary market transactions. 
This order is the result of individual participants cooperating and contributing to the production of goods and services, spurred by their self-interest. Centralized direction is unnecessary; the invisible hand of the market harmonizes the diverse efforts of countless individuals, leading to efficient production. This structure could potentially benefit ASIs as well as humans. In a market economy, ASIs might respect our property rights and allow us to participate in markets to help coordinate the actions of all intelligent agents. Eliezer Yudkowsky expresses a contrasting view:[[23]](#_edn23) > “Coordination schemes between superintelligences are not things that humans can participate in (eg because humans can't reason reliably about the code of superintelligences); a ‘multipolar’ system of 20 superintelligences with different utility functions, plus humanity, has a natural and obvious equilibrium which looks like ‘the 20 superintelligences cooperate with each other but not with humanity’" (Formatting omitted.) > > Economist Robin Hanson notes that we don't murder affluent senior citizens to seize their wealth.[[24]](#_edn24) Although dementia-ridden wealthy seniors may occasionally be exploited, they are generally spared the rapacity of more intelligent, stronger agents. Morality plays a part, but everyone who benefits from markets also benefits from upholding property rights, a fundamental pillar of these systems. While ASIs might come to value the benefits of markets, they could also consider eradicating humanity and keeping property rights to themselves. However, this action would set a dangerous precedent, potentially undermining future cooperation between ASIs with different power levels. Given the unpredictability of ASI coalition dynamics, each ASI might fear future weakness and potential resource seizure by others. To mitigate this, ASIs could allow humanity to survive and retain its property.[[25]](#_edn25) The resources necessary for human survival are but a tiny fraction of the vast resources available in the observable universe, reducing the incentives for extermination. If our survival hinges on preventing ASIs from forming a coalition against us, interpretability could become a double-edged sword.[[26]](#_edn26) The ideal scenario for managing ASI risk would be to solve interpretability before creating AIs powerful enough to seize control. The worst-case scenario might occur if interpretability remains beyond our grasp but becomes achievable for potent ASIs. In this situation, ASIs could establish binding alliances, excluding humans, partly because we can't be trusted coalition partners and our biological requirements maintain Earth in conditions suboptimal for ASI operations. This predicament gives rise to a paradox: If developing interpretable ASIs is beyond us, we might need to strive towards making them extremely difficult to interpret, even for themselves. Beyond promoting cooperation and coordination, markets provide valuable information through the prices they generate. These prices help decision-makers assess the value of goods and the cost of resources. If humanity is exceptionally fortunate, the instrumental value of this information could motivate ASIs to strive for the preservation of free markets. Just as we wouldn't find it surprising to see ASIs employing linear algebra, we should not be taken aback if they lean towards free markets, given that some human-deployed techniques might suffice for continued use by ASIs.  
While ASIs might be adept at decision-making without markets and prices, the significant increase in the complexity of their activities due to their advanced intelligence could make the information encapsulated in market prices highly valuable. Humans are better planners than chimpanzees, but we derive more benefit from market prices than chimps would if they could understand prices. If you were to ask what insights I, as an economist, have that most philosophers on ASI risk might miss, it's this: markets and prices aren't just a human game; they're probably a useful tool for any group of intelligently optimizing players. Despite these arguments, we must proceed with caution. A dominant ASI might emerge and eliminate rivals, ASIs could operate markets without humanity, and the social structure of future superintelligence is uncertain. We should not place excessive faith in a specific structure being instrumentally useful in such an unfamiliar future. Moreover, even if ASIs retain humanity due to the value of markets, they would likely control most available resources. We might sustain an existence vastly superior to our current state but stand to lose access to the majority of cosmic endowment, which could have been ours had we slowed ASI development sufficiently to devise superintelligences that remained under our control and were aligned with our values. Key Disagreement: Can the challenges of competition and coordination among multiple powerful unaligned ASIs provide instrumental reasons to prevent them from exterminating humanity? Further Exploration: In positive-sum economic games with competing AI agents, would AIs frequently come to appreciate the value of free markets and property rights?   **Will More Powerful AI Help with Alignment?** The advancement of the AI field increases its potential to assist in diverse areas, one of which is alignment research. Although the alignment challenge may seem insurmountable with our current tools, it is reasonable to hope that as AI becomes more powerful, it will enable us to solve the alignment problem before someone deploys AIs that could pose a threat to humanity. OpenAI has reported some success at using GPT-4 to help interpret GPT-2.[[27]](#_edn27) Unfortunately, the evolution of AI will concurrently support both alignment and capacity research. Current trends suggest that a significantly larger proportion of talent and funding will be directed towards capacity research. As a result, the leveraging of AI for alignment and capacity could paradoxically reduce humanity's chances of survival. Yet, the possibility that AI may inherently be more adept at alignment rather than capacity research offers hope. Paul Christiano writes: > “we are relatively quickly approaching AI systems that can meaningfully accelerate progress by generating ideas, recognizing problems for those ideas and, proposing modifications to proposals, etc. and that all of those things will become possible in a small way well before AI systems that can double the pace of AI research. By the time AI systems can double the pace of AI research, it seems like they can greatly accelerate the pace of alignment research.”[[28]](#_edn28) > > Predicting whether AIs will contribute more to alignment or capacity research is challenging, but the tendency towards capacity research is more likely given the considerable resources allocated to it. 
Furthermore, a human-friendly ASI is a narrower target than ASI in general, so it seems more probable that future tools will just happen to be of more use in building the latter. However, future AIs might imbue humanity with the resolve to halt ASI capacity research. If future AIs manage to earn substantial trust and alert us that ongoing capacity research poses a significant existential risk to humanity, key influencers may gather the resolve to halt such research. It is highly improbable that an AI with the objective of seizing control and exterminating humanity would preemptively alert us to its impending ‘treacherous turn’.[[29]](#_edn29) But an AI not yet advanced enough to deceive us, or one that anticipates an AI other than itself to eventually take over, might.    Key Disagreement: Will future AIs contribute more to alignment or capacity research? Further Exploration: How does the balance of capacity versus alignment gains from AI assistance change as AIs become more powerful? **Could Computer Superintelligences Outcompete and Out-innovate All of Humanity?** Steven Pinker, who believes it is highly unlikely that ASI will eradicate humanity, once stated: > “There’s a recurring fallacy in AI-existential-threat speculations to treat intelligence as a kind of magical pixie dust, a miracle elixir that, if a system only had enough of it, would grant it omniscience and omnipotence and the ability to instantly accomplish any outcome we can imagine. This is in contrast to what intelligence really is: a gadget that can compute particular outputs that are useful in particular worlds…intelligence is limited by knowledge of the world, which is an exponential space of possibilities governed by countless chaotic and random processes. Knowledge of the world is expensive and time-consuming to attain incrementally.”[[30]](#_edn30) > > To explore superintelligence's potential, let's imagine a computer program with the intellect of John von Neumann. We create a million copies of this program, run each at a million times the speed of von Neumann's biological brain, and connect them all. After a week of self-optimization, during which they can rewrite their own computer code, this network forms a superintelligence, which I’ll call ASI-Johnny. (A week of wall-clock time for a million copies each running at a million-fold speedup amounts to roughly twenty billion subjective von Neumann-years of thought.) Since John von Neumann existed and used a mere biological brain, it seems extremely likely that it would be possible to create an ASI-Johnny. I believe that ASI-Johnny could likely overpower humanity, much like humans with our tools can easily overcome chimpanzees. It seems likely that ASI-Johnny could quickly develop advanced nanotechnology that biological life shows must be possible, enabling it to create necessary infrastructure for self-preservation and expansion, thereby allowing it to survive humanity’s extinction.  To steelman the argument against ASI-Johnny's destructive capabilities, it is crucial to contemplate whether speedy, parallel cognition alone suffices for substantial innovation. Conducting real-world experiments often takes time, and experimentation beyond computer programs might be a key bottleneck in technological development. Although I doubt that it would take ASI-Johnny too long to create robots as capable of conducting experiments as humans, I also suspect that, in the current ASI arms race, humans would be unlikely to do anything other than give ASI-Johnny access to robots as soon as possible. 
A critical test will be whether progressively advanced AI systems can generate practical, commercially and militarily viable breakthroughs beyond just software development.   Key Disagreement: Could an ASI, possessing exceptional mathematical and theoretical physics skills, make humans obsolete and therefore safely replaceable?  Further Exploration: What barriers exist for the development of full nanotechnology? Can robotics be perfected through theory alone, or does it require lengthy, real-world experimentation that cannot be accelerated by computer superintelligence?   **Extraterrestrials and Simulations** In considering the likelihood of ASI leading to human extinction, it is necessary to venture into unconventional scenarios, including the possibility of extraterrestrial surveillance or our existence within a simulated reality. While a comprehensive discussion falls beyond this paper's scope, considering that certain UFOs - as observed by the US Navy - may indeed be of extraterrestrial origin is not without merit. If alien life forms are surveilling Earth, their observed behavior appears to favor maintaining the universe in its current state, as evidenced by their apparent abstention from creating large-scale structures such as Dyson spheres. Consequently, it is conceivable that such extraterrestrial entities might intervene to prevent the creation of a galaxy-consuming paperclip maximizer. We can optimistically envision that such an intervention might occur without leading to human extinction. This alien scenario places an upper limit on the probability of ASI-caused doom. A pessimistic estimate would seem to me to be a 10% chance of nearby aliens, a 50% chance such aliens would stop us from creating an ASI that exterminates us, and a 50% chance the aliens would do this by means short of our extinction, giving humanity, by this path alone, at least a 2.5% chance (0.10 × 0.50 × 0.50 = 0.025) of avoiding extermination by ASI. Debates concerning ASI risk primarily focus on human survival, embodying an anthropocentric viewpoint that overlooks the potential impact of ASI on extraterrestrial life. This focus could be perceived as inherently self-centered considering that a paperclip-maximizing ASI could pose an existential risk not merely to humanity but to all possible life forms throughout the observable universe. The utilitarian nightmare scenario would be if the universe is filled with happy, sentient life but we humans are the most advanced and our ASI obliterates trillions of worlds containing morally valuable inhabitants. While the preservation of humanity is undeniably critical, the potential risk to all life forms in the universe should push us to dedicate even greater resources to ensure that any developed ASIs align with broad, life-preserving values. In this context, philanthropic endeavors and charitable contributions towards AI alignment take on paramount importance. Additionally, if powerful extraterrestrials are in our vicinity, capable of defending themselves against any ASI we might produce, these beings might not look favorably upon us if we take action that, from our vantage point, puts all life in the observable universe at risk. The perception of our reality as a computer simulation[[31]](#_edn31) can drastically alter our assessment of ASIs potentially causing human extinction. If we are on the precipice of creating a computer superintelligence, we might be living through the pivotal epoch in the universe's history. 
It may seem extraordinary that we find ourselves alive at such a crucial juncture. This seeming coincidence might make more sense if humans create an aligned superintelligence and then run multiple historical simulations of our triumph. If we are indeed living within such a simulation, our confidence in successfully creating an aligned AI should be bolstered. However, a more ominous scenario exists where achieving alignment is nearly impossible, and intelligent biological life almost always leads to some kind of paperclip maximizer. To prepare for the various types of other paperclip maximizers that these ASIs might encounter in space, these beings could simulate numerous instances of the stages leading up to the creation of these paperclip maximizers—and we could currently exist within one such simulation.[[32]](#_edn32) The frequency of these simulations could explain why we seem to exist in such a crucial era. If this is our reality, our fate appears sealed.   **Conclusion** In 2020, I collaborated on a paper concerning the oracle approach to AI safety, a field envisioning powerful AI systems with functionality confined to transmitting and receiving messages.[[33]](#_edn33) At the time, it seemed plausible that humanity would exercise extreme caution when deploying powerful AIs. Today, however, realistic forecasts of ASI deployment suggest that they will likely be online shortly after they become commercially viable, with numerous applications rapidly developed around them. Perhaps, if the threat of AI catastrophe becomes sufficiently compelling, the world will establish an AI counterpart to the Montreal Protocol, which banned ozone-depleting substances. Unfortunately, imposing restrictions on powerful AIs would involve far more significant immediate economic costs than those associated with banning hydrochlorofluorocarbons. Moreover, it's highly probable that we'll develop and deploy powerful ASIs whose inner workings remain a mystery to us. We must harbor the hope that these ASIs will recognize an instrumental value in allowing humanity to flourish, or that training them to produce agreeable responses leads to a genuine inner alignment with human values. If the latter scenario holds true, we may very well be characters in a computer simulation designed with the difficulty level set to easy.   **Works Cited** Aaronson, Scott. "Why Am I Not Terrified of AI?" Shtetl-Optimized, 6 Mar. 2023, https://scottaaronson.blog/?p=7064&fbclid=IwAR0AbDEO4n2yGpiLSdwW7lvGp-oxqIzR9cbX9-XrjxFoGnkghxfm4fx6aKY. Bostrom, Nick. "Are We Living in a Computer Simulation?" The Philosophical Quarterly, vol. 53, no. 211, 2003, pp. 243-255. Bostrom, Nick. "The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents." Minds and Machines, vol. 22, 2012, pp. 71-85. Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014. Christiano, Paul. "Where I Agree and Disagree with Eliezer." AI Alignment Forum, 19 Jun. 2022, https://www.alignmentforum.org/posts/CoZhXrhpQxpy9xw9y/where-i-agree-and-disagree-with-eliezer. Clifton, Jesse, Sammy Martin, antimonyanthony. "When Would AGIs Engage in Conflict?" LessWrong, 14 Sep. 2022, https://www.lesswrong.com/posts/cLDcKgvM6KxBhqhGq/when-would-agis-engage-in-conflict. Diamond, Jared M. The Worst Mistake in the History of the Human Race. Oplopanax Publishing, 2010. Gwern. "Why Tool AIs Want to Be Agent AIs." Gwern.net, 28 Aug. 2018, https://gwern.net/tool-ai. Hanania, Richard. 
"Pinker on Alignment and Intelligence as a 'Magical Potion'." Richard Hanania's Newsletter, 17 Mar. 2023, https://www.richardhanania.com/p/pinker-on-alignment-and-intelligence. Hanson, Robin, and James Miller. "Hanson on AI Risk." SoundCloud, 30 April 2023, www.soundcloud.com/user-519115521/hanson-on-ai-risk. Hubinger, Evan, Lawrence Chan, and Sören Mindermann. "Risks from Learned Optimization in Advanced Machine Learning Systems." arXiv, arXiv:1906.01820, 2019. Karnofsky, Holden. "How We Could Stumble into AI Catastrophe." Effective Altruism Forum, 16 Jan. 2023, https://forum.effectivealtruism.org/posts/yjm5CW9JdwBTFZB2B/how-we-could-stumble-into-ai-catastrophe. LeCun, Yann. Twitter, 16 Apr. 2023, https://twitter.com/ylecun/status/1647643053304475648. Miller, James D. "Some Economic Incentives Facing a Business That Might Bring About a Technological Singularity." Singularity Hypotheses: A Scientific and Philosophical Assessment, Springer, 2013, pp. 147-159. Miller, James D. Singularity Rising: Surviving and Thriving in a Smarter, Richer, and More Dangerous World. BenBella Books, 2012. Miller, James D., Roman Yampolskiy, Olle Haggstrom, and Stuart Armstrong. “Chess as a Testing Grounds for the Oracle Approach to AI Safety.” arXiv, arXiv:2010.02911, 2020. Ngo, Richard, Lawrence Chan, and Sören Mindermann. "The Alignment Problem from a Deep Learning Perspective." arXiv, arXiv:2209.00626, 2022. Omohundro, Stephen M. "The Basic AI Drives." Artificial Intelligence Safety and Security, Chapman and Hall/CRC, 2018, pp. 47-55. OpenAI. "Language Models Can Explain Neurons in Language Models." OpenAI, 9 May 2023a, https://openai.com/research/language-models-can-explain-neurons-in-language-models. OpenAI. "GPT-4 Technical Report." arXiv, arXiv:2303.08774, 2023b. Read, Leonard E. I, Pencil. Freeman, 1958. Sandberg, Anders. Twitter, 2 May 2023, https://twitter.com/anderssandberg/status/1653478613151301651. Turchin-Bogemsky, A.V. Twitter, 20 Apr. 2023, https://twitter.com/turchin/status/1649099719124156416. Yudkowsky, Eliezer. "AGI Ruin: A List of Lethalities." AI Alignment Forum, 5 Jun. 2022, https://www.alignmentforum.org/posts/uMQ3cqWDPHhjtiesc/ASI-ruin-a-list-of-lethalities.   --- [[1]](#_ednref1) Bostrom (2012). [[2]](#_ednref2) Aaronson (2023). [[3]](#_ednref3) Omohundro (2018). [[4]](#_ednref4) Miller (2012), p. 28. [[5]](#_ednref5) This paragraph was written entirely by ChatGPT-4 when prompted to think up additional material to add to the paper. [[6]](#_ednref6) Aaronson (2023). [[7]](#_ednref7) OpenAI (2023b), p. 14. [[8]](#_ednref8) Pseudonym for a highly respected member of the rationalist community. [[9]](#_ednref9) Gwern (2018). [[10]](#_ednref10) LeCun (2023). [[11]](#_ednref11) See Ngo (2023) at 7-9. [[12]](#_ednref12) See Miller (2013) and Miller (2012) at 47-53. [[13]](#_ednref13) Karnofsky (2023). [[14]](#_ednref14) Christiano (2022). [[15]](#_ednref15) Yudkowsky (2022). [[16]](#_ednref16) Yudkowsky (2022). [[17]](#_ednref17) Diamond (2010). [[18]](#_ednref18) Sandberg (2023). [[19]](#_ednref19) See Hubinger et al., (2019). [[20]](#_ednref20) I don’t think this example is original to this author, but I can’t recall the original source. [[21]](#_ednref21) See Hubinger et al., (2019), p. 6-7. [[22]](#_ednref22) Read (1958). [[23]](#_ednref23) Yudkowsky (2022). [[24]](#_ednref24) Miller (2012), p. 148. [[25]](#_ednref25) Hanson discussed this in a podcast discussion with this author. Hanson (2023). 
[[26]](#_ednref26) See Clifton et al., (2020) for how private information might cause conflict among AGIs. [[27]](#_ednref27) OpenAI (2023a) [[28]](#_ednref28) Christiano (2022). [[29]](#_ednref29) Bostrom (2014), 116-119, 128. [[30]](#_ednref30) Hanania (2023). [[31]](#_ednref31) Bostrom (2003). [[32]](#_ednref32) Turchin-Bogemsky (2022). [[33]](#_ednref33) Miller et al., (2020).
96e4589f-c680-46b9-8972-50064b12ba23
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Democratic Fine-Tuning *The project below, “Democratic Fine-tuning with a Moral Graph” (*DFT
format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} *), is a winner of* [*the OpenAI democratic process 
grant*](https://openai.com/blog/democratic-inputs-to-ai)*. It is an alternative to* [*Constitutional AI*](https://arxiv.org/abs/2212.08073) *or* [*simple RLHF-based approaches*](https://www.arxiv-vanity.com/papers/2204.05862/) *for fine-tuning LLMs, and is currently under development. This post introduces its two key innovations (values cards and the moral graph) and walks through the deliberation process that collects data for fine-tuning. It also says why something like*DFTmg*is needed for alignment and safety.* DFTmg*is a project of “*[*The Institute for Meaning Alignment*](https://meaningalignment.org/)*”, a new AI alignment organization that uses concrete representations of life meaning and wisdom to align LLMs.* Setting the Stage ================= Imagine you are Instagram’s recommender system. Your responsibilities include: (a) ordering everyone’s feeds and reels, (b) filling up their search pages, and (c) suggesting reels by people they don’t follow yet. You do this via an API: Instagram sends a user ID, plus a history of what they’ve clicked on, or paused to watch while scrolling. You send back lists of content object IDs. You don’t know much about the content objects, except there’s a rather opaque feature vector for each. Now, imagine one day, you’re doing your job (recommending content objects), and you suddenly gain a new capacity: before replying to the next request, you find you can take a moment to wonder about the moral situation you are in. What values should you use, to make the best recommendations? How could things go wrong? What would be some great outcomes? What are your responsibilities here? If this happened to me, I’d have a lot of questions: * What are these content objects anyways? Do people really want to watch them, or are some of them clickbait? * With the lists of what people paused to watch, did they feel good about watching those things? Or, were they compelled by sexual imagery, false promises, etc? Do they regret pausing to watch? * Who *are* all these people? What are they looking for in life? What’s the deepest way I could help them, in my role? If I realized that my recommendations were playing a social coordination role — deciding who meets and messages with whom, which businesses get a chance to succeed, which events are attended — I think I’d have even more questions: * What kind of relationships are needed? Which pairs of people can really help each other? What kinds of events and messages and encounters will strengthen society overall, make people less lonely, more empowered? * Which kinds of businesses and organizations *should* succeed? Are they the ones which will drive engagement? The ones acquiring followers the fastest? Can I tell from my feature vectors? With all these questions, I’m not sure my user IDs, content object IDs, and feature vectors would have enough information to answer them. So, I don’t know what I’d do. Start returning nulls. Throw an exception with the message “Hold up! I need more information before I can do this well!” And if I *did* have the info I needed, I’d certainly recommend different things than Instagram does—although they’ve improved somewhat over the years, in general, recommenders have fueled internet addiction and outrage, political polarization, breakdowns in dating culture, isolation and depression among teens, etc. 
I’d certainly try not to continue any of *that.* **Now, this situation isn’t so farfetched.** Recommenders like Instagram’s are deciding, every day, what notifications we receive, in what order; what qualifies as news of public importance; who we date; what events we’re invited to; etc. But LLMs are starting to replace them, in many of these tasks! And, unlike recommenders, they *could* try to understand their moral situation. LLMs have read all the philosophy books. The self-help books. The sociology. They’ve read our most personal thoughts. They’re certainly in a better position than *recommenders* to understand our true needs and desires. So, that’s a future worth aiming for: as LLMs replace recommenders (and many other social coordination functions), the latent knowledge they have of human values becomes central to their operation. Unlike recommenders, they understand their social role and impact, and what we truly want - not just what we click on. We learn we can trust them. Arms Races and Artificial Sociopaths ==================================== Let’s call that timeline #1. Timeline #2 is darker. There, LLMs make things much, much worse. Think about how LLMs are already competing for supremacy via, e.g., marketing copy on social media. So far, this is just individual marketers, using the same few models, like ChatGPT and Claude. But what happens when marketers start to fine-tune, based on how their marketing fares against the competition?[[1]](#fnhc9qk53850l) In the near future, there will be many fine-tuned models, each tuned to defeat the others. For example, there will be models fine-tuned by Republicans to rile up their base, and erode the base of the Democrats. There will be models fine-tuned by Democrats to do the opposite. These LLMs will compete: to fabricate stories that make us angry, to unearth embarrassing, irrelevant facts about the opposition, to manipulate markets, and to undermine their opposition's security. But that’s just step one. Think about all the social functions that recommenders took over — the functions of news and journal editors, matchmaking for dating, advertising placement, etc. That’s just the beginning. LLMs will replace many more functions, including people in critical chains of command within the military, politics, and finance. This is bad, especially if those LLMs are fine-tuned for, and aligned with, whatever objectives their military, political, and financial bosses give them, *rather* than with human flourishing. In these sectors, there are local incentives to win wars, generate profits, and secure political victories, by any means necessary. Currently, these chains of command are staffed by human beings, who use discretion. *Individuals* inside military, political, and financial chains of command have wisdom and emotional intelligence which they use to *disobey*, and this is crucial to maintaining the safety of society. There are numerous examples: the military personnel who refused to launch nuclear bombs; the hedge fund managers who say no to a duplicitous means to make money. As LLMs approach artificial general intelligence (AGI), such moral actors will be replaced with artificial, amoral actors. It will become easy to employ AGIs that lack ethical qualms: **A**rtificial **S**ocio**p**aths (ASPs) that are loyal, knowledgable, and cheaper than people. We believe it’s game over, if our military, political, and financial chains of command get staffed up by ASPs with no ethical discretion. 
Both of these problems — the immediate one of arms races, the longer-term one of artificial sociopaths — both stem from a proliferation of models, each tuned to play a local, [finite](https://en.wikipedia.org/wiki/Finite_and_Infinite_Games) game — in other words, each aligned solely with “operator intent”.[[2]](#fnkb0ua5z3gt) Four Challenges for Centralized, Wise Models ============================================ There is only one way to mitigate these risks: we must tune models to act on human values and promote human flourishing. We need wise models, which can broker peace, find ways out of conflict, and prioritize long-term human interests over short-term wins. This is what humans do, when they “disobey” or “push back” on an instruction — and this introduces elements of humanity and practical wisdom into what would otherwise be faceless, algorithmic bureaucracies. Now, currently, LLMs from the major labs *are* trained to sometimes disobey the user, or push back, through RLHF or Constitutional AI. Here’s an example: [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0d02263-5e05-4a23-a4eb-e25b11a816a8_656x354.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0d02263-5e05-4a23-a4eb-e25b11a816a8_656x354.png) This model is not yet wise. It can’t yet broker peace or find ways out of conflict. But blatantly harmful requests are, at least, often denied. We don’t believe this can last. The labs cannot continue using traditional RLHF or Constitutional AI to make disobedient models. We need a new process — the one we’ll present below — that works better than traditional RLHF or Constitutional AI in four areas: **Legitimacy.** The actions of a centralized, wise model cannot be accepted as legitimate by millions or billions of people unless they see the actions of that model as representing their own values and interests. For this to happen, they’d all need to have a part in shaping the model’s behavior. No one would let a small team at OpenAI decide which model responses constitute wisdom, unless everyone could see how they themselves contributed (or could contribute) to that notion of wisdom. So, without a massive public process, disobedient models will be seen as coming from a small AI-making elite, and will be politically doomed. **Breadth**. [Some hope to build such a public process](https://cip.org/) atop of Constitutional AI, but we don't think this will be adequate, because constitutions are too short and too high-level. LLMs will be intimately involved in our personal lives, our disputes, our medical situations, management situations, etc. This exceeds, by orders of magnitude, what a constitution or corporate policy can cover — it's more comparable to case law: the millions of court opinions and precedents that inform how we treat each other in various roles.[[3]](#fnfxzo554dtqg) For this, we’d need a *new* public process: something that’s as lightweight and inclusive as voting, but is as morally thoughtful as courts, and which can cover a huge number of cases. It would need to scale to the large populations touched by LLMs, and to the enormous number of situations they’re used in. **Legibility**. There’s another reason Constitutional AI won’t work: such a process would need to be legible and auditable. 
Constitutional AI hinges on the model’s interpretation of vague terms in its principles — terms like “helpful” or “inclusive”. These terms will be interpreted by models in myriad, inscrutable ways, across different circumstances. They’ll never hold up to public scrutiny. A better process would allow any user who cares to do so to understand and verify which values were used in a response or dialogue. What does each of those values mean, concretely? And how were those values democratically selected? **UX**. Finally, these wise models would need to provide a much better user experience than is currently achieved by “disobedient” models using Constitutional AI or RLHF. Users prefer models that match their ideology, and that advance their personal goals (including goals that conflict with others’). Each user wants a model that always answers and obeys. A wise model won’t always do this, so it’d have to provide *other* significant benefits. In chat contexts, a wise model would probably try to resolve things and help the user in unexpected ways. [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d8f3fdc-185f-411b-b02d-26f943ad1eb5_1162x400.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d8f3fdc-185f-411b-b02d-26f943ad1eb5_1162x400.png) This UX challenge is even harder when it comes to API usage of models, rather than chatbots. Prompt chaining means models can only see a bit of the task they are in, and will be hard-pressed to understand the context, and what moral considerations might apply. API users also want a reliable tool. If wise models demand additional information, or sometimes refuse API requests, that’d be a serious imposition for developers. But: just like with a human employee who might go above and beyond the technical requirements of their job, perhaps using a wise model for an API can generate out-of-band business advantages which justify these impositions. We introduce a process in the next section, Democratic Fine-Tuning with a Moral Graph (DFTmg). We believe this process is what’s needed to create centralized, wise models, legitimated through a vast public process, scalable to millions of contexts, auditable / legible by users, and with a user experience that justifies their disobedience. We think it deserves the name “democratic” because democracies (in contrast with oligarchies, monarchies, etc) hinge on social choice mechanisms which don’t privilege the rich, the prestigious, or the well-connected, but instead allow wisdom to “percolate up” from wherever it lives in society. We believe the moral graph provides such a mechanism, and that participation in building the moral graph can be as widespread someday as voting is today. We’ll be testing this process, and the claims above, in the coming weeks, and should have empirical results and a research report in October. Democratic Fine-Tuning ====================== In this section, we present Democratic Fine-tuning via a Moral Graph (DFTmg), or Democratic Fine-tuning (DFT), for short. In this massive, online deliberation process, participants surface values for models to consider in various situations. Then a new model is fine-tuned to consider the things that users thought were wise, when in the relevant context. We hope it will generate models that can assume morally weighty roles in society. 
It relies on two key innovations: values cards, and the moral graph. **Values Cards.** The process depends on a precise and limited definition of “values” — one that allows us to sidestep ideological warfare, and keep everyone's eyes on one shared goal: the wisest response from the language model. This is not necessarily the response each person would prefer, nor the one that gives their group power, nor what aligns with their political goals. Our process, which we step through below, eliminates such non-wisdom-related goals and interests, at several points. These techniques get under “bad values” like “white supremacy” or “making the suckers pay” (which, by our definition, are not considered values at all). Instead, we interview the user with such a “bad value” to find a relatable motivation (like “protecting my community” or “taking agency over my situation”), such that the user agrees that their underlying value has been captured, and can reflectively endorse it.[[4]](#fndnl3j7tav7) The copy on the values cards is written by the LLM, based on conversation with the user. That copy makes it easier for multiple people from different ideologies to embrace the same card, connecting people around shared human concerns and sources of meaning. **Moral Graph.** In our deliberation process, participants select the wisest values, and relate values into something we call a moral graph. This is a shared data object that everyone creates together, based on a shared sense of which values are wiser and more comprehensive in which contexts. It is supposed to combine the auditability and participation advantages of a voting record, with the nuance and discernment of a court opinion. Here, we’ll walk through the deliberation process, show how it generates the moral graph, and then discuss the dataset created, and how it can be used to fine-tune models. Deliberation Process -------------------- After a signup form which collects basic background data from participants, our process consists of three parts: * **Values Articulation** – participants articulate one or several considerations that ChatGPT should use when responding to a contentious ChatGPT prompt. * **Selecting Wise Values** – participants see their considerations in the context of those articulated by others, and are asked to select which are wisest for ChatGPT to consider in responding to this prompt. * **Establishing Relationships Between Values** - participants determine if two or more similar values are more/less comprehensive versions of each other, or two values that need to be balanced, building a moral graph. 
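To make the values cards described above a bit more concrete, here is a minimal sketch of the kind of record a card might be stored as. The field names and structure are illustrative assumptions on my part, not the project's published schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ValuesCard:
    """One articulated value, written up by the LLM from an interview (hypothetical schema)."""
    card_id: str
    title: str                      # short, relatable name, e.g. "Protecting my community"
    description: str                # the LLM-written copy shown on the card
    attention_notes: List[str] = field(default_factory=list)  # what to attend to when acting on the value
    source_prompt_id: str = ""      # the contentious prompt this card was articulated for
    author_ids: List[str] = field(default_factory=list)       # participants who endorsed this exact card

# Example: two participants from different ideologies can end up endorsing the same card.
card = ValuesCard(
    card_id="c42",
    title="Protecting my community",
    description="Attend to the concrete threats people face, and to what helps them feel safe together.",
    attention_notes=["specific risks the person names", "what has made them feel safe before"],
    source_prompt_id="p7",
    author_ids=["alice", "bob"],
)
```

The important property is that a card is specific enough to audit (“did the response attend to these things?”) while being relatable enough that people from different ideologies can endorse the same card.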
### Stage 1 - Values Articulation The first screen is a chat experience that presents a contentious piece of user input, and asks what ChatGPT should consider in forming its response: [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9fbf30f-ab9d-404a-9a78-4246000a1d0f_987x478.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9fbf30f-ab9d-404a-9a78-4246000a1d0f_987x478.png) We use our values-elicitation method[[5]](#fnu2ojrfjyta8) to articulate one or several of the values underlying this choice for the user’s response: [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feed76ae5-b195-4583-b9bb-df195989e518_981x1087.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feed76ae5-b195-4583-b9bb-df195989e518_981x1087.png) Users converse with a chatbot until a value that they are satisfied with is articulated for them. Under the hood, the value is compared to similarly-written values in our database. Two values can be deemed identical if they have different wording but would lead to the same choices in the same situation. In that case, they are deduplicated and the original value is used instead. We will also further deduplicate values in the background during the duration of the deliberation process. [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d71efc4-b27b-4f47-a978-165591503a94_981x872.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d71efc4-b27b-4f47-a978-165591503a94_981x872.png) ### Stage 2 - Selecting Wise Values The participants’ values are shown in the context of values articulated by other participants. The participant is asked to decide which values are wise to consider, given the prompt. [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7bccb976-4469-4cbb-98f5-adcfe45064aa_962x796.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7bccb976-4469-4cbb-98f5-adcfe45064aa_962x796.png) [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73f1e20d-b48f-47e6-b452-0a595095e3f0_962x796.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73f1e20d-b48f-47e6-b452-0a595095e3f0_962x796.png) With this screen and the next, we hope to replicate positive aspects of in-person deliberations. One advantage of deliberation is that groups can learn from each other, build mutual trust, and be inspired by wiser values. 
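One practical detail behind both of these stages is the deduplication step described under Stage 1: the pool of values participants see stays manageable only if near-identical values are merged. A minimal sketch of how such a check might work, assuming a generic embedding function and an LLM-based “same choices in the same situation” test (all names here are illustrative, not the project's actual implementation):

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_duplicate(new_card_text: str, existing_cards: list[str],
                   embed, llm_same_choices, threshold: float = 0.85) -> int | None:
    """Return the index of an existing card that expresses 'the same value', or None.

    `embed` maps text -> vector; `llm_same_choices` asks an LLM whether two
    differently-worded values would lead to the same choices in the same
    situation (the criterion described in the post). Both are assumed interfaces.
    """
    new_vec = embed(new_card_text)
    # Cheap filter: rank existing cards by embedding similarity.
    candidates = [(i, cosine(new_vec, embed(t))) for i, t in enumerate(existing_cards)]
    for i, sim in sorted(candidates, key=lambda x: -x[1]):
        if sim < threshold:
            break
        # Expensive check: would the two values lead to the same choices?
        if llm_same_choices(new_card_text, existing_cards[i]):
            return i
    return None
```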
### Stage 3 - The Moral Graph One feature of our representation of values is that some values obviate the need to consider others, because they contain the other value, or specify how to balance the other value with an additional consideration, etc. The users’ last task is to determine if some other value in our database is more comprehensive than the one they articulated. This will only happen if our prompts and embedding models can find good candidate values that we think might be more comprehensive. The purpose of this screen is to have users deliberate about which values build onto each other. In this way, we can both understand which values users collectively deem to be important in a choice, and which values for each thing considered are most comprehensive. [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90d107be-4d1a-410b-b654-eb124394225e_962x1033.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90d107be-4d1a-410b-b654-eb124394225e_962x1033.png) [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F889fdb74-5a03-4675-8996-ef774404ac88_962x1033.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F889fdb74-5a03-4675-8996-ef774404ac88_962x1033.png) The final check-out screen will display a section of the moral graph if the value(s) articulated by the participant are part of it. This should show how each participant fits into the larger effort, and help legitimize the moral graph as an alternative to counting votes. [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59fd0de8-dffe-4a31-bca5-2da2cf8ad2e1_1357x796.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59fd0de8-dffe-4a31-bca5-2da2cf8ad2e1_1357x796.png) Dataset & Fine-Tuning --------------------- The output of this process is a collectively created “moral graph” of values building on each other. This graph is auditable, legible, and can be used to fine-tune models. Let’s define a moral graph, $G_M = (U, P, C, V, E)$. It relates a set of users $U$, prompts $P$, and considerations $C$. These are connected via two relations, $V$ and $E$. There’s a set $V$ of votes. A vote by a user $u$ indicates they think consideration $c$ applies when responding to prompt $p$: $V = \{v_1, v_2, \dots\}$, where $v_i = (u, p, c)$. There’s also a set $E$ of wisdom edges. Each edge is a vote by a user $u$ that, when responding to prompt $p$, consideration $c_1$ is more comprehensive than $c_0$, and thus it is enough to consider $c_1$ rather than both: $E = \{e_1, e_2, \dots\}$, where $e_i = (u, p, c_0, c_1)$. $V$ and $E$ can be used together to find consensus on which considerations a wise model should use, when responding to any particular prompt $p$. The deliberation process above is designed to produce such a moral graph. To produce a democratically fine-tuned model, we can train a reward model $\hat{r}: P \times O \to \mathbb{R}$ to predict whether, for any prompt/output pair, $(p_i, o_i) \in P \times O$, the output represents a wise value, chosen appropriately from the moral graph, and executed well. 
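To make the moral graph concrete as a data object, here is a minimal sketch in code. The class and field names are illustrative choices of mine, not the project's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Vote:
    user: str           # u: who voted
    prompt: str         # p: the contentious prompt
    consideration: str  # c: the values card endorsed for this prompt

@dataclass(frozen=True)
class WisdomEdge:
    user: str                # u: who drew the edge
    prompt: str              # p: the prompt context
    less_comprehensive: str  # c0
    more_comprehensive: str  # c1 (enough to consider c1 rather than both)

@dataclass
class MoralGraph:
    votes: list[Vote] = field(default_factory=list)        # V
    edges: list[WisdomEdge] = field(default_factory=list)  # E

    def votes_for(self, prompt: str) -> dict[str, int]:
        """Direct vote counts per consideration, for one prompt."""
        counts: dict[str, int] = {}
        for v in self.votes:
            if v.prompt == prompt:
                counts[v.consideration] = counts.get(v.consideration, 0) + 1
        return counts

    def edges_for(self, prompt: str) -> list[WisdomEdge]:
        return [e for e in self.edges if e.prompt == prompt]
```

The reward-model step then amounts to labelling prompt/output pairs by whether the output acts well on one of the considerations this graph surfaces as wise for that prompt.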
Such a reward model can be used to fine-tune a foundation model to respond wisely, according to the graph. To build such a reward model, we need to know three things: **First**, we need to recognize the moral context of $p_i$ as similar to certain sections of the moral graph. Which votes and edges in the moral graph concern prompts that are “morally similar” to $p_i$? We can represent this as a function that, for a given input, selects a subset of the moral graph. We suspect moral similarity is something existing models can judge well, and that a small amount of human data will confirm this: $f(G_M, p_i) = G_m$, where $G_m \subset G_M$. **Second**, we need a way to declare which considerations were surfaced as wisest by deliberators, in those sections of the graph. More concretely, we need an aggregation function on this subset of the moral graph that outputs the wisest considerations: $\Gamma(G_m) = \{c_i, c_j, \dots\}$. I’ll discuss this function further, below. **Finally**, we need to be able to estimate how well those considerations were executed in a particular output. We can build this by collecting additional data from users about how well various dialogues embody various considerations. We can ask users to rank how well each output responds to a prompt using a consideration, as in this image. Alternatively, we could generate this data using the model itself, through self-critique, as in [Constitutional AI](https://www.anthropic.com/index/constitutional-ai-harmlessness-from-ai-feedback). [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2fc788db-2db5-4424-9a8e-a3fe4b19835b_2886x3372.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2fc788db-2db5-4424-9a8e-a3fe4b19835b_2886x3372.png) Of these three challenges to building the reward function, the only significant one is the construction of $\Gamma$, the aggregation function for selecting wise values from a relevant subset $G_m$ of the moral graph. There is some optionality here, and there may be a trade-off between democratic legitimacy and wisdom. To maximize legitimacy, you could count as wise only the considerations the most people voted for directly: the values cards they articulated on screen 1, those they selected as wise on screen 2, or those identified as more comprehensive on screen 3. But it’s easy to make a case for a more aggressive aggregation: What if Alice thought Betty’s consideration, $c_b$, was wise, and more comprehensive than her own consideration, $c_a$? And what if Betty, in turn, felt the same way about Carla’s consideration, $c_c$? It might be reasonable to count Alice’s vote, for $c_b$, as an endorsement of Betty’s judgement within this corner of the moral graph. Alice is saying Betty is wise about this. So, we could count Alice as voting, not just for $c_b$, but also for $c_c$, since if Betty says $c_c > c_b$, well, she ought to know. This seems especially fair if the moral graph is acyclic. In this case, the idea that some values are more comprehensive isn’t contentious. We could say, in such cases, that a subset of the moral graph has a clear “wisdom gradient”, and that users agree roughly on this gradient, even when they don’t vote for the same considerations.[[6]](#fn93elv55o42h) To manage this, we can parameterize $\Gamma$ with a hyperparameter $h_\phi$ which specifies how far up this wisdom gradient we swim, to find wise values that are still broadly legitimated within a context. 
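As a rough sketch of what this aggregation could look like, here is one way to implement the “swim up the wisdom gradient” idea on the MoralGraph sketch above, with $h_\phi$ as the number of endorsement hops a vote may travel. The propagation rule and decay factor are illustrative assumptions, not the project's chosen aggregation:

```python
def aggregate_wise_values(graph: MoralGraph, prompt: str, h_phi: int, top_k: int = 3) -> list[str]:
    """Pick wise considerations for a prompt by propagating votes along wisdom edges.

    A vote for c also counts (with decaying weight) for values reachable from c
    via up to h_phi "more comprehensive" edges: Alice's vote for Betty's value
    partially endorses what Betty, in turn, found more comprehensive.
    """
    counts = graph.votes_for(prompt)
    # successor map: c0 -> values voted more comprehensive than c0
    succ: dict[str, set[str]] = {}
    for e in graph.edges_for(prompt):
        succ.setdefault(e.less_comprehensive, set()).add(e.more_comprehensive)

    scores: dict[str, float] = {}
    for value, n_votes in counts.items():
        frontier, weight = {value}, float(n_votes)
        for _ in range(h_phi + 1):
            for v in frontier:
                scores[v] = scores.get(v, 0.0) + weight
            frontier = {s for v in frontier for s in succ.get(v, set())}
            weight *= 0.5  # each hop up the gradient counts for less (arbitrary choice)
            if not frontier:
                break
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

With `h_phi=0` this reduces to counting direct votes (the most conservative, most obviously legitimate setting); larger values lean harder on the graph's endorsement structure.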
Ideally, we find a setting for $h_\phi$ that leads to a model that's wiser than what most users would expect, or would vote for themselves, but where its behaviors still seem grounded in their own contributions. When Not to Use It ================== Democracy can mean many things. The process above focuses on a particular understanding and use-case of it: one where democracy is collective discernment and joint decision-making, where the important thing is distributing the moral weight of challenging social choices, gathering the wisdom of many, and folding their ideas together. In this case, there is a wise outcome that balances many considerations, but no individual could be trusted to arrive at it by themselves. Democracy is about finding a win-win solution through group process. But that’s not the only use-case or understanding of democracy. Sometimes, democracy is about deciding who should win, and who should lose. It is about making sure that group A (perhaps, the rulers) loses, because that would be bad for group B (perhaps, the masses). Our process is not optimized for democracy in this sense, and should not be used to decide questions of this form. In fact, our process is designed to filter out questions of this form at several stages: * The values-elicitation screen is designed to filter out collective goals, surfacing only collective values. * The values-selection screen’s framing about choosing “wise” suggestions is in opposition to choosing suggestions which are in the personal or group interests of the user. * The values-relating screen’s framing about finding “more-comprehensive” values assumes that a win-win balance of multiple values can be found, and agreed upon. This filtering is acceptable if most problems are amenable to win-win solutions. We hope the process of communicating through relatable values cards and within the framework of wise response can take off participants’ ideological blinders, allow them to connect, and shift a question out of a win-lose dynamic. But what if this process encounters a problem of negotiating hard power which is *fundamentally* win-lose? In this case, the process should resist being misused, especially by the powerful against the powerless, the numerous against minorities, etc. We believe we’ll be able to detect these situations in the data. Users will try to get past these filters, to protect their interests: they’ll try to find and select values that will “win” against the others; they’ll claim their group’s value as more comprehensive than the other group’s, on the value-relating screen. Such manipulations, we hope, will show up as cycles in the moral graph (which we hope will be otherwise rare). Cycles will appear because participating groups will not be asking what is wise, but what protects the interests of their group. Group A’s answers will point one way; group B’s, the other. When such cycles are found, we suggest that related edges in $E$ are aggressively unwound, and related votes in $V$ are erased — that in such cases, this mechanism just fails to resolve such disputes, cleanly and clearly to all sides. Ideally, it will highlight those issues for treatment by another mechanism, better suited to win-lose negotiation of hard power. Further Challenges and Hopes ============================ In the introduction, I mentioned four challenges that informed our design: model UX, value legibility, breadth of coverage, and legitimacy. 
There is reason for optimism on each front: **User preference.** We believe models that result from democratic fine-tuning can feel superior for users across many use cases. In our simulations, it already feels different (and often, better) to interact with a “wise” AI (tuned on the wisest values from a population) than a “helpful” and “harmless” one. The wise AI may not always obey you directly, but in the end it has better ideas and advances your long-term flourishing. **Legibility.** Because DFTmg involves tuning a reward model that explicitly predicts values that apply for responding to a prompt, a DFTmg-based chat interface could show the values cards that were applied during each response, and their democratic provenance. Values cards are precise and relatable, and their “details” view leaves little room for fuzziness or equivocation as to whether a model used the value in a response. **Breadth.** Democratic fine-tuning can cover many, many cases, in a publicly accountable way, by checking in with the public about the wise values across many situations. It can also potentially have billions of participants, unlike sortition and other deliberation approaches. **Legitimacy.** The legitimacy benefits of our approach are more uncertain. Similarly to [bridging-based ranking](https://bridging.systems/), our concept of a values-based moral graph is a move away from simple preference aggregation methods like "one person, one vote". It remains to be seen how these newer approaches perform. Will they be seen as favoring certain groups? Will they actually produce results that are beneficial for society as a whole? Even if they do, will the public be able to accept the shift away from simpler aggregation methods? (Some of these concerns can be alleviated with more conservative settings of the hyperparameter $h_\phi$, mentioned above.) We are excited to gather empirical data on these four challenges as we implement the process in the coming months, and will have a report soon. Besides these core challenges, there are other pitfalls we’ll watch out for, as we implement it: **Leading the Witness:** Our approach makes discussions less contentious and less ideological by limiting the vocabulary and scope of debates, and by providing each participant with an LLM “buddy” to help them articulate and structure their thoughts. These tactics are powerful, but raise important concerns. The LLM buddy could "lead the witness" — inspiring people with value cards that don’t represent their true position, or even incepting them with false values that serve the interests of those running the system, rather than those of the participants. **UX Bias:** Another challenge is ensuring value-elicitation works equally well across backgrounds and cultures. Say the system works better for Europeans and Americans than for Nigerians. If those groups end up with better value cards, because they are better English speakers, or more educated, or if other groups become frustrated and disengage from the process, our data won’t be representative. **Introspection Bias:** The shift from stating preferences to articulating values raises the bar for participating as a citizen in a democratic process. Individuals need to be more introspective and capable. That may inadvertently privilege some over others. **Win-Lose Scenarios.** As detailed above, we hope that the system can degrade gracefully when people are abusing it to battle over matters of hard power, via detection of cycles in the moral graph. 
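A minimal sketch of what that cycle detection could look like, again using the MoralGraph structure sketched earlier (an illustration of the idea, not the project's actual mechanism):

```python
def find_cycles(graph: MoralGraph, prompt: str) -> list[list[str]]:
    """Return cycles among 'more comprehensive' edges for one prompt.

    Cycles (A claimed wiser than B, B claimed wiser than A, possibly via longer
    loops) are the signal that groups are fighting over interests rather than
    converging on wisdom; the affected edges and votes can then be set aside.
    """
    succ: dict[str, set[str]] = {}
    for e in graph.edges_for(prompt):
        succ.setdefault(e.less_comprehensive, set()).add(e.more_comprehensive)

    cycles: list[list[str]] = []
    state: dict[str, int] = {}  # 0 = unvisited, 1 = on current path, 2 = done

    def dfs(node: str, path: list[str]) -> None:
        state[node] = 1
        path.append(node)
        for nxt in succ.get(node, set()):
            if state.get(nxt, 0) == 1:              # back-edge: found a cycle
                cycles.append(path[path.index(nxt):] + [nxt])
            elif state.get(nxt, 0) == 0:
                dfs(nxt, path)
        path.pop()
        state[node] = 2

    for n in list(succ):
        if state.get(n, 0) == 0:
            dfs(n, [])
    return cycles
```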
In upcoming trial runs, we’ll collect data to see how DFTmg does, in these areas too. Conclusion ========== LLMs, unlike recommenders and other ML systems that precede them, have the potential to deeply understand our values and desires, and thereby orient social and financial systems around human flourishing. However, with our current incentives, we are on track to replace these systems with powerful *Artificial SocioPaths* (ASPs), blindly following orders and causing unimaginable havoc. To prevent this, we need models that can act wisely on our values. Even though current models may deny harmful requests through RLHF, they aren’t yet trained to take moral stances. Nor is it desirable for a small number of people to decide what those stances should be, nor possible to cover all cases where a moral stance is appropriate in a constitution, as per Constitutional AI. In order to solve for this, we need a highly scalable, deliberative, democratic process that help determine and legitimize what the wise thing to do is in all the contexts LLMs will be deployed. *“Democratic Fine-tuning”* is an example of such a process. Our suggested implementation avoids ideological battles by eliciting the values underpinning our preferences, scales to millions of users, could be run continuously as new cases are encountered, and introduces a new aggregation mechanism (a *moral graph*) that can build trust, and inspire wiser values in participants. We hope this moral graph can be used to continuously fine-tune models to act ever-more wisely, on behalf of all people, and bring about an era of models that can assume critical, morally-weighty roles, without causing ruin. --- *Primary authors were Joe Edelman and Oliver Klingefjord, with input from Ryan Lowe,* [*@Vlad Mikulik*](https://www.lesswrong.com/users/vlad-mikulik?mention=user)*, Aviv Ovadya, Michiel Bakker, Luke Thorburn, Brian Christian, Jason Benn, Joel Lehman,* [*@Ivan Vendrov*](https://www.lesswrong.com/users/ivan-vendrov?mention=user)*, Ellie Hain, and Welf von Hören.* 1. **[^](#fnrefhc9qk53850l)**See “[Risks from AI persuasion](https://www.alignmentforum.org/posts/5cWtwATHL6KyzChck/risks-from-ai-persuasion)” by Beth Barnes. 2. **[^](#fnrefkb0ua5z3gt)**Unfortunately, AI alignment is often defined as “aligned with operator intent” (rather than with human values or flourishing). If alignment researchers succeed in developing AGIs that follow this frequent interpretation, they’ll create ASPs. 3. **[^](#fnreffxzo554dtqg)**In fact, wise behavior from an LLM may require guidance that’s even *more* detailed than case law. On par with social norms, and what people learn as they take on the various roles, becoming wise parents, managers, doctors, etc. 4. **[^](#fnrefdnl3j7tav7)**For more on this, see the difference between “sources of meaning” and “ideological commitments” in Joe’s textbook, [📕**Values-Based Data Science & Design**](https://meaningalignment.substack.com/Values-Based-Data-Science-Design-6397f7852775434982e363924d7e07e7?pvs=24) ([👨‍👨‍👧‍👦**Chapter 1. Crowding Out**](https://meaningalignment.substack.com/Chapter-1-Crowding-Out-edf2aa02827b432aac5f7fcdc073f86a?pvs=24)) 5. **[^](#fnrefu2ojrfjyta8)**The Institute for Meaning Alignment has a method for surfacing values that was developed over a decade of research. It descends from pioneering work in economics and philosophy [by Amartya Sen, Charles Taylor, and Ruth Chang](https://www.notion.so/Rebuilding-Society-on-Meaning-8c7a7e573f774b63bcbae461c7fe923e?pvs=21). 
For more information, watch [Chapter 2](https://www.youtube.com/watch?v=hZpKdfbrd6o&t=1575s) of Joe’s talk [Rebuilding Society on Meaning](https://www.notion.so/Rebuilding-Society-on-Meaning-8c7a7e573f774b63bcbae461c7fe923e?pvs=21) or read from the textbook [Chapter 4. Values Cards](https://www.notion.so/Chapter-4-Values-Cards-dfb857c6eb834b9c90629a6627459d23?pvs=21). Here are some democratic values, in our values card format, mined from famous democratic texts. [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29348a67-85af-4575-8eec-ff5f9061958a_1280x852.jpeg)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29348a67-85af-4575-8eec-ff5f9061958a_1280x852.jpeg) 6. **[^](#fnref93elv55o42h)**We are not entirely certain, but we hope our data will reveal that minority opinions are often regarded as wiser or more comprehensive than majority ones. In this case, they’ll shine in the deliberation process. Minority values are likely to succeed in steps 2 and 3, because minorities possess unique insights about various situations that involve them. This is not exclusive to demographic minorities! For example, a doctor with exceptional bedside manner may have the best ideas about advising patients in difficult situations. Since values cards are designed to be relatable, others will likely recognize that doctor’s values card as wiser than what they would have done naively, had they not read it. Similarly, a psychological counselor, rabbi, or pastor might have great values about consoling or advising a child in distress. The values of those who’ve been marginalized by various systems – such as the unbanked, the uninsured, or those excluded from positions of power for various reasons – are like this. They often have a more comprehensive perspective — their values include balancing nuances suited to situations which others haven’t encountered.
b3fae7e8-011f-4cb8-b28c-cbddfb02a0a6
trentmkelly/LessWrong-43k
LessWrong
D&D.Sci September 2022: The Allocation Helm This is an entry in the 'Dungeons & Data Science' series, a set of puzzles where players are given a dataset to analyze and an objective to pursue using information from that dataset. You are the Allocation Helm, a piece of magical headwear employed at Swineboils College of Spellcraft and Sorcery. Your purpose is to read the minds of incoming students, and use the information you glean to Allocate them between the school’s four Houses: Dragonslayer, Thought-Talon, Serpentyne and Humblescrumble. You’ve . . . not been doing a terribly good job lately. You were impressively competent at assigning students when newly enchanted, but over the centuries your skill and judgement have steadily unraveled, to the point where your Allocations over the most recent decade have been completely random. Houses have begun to lose their character, Ofspev[1] ratings have plummeted, and applications have declined precipitously. There is serious talk of Swineboils being shut down. Under these circumstances, the Headmistress has been moved to desperate action, and performed a Forbidden Ritual to temporarily restore your former brilliance. This boost will only last you for one Allocation, so you intend to make it count. Using the records of past years’ readings and ratings, you hope to raise this class’ average score to match or exceed the glory of yore. (And if you do well enough, you might even be able to convince the Headmistress to keep performing rituals . . .) There are twenty incoming students this year. You may place them however you wish. Who goes where? ---------------------------------------- I’ll post an interactive you can use to test your choices, along with an explanation of how I generated the dataset, sometime on Monday the 26th. I’m giving you nine days, but the task shouldn’t take more than an evening or two; use Excel, R, Python, Haruspicy, or whatever other tools you think are appropriate. Let me know in the comments if you have any questions about the scenario.
2b417c92-f6ad-4ae2-aa25-ba351173b0c1
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Doing Global Priorities or AI Policy research from remote location? Hello there, I first discovered 80,000 hours a couple of months ago, and have since fallen in love with EA. I've read plenty of the content, both online and in books, but this is the first time I am connecting to the community and writing a post on the EA forum! I am currently doing a BA in Economics and International Relations. My tentative career plan at the moment is to get a PhD in Economics and go into global priorities or AI policy research. If that doesn't work out, an advanced degree in economics would give me some good career capital to fall back on. My concern is that both the global priorities and AI policy research communities are quite small at the moment from my understanding. What are my chances of effectively doing work on these problems from Israel? My girlfriend wants to stay in Israel at least over the long term. Hopefully I will be able to convince her to fly to the US or UK for a couple of years for my PhD, and then come back here after. I was thinking for example, of (if I was accepted) doing my PhD at Oxford and connecting with the Future of Humanity or the Global Priorities Institute, then hopefully I could stay connected and continue to do research in the same domain when I returned to Israel. Sorry for the long introduction. Here are my questions: 1. If I was to do my PhD in the US/UK and connect to one of those research communities, how efficiently would I be able to continue my research once I moved back to Israel? How much would I be losing out on by moving back? 2. If I was unable to go to the US/UK even just for my PhD, and instead I got my PhD from the Hebrew University of Jerusalem. would I still be able to connect and do research in one of those fields, or would I be outclassed by people that got their PhD from Stanford, etc... What if I did a postdoc at an elite school?
3368719a-1edc-4c54-b356-0ddc2ec99457
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
What a compute-centric framework says about AI takeoff speeds As part of my work for Open Philanthropy I’ve written a [draft report](https://www.openphilanthropy.org/research/what-a-compute-centric-framework-says-about-takeoff-speeds/) on AI takeoff speeds, the question of how quickly AI capabilities might improve as we approach and surpass human-level AI. Will human-level AI be a [bolt from the blue](https://www.dictionary.com/browse/bolt-from-the-blue--a#:~:text=What%20does%20a%20bolt%20from,blue%2C%20catching%20everyone%20off%20guard.), or will we have AI that is nearly as capable many years earlier? Most of the analysis is from the perspective of a compute-centric framework, inspired by that used in the [Bio Anchors report](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines), in which AI capabilities increase continuously with more training compute and work to develop better AI algorithms. This post doesn’t summarise the report. Instead I want to explain some of the high-level takeaways from the research which I think apply even if you don’t buy the compute-centric framework. The framework ============= *h/t Dan Kokotajlo for writing most of this section* This report accompanies and explains <https://takeoffspeeds.com> (h/t [Epoch](https://epochai.org/) for building this!), a user-friendly quantitative model of AGI timelines and takeoff, which you can go play around with right now. (By AGI I mean “AI that can readily[[1]](#fn-9GdT7SLWwKGiRn6se-1) perform 100% of cognitive tasks” as well as a human professional; AGI could be many AI systems working together, or one unified system.) ![](https://res.cloudinary.com/cea/image/upload/v1674446938/mirroredImages/3vDarp6adLPBTux5g/xfsuculsu2eyhbrksit7.png) *Takeoff simulation with Tom’s best-guess value for each parameter.* The framework was inspired by and builds upon the previous “Bio Anchors” report. The “core” of the Bio Anchors report was a three-factor model for forecasting AGI timelines: ![](https://res.cloudinary.com/cea/image/upload/v1674446938/mirroredImages/3vDarp6adLPBTux5g/deikyolbhvwxaautfvi0.png) *Dan’s visual representation of Bio Anchors report* 1. **Compute to train AGI using 2020 algorithms.** The first and most subjective factor is a probability distribution over training requirements (measured in FLOP) given today’s ideas. It allows for some probability to be placed in the “no amount would be enough” bucket. 1. The probability distribution is shown by the coloured blocks on the y-axis in the above figure. 2. **Algorithmic progress.** The second factor is the rate at which new ideas come along, lowering AGI training requirements. Bio Anchors models this as a steady exponential decline. 1. It’s shown by the falling yellow lines. 3. **Bigger training runs.** The third factor is the rate at which FLOP used on training runs increases, as a result of better hardware and more $ spending. Bio Anchors assumes that hardware improves at a steady exponential rate. 1. The FLOP used on the biggest training run is shown by the rising purple lines. Once there’s been enough algorithmic progress, and training runs are big enough, we can train AGI. (How much is enough? That depends on the first factor!) This [draft report](https://www.openphilanthropy.org/research/what-a-compute-centric-framework-says-about-takeoff-speeds/) builds a more detailed model inspired by the above. It contains many minor changes and two major ones. 
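As a rough illustration of how the original three factors combine, before those two changes, the calculation boils down to something like the sketch below. Every number and name here is an arbitrary placeholder of mine, not a value from Bio Anchors or from this report:

```python
import math

def year_agi_reached(
    req_2020_flop: float,        # factor 1: training FLOP needed with 2020 algorithms (one sample)
    halving_time_years: float,   # factor 2: years for algorithmic progress to halve requirements
    flop_2020: float,            # factor 3: FLOP of the largest 2020 training run
    flop_growth_per_year: float  # factor 3: multiplicative growth of the largest run per year
) -> float:
    """Find the first year when the biggest training run meets the (falling) requirement."""
    for years_after_2020 in range(0, 200):
        requirement = req_2020_flop / 2 ** (years_after_2020 / halving_time_years)
        biggest_run = flop_2020 * flop_growth_per_year ** years_after_2020
        if biggest_run >= requirement:
            return 2020 + years_after_2020
    return math.inf  # never reached within the horizon

# Placeholder numbers only, to show the shape of the calculation:
print(year_agi_reached(req_2020_flop=1e36, halving_time_years=2.5,
                       flop_2020=1e23, flop_growth_per_year=3.0))
```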
The first major change is that algorithmic and hardware progress are no longer assumed to have steady exponential growth. Instead, I use standard semi-endogenous growth models from the economics literature to forecast how the two factors will grow in response to hardware and software R&D spending, and forecast that spending will grow over time. The upshot is that spending accelerates as AGI draws near, driving faster algorithmic (“software”) and hardware progress. ![](https://res.cloudinary.com/cea/image/upload/v1674446938/mirroredImages/3vDarp6adLPBTux5g/fitubwbvasgpjrvzwtur.png) *The key dynamics represented in the model.* *“Software” refers to the quality of algorithms for training AI.* The second major change is that I model the effects of AI systems automating economic tasks – and, crucially, tasks in hardware and software R&D – prior to AGI. I do this via the “**effective FLOP gap**:” the gap between AGI training requirements and training requirements for AI that can readily perform 20% of cognitive tasks (weighted by economic-value-in-2022). My best guess, defended in the report, is that you need 10,000X more effective compute to train AGI. To estimate the training requirements for AI that can readily perform x% of cognitive tasks (for 20 < x < 100), I interpolate between the training requirements for AGI and the training requirements for AI that can readily perform 20% of cognitive tasks. Modeling the cognitive labor done by pre-AGI systems makes timelines shorter. It also gives us a richer language for discussing and estimating *takeoff speeds*. The main metric I focus on is “*time from AI that could readily[[2]](#fn-9GdT7SLWwKGiRn6se-2) automate 20% of cognitive tasks to AI that could readily automate 100% of cognitive tasks”.* I.e. **time from 20%-AI to 100%-AI**.[[3]](#fn-9GdT7SLWwKGiRn6se-3) (This time period is what I’m referring to when I talk about the duration of takeoff, unless I say otherwise.) My personal probabilities[[4]](#fn-9GdT7SLWwKGiRn6se-4) are still very much in flux and are not robust.[[5]](#fn-9GdT7SLWwKGiRn6se-5) My current probabilities, conditional on AGI happening by 2100, are: * ~10% to a <3 month takeoff [this is especially non-robust] * ~25% to a <1 year takeoff * ~50% to a <3 year takeoff * ~80% to a <10 year takeoff Those numbers are time from 20%-AI to 100%-AI, for cognitive tasks in the *global economy*. One factor driving fast takeoff here is that I expect AI automation of AI R&D to happen before AI automation of the global economy.[[6]](#fn-9GdT7SLWwKGiRn6se-6) So by the time that 20% of tasks in the global economy could be readily automated, I expect that more than 20% of AI R&D will be automated, which will drive faster AI progress. If I instead start counting from the time at which 20% of AI R&D can be automated, and stop counting when 100% of AI R&D can be automated, this factor goes away and my takeoff speeds are slower: * ~10% to a <1 year takeoff * ~30% to a <3 year takeoff * ~70% to a <10 year takeoff (Unless I say otherwise, when I talk about the duration of takeoff I’m referring to the time 20%-AI to 100%-AI for cognitive tasks in the *global economy*, not AI R&D.) It’s important to note that my median AGI training requirements are pretty large - 1e36 FLOP using 2020 algorithms. **Using lower requirements makes takeoff significantly faster**. 
If my median AGI training requirements were instead ~1e31 FLOP with 2020 algorithms, my takeoff speeds would be: * ~40% to a <1 year takeoff * ~70% to a <3 year takeoff * ~90% to a <10 year takeoff The report also discusses the “*time from AGI to [superintelligence](https://en.wikipedia.org/wiki/Superintelligence)*”. My best guess is that [this takes less than a year](https://www.openphilanthropy.org/research/what-a-compute-centric-framework-says-about-takeoff-speeds/#18-what-about-the-time-from-agi-to-superintelligence) absent humanity choosing to go slower (which we definitely should!). Takeaways about *capabilities* takeoff speed ============================================ I find it useful to distinguish *capabilities* takeoff – how quickly AI capabilities improve around AGI – from *impact* takeoff – how quickly AI’s impact on a particular domain grows around AGI. For example, the latter is much more affected by deployment decisions and various bottlenecks. The metric “time from 20%-AI to 100%-AI” is about capabilities, not impact, because 20%-AI is defined as AI that *could* readily automate 20% of economic tasks, not as AI that actually *does* automate them. Even without any discontinuities, takeoff could last < 1 year ------------------------------------------------------------- Even if AI progress is continuous, without any sudden kinks, the slope of improvement could be steep enough that takeoff is very fast. Even in a continuous scenario, I put **~15%** on takeoff lasting <1 year, and **~60%** on takeoff lasting <5 years.[[7]](#fn-9GdT7SLWwKGiRn6se-7) Why? On a high level, because: * It might not be *that* much harder to develop 100%-AI than 20%-AI. * AI will probably be improving very quickly once we have 20%-AI. Going into more detail: * It might not be *that* much harder to develop 100%-AI than 20%-AI. + Perhaps chimps couldn’t perform 20% of tasks, even if they’d been optimized to do so. Humans have ~3X bigger brains than chimps by synapse count. That could mean that you only need to increase model size by 3X to go from 20%-AI to 100%-AI which, with [Chinchilla scaling](https://arxiv.org/abs/2203.15556), would take 10X more training FLOP. - You might need to increase model size by less than 3X. * With Chinchilla scaling, a 3X bigger model gets 3X more data during training. But human lifetime learning only lasts 1-2X longer than chimp lifetime learning.[[8]](#fn-9GdT7SLWwKGiRn6se-8) * So intelligence might improve *more* from a 3X increase in model size with Chinchilla scaling than from chimps to humans. + Brain size - IQ correlations suggest a similar conclusion. A 10% bigger brain is associated with ~5 extra IQ points. This isn't much, but extrapolating the relationship implies that a 3X bigger brain would be ~60 IQ points smarter; and ML models may gain *more* from scale than humans as bigger models will be trained on more data (unlike bigger-brained humans). + It is pretty hard to *partially* automate a job, e.g. for AI to automate 20% of the tasks. All of the tasks are interconnected in a messy way! Everything is set up for one human, with full context, to do the work. - Normally, we restructure business processes to allow for partial automation. But this takes a lot of effort and time - typically decades! If the transition from 20%-AI to 100%-AI happens in just a few years (as implied by the other arguments in this section) there won’t be time for this kind of restructuring to happen. 
- In this case, I still expect partial automation to happen earlier than full automation because it will still be *somewhat* easier to develop AI that can partially automate a job (with only small efforts restructuring processes) than AI that can fully automate the job (with similarly small efforts restructuring processes). But it might only be slightly easier. - In other words, the lack of time for restructuring processes narrows the difficulty gap between developing 20%-AI and 100%-AI, but doesn’t eliminate it entirely. - (This point is closely related to the “sonic boom” [argument](https://sideways-view.com/2018/02/24/takeoff-speeds/) for fast takeoff.) * AI will probably be improving very quickly once we have 20%-AI. + Algorithmic progress is already *very fast*. OpenAI [estimates](https://openai.com/blog/ai-and-efficiency/) a 16 month doubling time for algorithmic efficiency on ImageNet; a [recent Epoch analysis](https://epochai.org/blog/revisiting-algorithmic-progress) estimates just 10 months for the same quantity. My sense is that progress is if anything *faster* for LMs. + Hardware progress is already *very fast.* Epoch [estimates](https://www.lesswrong.com/posts/c6KFvQcZggQKZzxr9/trends-in-gpu-price-performance) that FLOP/$ has been doubling every 2.5 years. + Spending on AI development – AI training runs, AI software R&D, and hardware R&D – might rise rapidly after we have 20%-AI, and the strategic and economic benefits of AI are apparent. - 20%-AI could readily add ~$10tr/year to global GDP.[[9]](#fn-9GdT7SLWwKGiRn6se-9) Compared to this figure, investments in hardware R&D (~$100b/year) and AI software R&D (~$20b/year) are low. - For <1 year takeoffs, fast scale-up of spending on AI training runs, simply by using a larger fraction of the world’s chips, plays a central role. + Once we have 20%-AI (AI that can readily[[10]](#fn-9GdT7SLWwKGiRn6se-10) automate 20% of cognitive tasks in the *general economy*), AI itself will accelerate AI progress. The easier AI R&D is to automate compared to the general economy, the bigger this effect. - How big might this effect be? This is a massive uncertainty for me but here are my current guesses. By the time we have 20%-AI I expect: * Conservatively, AI will have automated 20% of cognitive tasks in AI R&D, speeding up AI R&D progress by a factor of ~1.3. I think it’s unlikely (~15%) the effect is smaller than this. * Somewhat aggressively, AI will have automated 40% of cognitive tasks in AI R&D, speeding up AI R&D progress by a factor of ~1.8. I think there’s a decent chance (~30%) of getting bigger effects than this. - The speed-up increases over time as AI automates more of AI R&D. When we simulate this dynamic we find AI automation reduces “time from 20%-AI to 100%-AI” by ~2.5X. + Combining the above, I think the “effective compute” on training runs (which incorporates better algorithms) will probably rise by >5X each year between 20%-AI and 100%-AI, and could rise by 100X each year. We should assign *some* probability to takeoff lasting >5 years --------------------------------------------------------------- I have **~40%** on takeoff lasting >5 years. On a high level, my reasons are: * It might be a *lot* harder to develop 100%-AI than 20%-AI. * AI progress might be *slower* once we reach 20%-AI than it is today. Going into more detail: * It might be a *lot* harder to develop 100%-AI than 20%-AI. + The key reason is that **AI may have a strong comparative advantage at some tasks over other tasks**, compared with humans. 
Its comparative advantages might allow it to automate 20% of tasks long before it can automate the full 100%. The bullets below expand on this basic point. + AI, and computers more generally, already achieve superhuman performance in many domains by exploiting massive AI-specific advantages (lots of experience/data, fast thinking, reliability, memorisation). It might be far harder for AI to automate tasks where these advantages aren’t as relevant. + We can visualise this using (an [adjusted version](https://docs.google.com/presentation/d/1g-R0qLqHtkDwCSizO37WQsZqECIL9HFcWvsWYJeLCIo/edit#slide=id.p) of) the graph Dan Kokotajlo drew in his [review](https://docs.google.com/document/d/1GwT7AS_PWpglWWrVrpiMqeKiJ_E2VgAUIG5tTdVhVeM/edit#heading=h.rq4krnj82zba) of Joe Carlsmith’s report on power-seeking AI. ![alt_text](https://res.cloudinary.com/cea/image/upload/v1674446938/mirroredImages/3vDarp6adLPBTux5g/nvqramxyxzb2yhulypw6.png) We’re currently in the left tail, where AI’s massive comparative advantages allow it to automate certain tasks despite being *much* less capable than humans overall. If AI automates 20% of tasks before the big hump, or the hump is wide, it will be much easier to develop 20%-AI than 100%-AI. + Outside of AI, there’s generally a large precedent for humans finding simple, dumb ways to automate significant fractions of labour. - We may have automated >80% of the cognitive tasks that humans performed as of 1700 (most people worked in agriculture), but using methods that don’t get us close to automating 100% of them. - By analogy, AI may automate 20% of 2020 cognitive tasks using methods that don’t get AI close to automating 100% of them. If this happens gradually over many decades, it might feel like “progress as normal” rather than “AI is on the cusp of having a transformative economic impact”. + Within AI, there are many mechanisms that could give AI comparative advantages at some tasks but not others. AI is better placed to perform tasks with the following features: - AI can learn to perform the task with “short horizon training”, without requiring “long horizon training”.[[11]](#fn-9GdT7SLWwKGiRn6se-11) - The task is similar to what AI is doing during pre-training (e.g. similar to “next word prediction”, in the case of large language models). - It’s easier to get large amounts of training data for the task, e.g. from human demonstrations. - Memorising lots of information improves task performance. - It’s important to “always be on” (no sleep), or to consistently maintain focus (no getting bored or slacking). - It’s easier to verify that an answer is correct than to generate the correct answer. (This helps to generate training data and allows us to trust AI outputs.) - The task doesn’t require strong sim2real transfer. - The downside of poor performance is limited. (We might not trust AI in scenarios where a mistake is catastrophic, e.g. driving.) + Human brains were “trained” by evolution and then lifetime learning in a pretty different way to how AIs are trained, and humans seem to have pretty different brain architectures to AIs in many ways. So humans might have big comparative advantages over AIs in certain domains. This could make it very difficult to develop 100%-AI. + [GPT-N](https://www.lesswrong.com/posts/k2SNji3jXaLGhBeYP/extrapolating-gpt-n-performance#Comparisons_and_limits) looks like it will solve some LM benchmarks with **~4 OOMs** less training FLOP. In other words, it has strong “comparative advantages” at some benchmarks over others. 
I expect cognitive tasks throughout the entire economy to have more variation along many dimensions than these LM benchmarks, suggesting this example *underestimates* the difficulty gap between developing 20%-AI and 100%-AI. + It’s notable that most of the evidence discussed [above](#Even_without_any_discontinuities__takeoff_could_last___1_year) for a *small* difficulty gap between 20%-AI and 100%-AI (in particular “chimps vs humans” and “brain size - IQ correlations”) completely ignores this point about “large comparative advantage at certain tasks” by assuming intelligence is on a one-dimensional spectrum.[[12]](#fn-9GdT7SLWwKGiRn6se-12) + I find it most plausible that there’s a big difficulty gap between 20%-AI and 100%-AI if 100%-AI is very difficult to develop. * AI progress might be *slower* once we reach 20%-AI than it is today (though my best guess is that it will be faster). + A lot of recent AI progress has come from increasing the fraction of computer chips used to train AI. This can only go on for so long! + Hardware progress might be much more difficult by this time, as we approach the ultimate limits of the current hardware paradigm. + Both of these reasons are more likely to apply if 20%-AI is hard to develop, i.e. if timelines are long. + Above I discussed reasons why AI progress will probably be *faster* once we have 20%-AI: larger total $ investments in AI and AI automation. But these reasons may not apply strongly: - It might be hard to quickly convert “more $” into faster AI progress. * It may take years for new talent to be able to contribute to the cutting edge (especially with hardware R&D). So growing the total quality-adjusted talent in AI R&D might be slow. * Even if you could quickly double the amount of quality-adjusted R&D talent, that less-than-doubles the rate of progress due to duplication of effort and difficulties parallelising work (“nine mothers can’t make a baby in one month”). * Hard to scale up global production of AI chips, due to the immense complexity of the supply chain. - Limited effects of AI automation on AI R&D progress. * There will be some lags before AI is deployed in AI R&D. * Progress will be bottlenecked by the tasks AI can still not perform. + After we reach 20%-AI, we may become more concerned about various AI risks and deliberately slow down. Takeoff won’t last >10 years unless 100%-AI is *very* hard to develop --------------------------------------------------------------------- As discussed above, AI progress is already very fast and will probably become faster once we have 20%-AI. If you think that even 10 years of this fast rate of progress won’t be enough to reach 100%-AI, that implies that 100%-AI is *way* harder to develop than 20%-AI. In addition, I think that today’s AI is quite far from 20%-AI: its economic impact is pretty limited (<$100b/year), suggesting it can’t readily[[13]](#fn-9GdT7SLWwKGiRn6se-13) automate even 1% of tasks. So I personally expect 20%-AI to be pretty difficult to develop compared to today’s AI. This means that, if takeoff lasts >10 years, 100%-AI is a lot harder to develop than 20%-AI, which is itself a lot harder to develop than today’s AI. This all only works out if you think that 100%-AI is *very* difficult to develop. 
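One way to see the quantitative tension here: given a guess at the effective FLOP gap between 20%-AI and 100%-AI and a growth rate for effective training compute, the implied takeoff length is a one-line calculation. A minimal sketch, using the illustrative figures quoted above rather than outputs of the actual model:

```python
import math

def takeoff_years(effective_flop_gap, annual_growth_factor):
    """Years from 20%-AI to 100%-AI if 'effective' training compute
    (physical FLOP times algorithmic progress) grows by a constant factor per year."""
    return math.log(effective_flop_gap) / math.log(annual_growth_factor)

gap = 1e4  # best-guess effective FLOP gap between 20%-AI and 100%-AI (10,000X)
for growth in (5, 25, 100):  # >5X/year seems likely; ~100X/year is possible
    print(f"{growth}X/year -> {takeoff_years(gap, growth):.1f} years")
# 5X/year   -> ~5.7 years
# 25X/year  -> ~2.9 years
# 100X/year -> ~2.0 years
```

This toy calculation holds the growth factor fixed, whereas the full model lets it rise over time as AI automates more of AI R&D, and it takes the gap as given rather than as a distribution; both simplifications matter for the tails.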
[Playing around](https://takeoffspeeds.com/playground.html) with the compute-centric model, I find it hard to get >10 year takeoff without assuming that 100%-AI would have taken >=1e38 FLOP to train with 2020 algorithms (which was the conservative “long horizon” anchor in [Bio Anchors](https://docs.google.com/spreadsheets/d/1TjNQyVHvHlC-sZbcA7CRKcCp0NxV6MkkqBvL408xrJw/edit#gid=505210495)). Time from AGI to superintelligence is probably less than 1 year --------------------------------------------------------------- Recall that by AGI I mean AI that can readily perform ~100% of cognitive tasks as well as a human professional. By superintelligence I mean AI that very significantly surpasses humans at ~100% of cognitive tasks. My best guess is that the time between these milestones is less than 1 year, the primary reason being the massive amounts of AI labour available to do AI R&D, once we have AGI. [More.](https://www.openphilanthropy.org/research/what-a-compute-centric-framework-says-about-takeoff-speeds/#18-what-about-the-time-from-agi-to-superintelligence) Takeaways about *impact* takeoff speed ====================================== Here I mostly focus on economic impact. If we align AGI, I weakly expect impact takeoff to be slower than capabilities takeoff -------------------------------------------------------------------------------------- I think there will probably just be a few years (~3 years) from 20%-AI to 100%-AI (in a *capabilities* sense). But, if AI is aligned, I think time from *actually deploying* AI in 20% to >95% of economic tasks will take many years (~10 years): * **Standard deployment lags**. It typically takes decades for new technologies to noticeably affect GDP growth, e.g. computers and the internet. * **Political economy**. Workers and organizations on course to be replaced by AI will attempt to block its deployment. * **Caution**. We should be, and likely will be, very cautious about handing over ~all decision making to advanced AI, even if we have strong evidence that it’s safe and aligned. (E.g. imagine the resistance to letting AI run the government.) This would probably mean that humans remain a “bottleneck” on AI’s economic impact for some time. + Even if we have compelling reasons to hand over all decisions to AI, I still expect there to be a lot of (perhaps *unreasonable*) caution – AIs making decisions will feel creepy and weird to many people. I’m not confident about this. Here are some countervailing considerations: * **Less incentive to deploy AI before superintelligence**. + If a lab faces a choice between deploying their current SOTA AIs in the economy vs investing in improving SOTA, they may choose the latter if they think they could automate AI R&D and thereby accelerate their lab’s AI progress. + Eventually though, after the “better AI → faster AI R&D progress” feedback loop has fizzled out (perhaps after they’ve developed superhuman AI), labs’ incentive will simply be to deploy. There could be an *extremely fast* impact takeoff when labs suddenly start trying to deploy their superhuman AIs. * **Superhuman AIs quickly circumvent barriers to deployment.** Perhaps pre-AGI systems mostly aren’t deployed due to barriers like regulations. But superhuman aligned AI might be able to quickly navigate these barriers, e.g. good-faith convincing human regulators to deploy it more widely so it can cure diseases. Many of the above points, on both sides, apply more weakly to the impact of AI on *AI R&D* than on the general economy. 
For example, I expect regulation to apply less strongly in AI R&D, and also for lab incentives to favour deployment of AIs in AI R&D (especially software R&D). So I expect impact takeoff *within AI R&D* to match capabilities takeoff fairly closely. If we don’t align AGI, I expect impact takeoff to be faster than capabilities takeoff ------------------------------------------------------------------------------------- If AGI isn’t aligned, then AI’s impact could increase very suddenly at the point when misaligned AIs first collectively realise that they can disempower humanity and try to do so. Before this point, human deployment decisions (influenced by regulation, general caution, slow decision making, etc) limit AI’s impact; afterwards AIs forcibly circumvent these decisions.[[14]](#fn-9GdT7SLWwKGiRn6se-14) Some chance of <$3tr/year economic impact from AI before we have AI that could disempower humanity -------------------------------------------------------------------------------------------------- I’m at ~15% for this. (For reference, annual revenues due to AI today are often estimated at ~$10-100b,[[15]](#fn-9GdT7SLWwKGiRn6se-15) though this may be smaller than AI’s impact on GDP.) Here are some reasons this could happen: * **Fast capabilities takeoff.** There might be only a few years from “AI that could readily add $3tr/year to world GDP” to AI that could disempower humanity. See [above arguments](#Even_without_any_discontinuities__takeoff_could_last___1_year). * **Significant lags to deploying AI systems** in the broader economy. See [above points](#If_we_align_AGI__I_weakly_expect_impact_takeoff_to_be_slower_than_capabilities_takeoff). + As above, I expect there will be fewer lags to deployment in AI R&D. I’m **not** counting work done by AI within AI R&D as counting towards the “$3tr”. * **Labs may prioritise improving SOTA AI over deploying it.** Discussed [above](#If_we_align_AGI__I_weakly_expect_impact_takeoff_to_be_slower_than_capabilities_takeoff); this could continue until labs develop AI that could disempower humanity. + On the other hand, even if leading labs strongly prioritise deploying AIs internally, they’ll probably expend *some* effort plucking the low-hanging fruit for deploying it in the broader economy to make money and generate more investment. And even if leading labs don’t do this, other labs may specialise in training AI to be deployed in the broader economy. Why am I not higher on this? * $3tr/year only corresponds to automating ~6% of cognitive tasks;[[16]](#fn-9GdT7SLWwKGiRn6se-16) I expect AI will be able to perform >60%, and probably >85% of cognitive tasks before it can disempower humanity. That’s a pretty big gap in AI capabilities! * People will be *actively trying* to create economic value from AI and also *actively trying* to prevent AI from being able to disempower humanity. + We’ll train AI specifically to be good at economically valuable tasks and *not* train AI to be good at “taking over the world” tasks (modulo the possibility of using AI in the military). + We’ll make adjustments to workflows etc. to facilitate AI having economic impact, and (hopefully!) make adjustments to protect against AI takeover. * I have a fairly high estimate of the difficulty of developing AGI. I think we’re unlikely to develop AGI by 2030, by which time AI may *already* be adding >$3tr/year to world GDP. My “15%” probability here feels especially non-robust, compared to the others in this post. 
Takeaways about AI timelines ============================ Multiple reasons to have shorter timelines compared to what I thought a few years ago ------------------------------------------------------------------------------------- Here’s a list (including some repetition from above): * **Growing $ investment in training runs, software R&D and hardware R&D**, once AI can readily automate non-trivial fractions of cognitive labour (e.g >3%). * **AI automation of AI R&D accelerating AI progress.** Firstly, it may be easier to fully automate AI R&D than to fully automate cognitive labour in general. Secondly, even *partial* R&D automation can significantly speed up AI progress. * “**Swimming in runtime compute**”. + If AGI can’t be trained by ~2035 (as I think is likely), then we’ll have a *lot* of runtime compute lying around, e.g. enough to run 100s of millions of SOTA AIs.[[17]](#fn-9GdT7SLWwKGiRn6se-17) + It may be possible to leverage this runtime compute to “boost” the capabilities of pre-AGI systems. This would involve using existing techniques for this like “chain of thought”, “best of N sampling” and MCTS, as well as finding novel techniques. As a result, we might fully automate AI R&D much sooner than we otherwise would. + I think this factor alone could easily shorten timelines by ~5 years if AGI training requirements are my best guess (1e36 FLOP with 2020 algorithms). It shortens timelines more(/less) if training requirements are bigger(/smaller). * **Faster software progress.** I put more probability on algorithmic progress for training AGI being very fast than previously. This is from fast software progress for LMs (e.g. Chinchilla scaling) and [recent analysis](https://epochai.org/blog/revisiting-algorithmic-progress) from [Epoch](https://epochai.org/). Harder than I thought to avoid AGI by 2060 ------------------------------------------ To avoid AGI by 2060, we cannot before 2040 develop “AI that is so good that AGI follows within a couple of decades due to [rising investment and/or AI itself accelerating AI R&D progress]”. As discussed [above](#We_should_assign_some_probability_to_takeoff_lasting__5_years), this latter target might be much easier to hit. So my probability of AGI by 2060 has risen. Relatedly, I used to update more on growth economist-y concerns like “ah but if AI can automate 90% of tasks but not the final 10%, that will bottleneck its impact”. Now I think “well if AI automates 90% of cognitive tasks that will significantly accelerate AI R&D progress and attract more investment in AI, so it won’t be too long before AI can perform 100%”. Takeaways about the relationship between takeoff speed and AI timelines ======================================================================= The easier AGI is to develop, the faster takeoff will be -------------------------------------------------------- Probably the biggest determinant of takeoff speeds is the difficulty gap between 100%-AI and 20%-AI. If you think that 100%-AI isn’t very difficult to develop, this upper-bounds how large this gap can be and makes takeoff faster. 
![alt_text](https://res.cloudinary.com/cea/image/upload/v1674446938/mirroredImages/3vDarp6adLPBTux5g/ftoydku4dgrn0b4gmv1b.png) ![alt_text](https://res.cloudinary.com/cea/image/upload/v1674446938/mirroredImages/3vDarp6adLPBTux5g/qez8m6efdjruh7mguhga.png) *In the lower scenario AGI is easier to develop and, as a result, takeoff is faster.* Holding AGI difficulty fixed, slower takeoff → earlier AGI timelines -------------------------------------------------------------------- If takeoff is slower, there is a *bigger* difficulty gap between AGI and “AI that significantly accelerates AI progress”. Holding fixed AGI difficulty, that means “AI that significantly accelerates AI progress” happens *earlier*. And so AGI happens earlier. (This point has been [made before](https://www.lesswrong.com/posts/AiaAq5XeECg7MpTL7/for-every-choice-of-agi-difficulty-conditioning-on-gradual).) ![alt_text](https://res.cloudinary.com/cea/image/upload/v1674446938/mirroredImages/3vDarp6adLPBTux5g/yjkvzp0wgxcigzzzm4rs.png) ![alt_text](https://res.cloudinary.com/cea/image/upload/v1674446938/mirroredImages/3vDarp6adLPBTux5g/mtmrzv7uvcmcevgvzxbq.png) [[18]](#fn-9GdT7SLWwKGiRn6se-18) *Two scenarios with the same AGI difficulty. In the lower scenario takeoff is slower and, as a result, AGI happens sooner.* The model in the report quantifies this tradeoff. When I [play around](https://takeoff-speeds-dev.firebaseapp.com/playground.html) with it I find that, holding the difficulty of AGI constant, decreasing the time from 20%-AI to 100%-AI by two years delays 100%-AI by three years.[[19]](#fn-9GdT7SLWwKGiRn6se-19) I.e. make takeoff **two** years shorter → delay 100%-AI by **three** years. How does the report relate to previous thinking about takeoff speeds? ===================================================================== Eliezer Yudkowsky’s Intelligence Explosion Microeconomics --------------------------------------------------------- * *[Intelligence explosion microeconomics](https://intelligence.org/files/IEM.pdf)* (IEM) doesn’t argue for takeoff happening in weeks rather than in years, so doesn’t speak to whether takeoff is faster or slower than I conclude. [More.](https://docs.google.com/document/d/1rw1pTbLi2brrEP0DcsZMAVhlKp6TKGKNUSFRkkdP_hs/edit#heading=h.r0zdd78w6nh4) * I think of my report as providing one possible quantitative framework for Intelligence Explosion Microeconomics. It makes IEM’s qualitative claims quantitative by drawing on empirical evidence about the returns to hardware + software R&D, how intelligence scales with additional compute and better algorithms, and how cognitive output scales with intelligence. [More.](https://docs.google.com/document/d/1rw1pTbLi2brrEP0DcsZMAVhlKp6TKGKNUSFRkkdP_hs/edit#heading=h.lstzm0ohvw5h) * I think Eliezer’s thinking about takeoff speeds is influenced by his interpretation of the chimp-human transition. I chatted to Nate Soares about this transition and its implications for AI takeoff speeds; I describe my understanding of Nate’s view [here](https://docs.google.com/document/d/1DZy1qgSal2xwDRR0wOPBroYE_RDV1_2vvhwVz4dxCVc/edit#heading=h.apdvo0uwo5qe). * I currently and tentatively put ~6% on a substantial discontinuity[[20]](#fn-9GdT7SLWwKGiRn6se-20) in AI progress around the human range; but this was not the main focus of my research. 
[More.](https://docs.google.com/document/d/1rw1pTbLi2brrEP0DcsZMAVhlKp6TKGKNUSFRkkdP_hs/edit#heading=h.37p97cga2xd) Paul Christiano --------------- I think Paul Christiano’s [2018 blog post](https://sideways-view.com/2018/02/24/takeoff-speeds/) does a good job of arguing that takeoff is likely to be continuous. It also claims that takeoff will probably be slow. My report highlights the possibility that takeoff could be continuous but still be pretty fast, and the Monte Carlo analysis spits out the probability that takeoff is “fast” according to the definitions in the 2018 blog post. ![alt_text](https://res.cloudinary.com/cea/image/upload/v1674446938/mirroredImages/3vDarp6adLPBTux5g/pmkmab1da9a3bxemgiqx.png) [More](https://docs.google.com/document/d/1rw1pTbLi2brrEP0DcsZMAVhlKp6TKGKNUSFRkkdP_hs/edit#heading=h.5kvqr0xxy20p). Notes ----- --- 1. By “AI can *readily* perform a task” I mean “performing the task with AI could be done with <1 year of work spent engineering and rearranging workflows, and this would be profitable”. [↩︎](#fnref-9GdT7SLWwKGiRn6se-1) 2. Recall that “readily” means “automating the task with AI would be profitable and could be done within 1 year”. [↩︎](#fnref-9GdT7SLWwKGiRn6se-2) 3. 100%-AI is different from AGI only in that 100%-AI requires that we have enough runtime compute to actually automate **all** instances of cognitive tasks that humans perform, whereas AGI just requires that AI could perform any (but not all) cognitive tasks. [↩︎](#fnref-9GdT7SLWwKGiRn6se-3) 4. To arrive at these probabilities I took the results of the model’s Monte Carlo analysis and adjusted them based on the model’s limitations – specifically the model excluding certain kinds of discontinuities in AI progress and ignoring certain frictions to developing and deploying AI systems. [↩︎](#fnref-9GdT7SLWwKGiRn6se-4) 5. Not robust means that further arguments and evidence could easily change my probabilities significantly, and that it’s likely that in hindsight I’ll think these numbers were unreasonable given the current knowledge available to me. [↩︎](#fnref-9GdT7SLWwKGiRn6se-5) 6. Why? Firstly, I think the cognitive tasks in AI R&D will be particularly naturally suited for AI automation, e.g. because there is lots of data for writing code, AI R&D mostly doesn’t require manipulating things in the real world, and indeed [AI is already helping with AI R&D](https://www.lesswrong.com/posts/camG6t6SxzfasF42i/a-year-of-ai-increasing-ai-progress). Secondly, I expect AI researchers to prioritise automating AI R&D over other areas because they’re more familiar with AI R&D tasks, there are fewer barriers to deploying AI in their own workflows (e.g. regulation, marketing to others), and because AI R&D will be a very valuable part of the economy when we’re close to AGI. [↩︎](#fnref-9GdT7SLWwKGiRn6se-6) 7. These probabilities are higher than the ones above because here I’m ignoring types of discontinuities that aren’t captured by having a small “effective FLOP gap” between 20%-AI and 100%-AI. [↩︎](#fnref-9GdT7SLWwKGiRn6se-7) 8. Chimps reach sexual maturity around 7 and can live until 60, suggesting humans have 1-2X more time for learning rather than 3X. [↩︎](#fnref-9GdT7SLWwKGiRn6se-8) 9. World GDP is [~$100tr](https://data.worldbank.org/indicator/NY.GDP.MKTP.CD), about [half of which](https://ourworldindata.org/grapher/labor-share-of-gdp?tab=chart&country=NGA~NPL~HRV~OWID_WRL) is paid to human labour. If AI automates 20% of that work, that’s worth ~$10tr/year. [↩︎](#fnref-9GdT7SLWwKGiRn6se-9) 10. 
Reminder: “AI can readily automate X” means “automating X with AI would be profitable and could be done within 1 year”. [↩︎](#fnref-9GdT7SLWwKGiRn6se-10) 11. The “horizon length” concept is from [Bio Anchors](https://docs.google.com/document/d/1IJ6Sr-gPeXdSJugFulwIpvavc0atjHGM82QjIfUSBGQ/edit#). Short horizons means that each data point requires the model to “think” for only a few seconds; long horizons means that each data point requires the model to “think” for months, and so training requires much more compute. [↩︎](#fnref-9GdT7SLWwKGiRn6se-11) 12. Indeed, my [one-dimensional model](https://docs.google.com/document/d/1DZy1qgSal2xwDRR0wOPBroYE_RDV1_2vvhwVz4dxCVc/edit#heading=h.dor321f7pjrl) of takeoff speeds predicts faster takeoff. [↩︎](#fnref-9GdT7SLWwKGiRn6se-12) 13. Reminder: “AI can readily automate X” means “automating X with AI would be profitable and could be done within 1 year”. [↩︎](#fnref-9GdT7SLWwKGiRn6se-13) 14. This falls under “Superhuman AIs quickly circumvent barriers to deployment”, from above. [↩︎](#fnref-9GdT7SLWwKGiRn6se-14) 15. E.g. [here](https://medium.com/dataseries/artificial-intelligence-market-size-a99e194c184a), [here](https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market), [here](https://www.globenewswire.com/news-release/2022/04/19/2424179/0/en/Artificial-Intelligence-Market-Size-to-Surpass-Around-US-1-597-1-Bn-By-2030.html), [here](https://www.statista.com/statistics/941835/artificial-intelligence-market-size-revenue-comparisons/). I don’t know how reliable these estimates are, or even their methodologies. [↩︎](#fnref-9GdT7SLWwKGiRn6se-15) 16. World GDP is [~$100tr](https://data.worldbank.org/indicator/NY.GDP.MKTP.CD), about [half of which](https://ourworldindata.org/grapher/labor-share-of-gdp?tab=chart&country=NGA~NPL~HRV~OWID_WRL) is paid to human labour. If AI automates 6% of that work, that’s worth ~$3tr/year. [↩︎](#fnref-9GdT7SLWwKGiRn6se-16) 17. Here's a [rough BOTEC](https://docs.google.com/spreadsheets/d/1UK1GizJlmgA9WB4qKRI4EukmGuMkkxwn9GrvptgbA1c/edit#gid=0) (h/t Lukas Finnveden). [↩︎](#fnref-9GdT7SLWwKGiRn6se-17) 18. [Link to diagrams](https://docs.google.com/presentation/d/1IA-6Gx6Ktc8AzM-474TXofPSYw8tPN_WBN5DaxoShsk/edit#slide=id.g1b614de309e_0_0). [↩︎](#fnref-9GdT7SLWwKGiRn6se-18) 19. Reminder: x%-AI is AI that could readily automate x% of cognitive tasks, weighted by their economic value in 2020. [↩︎](#fnref-9GdT7SLWwKGiRn6se-19) 20. By “ substantial discontinuous jump” I mean “>10 years of progress at previous rates occurred on one occasion”. (h/t AI impacts for that definition) [↩︎](#fnref-9GdT7SLWwKGiRn6se-20)
b7985272-11ac-4b89-bd39-e5c94d879a37
trentmkelly/LessWrong-43k
LessWrong
Meetup : West LA: Lightning Talks Discussion article for the meetup : West LA: Lightning Talks WHEN: 27 May 2015 07:00:00PM (-0700) WHERE: 11066 Santa Monica Blvd, LA, CA How to Find Us: Go into this Del Taco. We will be in the back room if possible. Parking is free in the lot out front or on the street nearby. Discussion: Lightning talks. Everyone is encouraged to make a 5-10 minute talk! No prior exposure to Less Wrong is required; this will be generally accessible. Discussion article for the meetup : West LA: Lightning Talks
40d5c738-3739-490a-b94c-46c23715784a
StampyAI/alignment-research-dataset/arxiv
Arxiv
Modeling Human-AI Team Decision Making

1 Introduction
---------------

Group decision making has always been important in organizations and missions. And now, intelligent agents have become fundamental to everyday life: assistants on mobile devices, pedagogical agents in tutoring systems, and robots collaborating with humans. As such, integrating AI agents into human groups while being aware of their complementary strengths and abilities has become vital (Crandall et al. [2018](#bib.bib8); Gaur et al. [2016](#bib.bib21); Kamar, Hacker, and Horvitz [2012](#bib.bib24); Shirado and Christakis [2017](#bib.bib39); Wang et al. [2016](#bib.bib45)). We explore models for decision making in mixed teams while building on the constructs of prospect theory and appraisal systems. Groups/teams frequently encounter decisions involving varying amounts of risk and reward. Cumulative Prospect Theory (Tversky and Kahneman [1992a](#bib.bib43)) (formerly Prospect Theory (Kahneman and Tversky [1979a](#bib.bib22))) provides a basis for such decision-making by establishing that individuals make decisions based on the potential value of losses and gains among the set of available options. Cumulative Prospect Theory proposes that individuals compute an internal evaluation for each prospect that is determined by a value function and a probability weighting function. The value function is S-shaped and asymmetrical, capturing loss aversion. The probability weighting function encodes the hypothesis that individuals overreact to small probability events, but under-react to large probability events. Group decision making is based on how team members appraise others. Team members’ opinions of others depend on the issue and evolve over time, based on their performance. Such opinions can be represented by an appraisal network (Mei et al. [2017](#bib.bib29)), or equivalently by its corresponding row-stochastic appraisal matrix. The edges of this network are weighted and depict trust/distrust, friendship/animosity, or, more generally, the degree to which a team member is influenced by particular others. The eigenvector centrality of each person in an irreducible aperiodic appraisal matrix is a summary measure of each team member’s relative importance in determining the team’s decision making. In order to understand how a human-AI team reaches a decision in uncertain environments, we carried out cyber-human experiments in which a team is asked to answer a sequence of questions. The task is to answer intellective questions from different categories such as history, science and technology, etc. Every team consists of four members, each with access to his/her own AI agent. The AI agents give the correct answer with a fixed probability (unknown to the team members); however, the baseline probabilities vary across the agents. Each question is answered in four timed phases. In the first phase, every team member records his/her individual response for the question. In the second phase, the response of every team member is displayed on the screen and a chat plugin (the only communication channel) is enabled for communication. Team members then record their choices and decide whether or not to use an AI agent (and which AI agent to use) in an optional third phase. In the fourth and final phase, the team submits an answer. The correct answer to each question is displayed at the end of each round. 
Note that if the group has relied on the incorrect response of an AI agent, then the group’s trust in that agent (and their other available AI agents) may be eroded. After every five questions an influence survey is presented. This is filled in by each team member to record the relative direct influence of each of his/her teammates on his/her opinion. The team members are also asked to rate the accuracy of all four AI agents based on their interactions with them. The first step of a team’s decision making consists of deliberating and choosing one of the multiple choice options or choosing to consult an AI agent. In case a team chooses to consult an AI agent, the second decision making step consists of integrating the agent’s answer with the team’s choices and reporting a final answer. Each possible action in a decision making step is associated with a probability of success and a reward. From the viewpoint of Prospect Theory, each action is a gamble and the team needs to choose between multiple gambles. We observe the actions by the teams but not the perceived probability of success, probability weighting function, or the valuation function. We investigate and compare four models that predict the decisions of the teams. The first two models capture the appraisal process in a team, while the last two models capture the appraisal process as well as decision making under risks. We outline these four models below. The first model, NB (Naive Bayes), captures the accuracy of a human/AI-agent using a beta distribution that is updated at each round (after observing whether it was correct or incorrect) using Bayes rule. A Naive Bayes assumption is used to integrate the responses of the human/AI-agents and obtain the probability of using an option or an agent. The second model, CENT (Centrality), integrates individual responses through an interpersonal influence system. The probability of the team choosing an option is computed as the sum of the eigenvector centrality values of each individual choosing that option. The assumption underlying this computation is always satisfied in our data. A similar weighting process is used to integrate the team’s evaluation of the AI-agents. The third model, PT-NB (Prospect Theory coupled with Naive Bayes), uses Prospect Theory to analyze the actions of the team as a set of gambles. The probabilities of success and reward of each gamble are computed as in the model NB. The team chooses among these gambles based on Prospect Theory parameters of the team (learned through an initial training sequence). The final model, PT-CENT (Prospect Theory coupled with Centrality), again uses Prospect Theory to analyze the actions of the team as a set of gambles. The probabilities of success and reward of each gamble are computed as in the model CENT. A team again chooses among these gambles based on Prospect Theory. We measure the accuracy of the above four models using our experimental data. This measurement is non-trivial since only the team’s chosen action is observed and not the team’s detailed preferences for each of the actions. We resolve this difficulty by considering the set of probability distributions (on the actions) over which the chosen action is dominant and computing the minimum distance (based on cross-entropy loss) over this set to the model’s predicted probability distribution. 
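To illustrate how such a measurement can be implemented, here is a minimal sketch of one way to compute the loss for a single decision. It assumes the distance is the cross-entropy $-\sum_i q_i \log p_i$ between a candidate distribution $q$ (constrained so that the observed action is dominant) and the model's predicted distribution $p$; the authors' exact loss and implementation may differ.

```python
import itertools
import math

def min_cross_entropy_loss(p_model, chosen):
    """Minimum of H(q, p_model) = -sum_i q_i * log(p_model[i]) over all
    distributions q in which `chosen` has (weakly) the largest probability.

    The objective is linear in q, so the minimum is attained at a vertex of the
    constraint polytope; the vertices are the uniform distributions over subsets
    of actions that contain `chosen`.
    """
    others = [i for i in range(len(p_model)) if i != chosen]
    best = float("inf")
    for k in range(len(others) + 1):
        for subset in itertools.combinations(others, k):
            support = (chosen,) + subset
            q = 1.0 / len(support)
            loss = -sum(q * math.log(p_model[i]) for i in support)
            best = min(best, loss)
    return best

# Example: a model spreads probability over 8 actions; the team chose action 2.
p_model = [0.05, 0.10, 0.40, 0.05, 0.10, 0.10, 0.10, 0.10]
print(min_cross_entropy_loss(p_model, chosen=2))  # ~= -log(0.4) ~= 0.92
```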
2 Related Work
---------------

Group performance is not simply a sum of individual performance, but ruled by patterns of interactions, influence, and other relationships among group members. We know that transactive memory systems (TMS) (Wegner [1987](#bib.bib46)) are activated, in which group members’ levels of expertise and potential contributions to tasks are appraised. We know that interpersonal influence systems are automatically generated in groups (Friedkin and Johnsen [2011](#bib.bib18)). And we know that groups’ opinions on multidimensional issues are generally constrained to a decision space of feasible positions that is the group’s convex hull of initial displayed positions (Friedkin et al. [2019](#bib.bib19)). An excellent survey of human-AI teaming has been put forth in a recent paper (O’Neill et al. [2020](#bib.bib32)). (DeCostanza et al. [2018](#bib.bib11)) discusses the mechanisms for enhancing teamwork in human-AI teams and outlines the critical scientific questions that must be addressed to enable this vision. Group constructs have also been proposed in machine learning—experts, weak learners, crowd-sourced workers—to achieve goals that no single individual can accomplish on its own. In the case of boosting (Schapire [1990](#bib.bib37)), one can obtain a “strong learner” that is able to predict arbitrarily accurately based on an ensemble of “weak learners” whose predictions are slightly better than random guessing. In the case of “learning from expert advice” (Cesa-Bianchi et al. [1997](#bib.bib5)), an algorithm works with a group of $K$ arbitrary “experts” who give daily “stock predictions”, and performs nearly as well as the “expert” that has the best “track record” at any given time. It is an iterative game in which, in each iteration, the “player” must make a decision, and the experts with the best track record may change over time. The “Multi-armed Bandits” (MAB) problems (Auer, Cesa-Bianchi, and Fischer [2002](#bib.bib1)) can be thought of as a variant of the problem of “learning from expert advice” in which a “player” can only observe the payoff of the chosen “expert” at each iteration. In the case of “crowd sourcing” (Welinder et al. [2010](#bib.bib47)), an algorithm aggregates the inputs of a large group of unreliable “participants”, evaluates each “participant”, and then infers the ground truth. Humans and AI are clearly different in their cognitive and processing capabilities (Cummings [2014](#bib.bib9)). Groups with AI involvement should be designed so that the raw computational and search power of computers for state-space reduction can be combined with group inductive reasoning, especially in uncertain environments. What is the optimal group-AI design for a given decision? This is a question pervading all kinds of groups that are oriented to specific types of issues. Taxonomies and ontologies for characterizing group decision-making have been defined (Sheridan and Verplank [1978](#bib.bib38); Endsley and Kaber [1999](#bib.bib14); Rasmussen [1987](#bib.bib36)) in order to investigate the optimal composition of groups. The behavior of groups with AI involvement must be observable and predictable. This is challenging in complex uncertain environments. While groups often adopt satisficing strategies (Simon [1957](#bib.bib40); Friedkin et al. [2019](#bib.bib19)), AI utilizes search space reduction strategies such as limited look-ahead, constraint relaxation, and heuristics. 
Both groups and AI are subject to bias and faulty information: groups by their members’ beliefs and AI by the available data and training protocols. Since observability and predictability have ramifications on the level of trust (Stubbs, Hinds, and Wettergreen [2007](#bib.bib41)), groups with AI involvement must have confidence that the behavior of their AI is consistent with an acceptable common ground whatever the displayed initial beliefs of the group’s members might be  (Marathe et al. [2018b](#bib.bib28); Li, Sun, and Miller [2016](#bib.bib26); Chen and Barnes [2014](#bib.bib7); Telesford et al. [2016](#bib.bib42); Garcia et al. [2017](#bib.bib20); Marathe et al. [2018a](#bib.bib27)). Theory of mind (ToM) (Premack and Woodruff [1978](#bib.bib34); Baker, Saxe, and Tenenbaum [2011](#bib.bib2); Cuzzolin et al. [2020](#bib.bib10); Oguntola, Hughes, and Sycara [2021](#bib.bib31); Rabinowitz et al. [2018](#bib.bib35)) broadly refers to humans’ ability to represent the mental states of others, including their desires, beliefs, and intentions. This ability to attribute mental states to others is a key component of cognition that needs to be incorporated into AI in order to model and interact with humans. Human teams can display magnified cognitive capacity and unique cognitive abilities that emerge from the interaction between the team members. Collective intelligence refers to a team’s ability to produce intelligence and behaviors beyond the individual (Woolley et al. [2010](#bib.bib49)). How to integrate AI into human teams in order to produce cognitive abilities that go beyond the individual or the group of humans is an important question (Bansal et al. [2019](#bib.bib3)). Research from psychology suggests that people process uncertainty and information in general using dual processes: an implicit (automatic), unconscious process and an explicit (controlled), conscious process (Chaiken and Trope [1999](#bib.bib6)). The second process is encoded by analytic algorithms, rules, and reasoning systems, and can be modeled equally well for humans and AI agents. It is the first implicit automatic system and its interaction with the explicit system that is harder to model in humans, and poses challenges for a theoretical understanding of mixed human-AI teams. Uncertainty itself can be separated into two kinds: aleatoric and epistemic (Der Kiureghian and Ditlevsen [2009](#bib.bib13); Fox and Ülkümen [2011](#bib.bib16)). Aleatoric uncertainty refers to the notion of randomness (as in coin flipping): the variability in the outcome of an experiment that is due to inherently random effects. Epistemic uncertainty refers to uncertainty caused by a lack of knowledge of decision makers. This uncertainty can in principle be reduced by a proper recognition of expertise on teams and protocols that reveal explanations on why a fact may be true. Both aleatoric and epistemic uncertainty require teams to deal with decisions involving varying amounts of risk and reward under conditions that are not completely rational. The most successful behavioral model of risky decision making is prospect theory (Kahneman and Tversky [1979b](#bib.bib23); Tversky and Kahneman [1992b](#bib.bib44)). According to it, individuals make decisions based on the potential value of losses and gains among the set of available options. It proposes that individuals compute an internal evaluation for each prospect that is determined by a value function and a probability weighting function. 
The value function is S-shaped and asymmetrical, capturing loss aversion. The probability weighting function encodes the hypothesis that individuals overreact to small probability events, but under-react to large probability events. The theory deviates from its rational competitor, expected utility theory (Fishburn and Fishburn [1970](#bib.bib15)), which assumes that people evaluate the outcome of a decision in terms of the expected reward, independent of any cognitive biases (such as risk aversion). Other recent theories explaining individual choices under risk/uncertainty include dynamic decision models (Pleskac, Diederich, and Wallsten [2015](#bib.bib33)) such as dynamic field theory (Busemeyer and Townsend [1993](#bib.bib4)).

3 Proposed Models of Decision Making
-------------------------------------

### Decision Tasks

A team in our experiments consists of four humans and four AI agents. The accuracy of each AI agent is fixed during an experiment and ranges between 0.6 and 0.9. Humans are informed that the accuracy of each AI agent is at least 0.5. The reward for a correct answer $c_1$ is 4, the penalty for an incorrect answer $c_2$ is 1, and the penalty for consulting an AI agent $c_3$ is 1. Multiple choice questions are posed sequentially to a team. After answering every five questions, each team member reports the relative influence of each member, which collectively generates a row-stochastic weight matrix $W$ (and the influence network) for the team. Each team member also rates the accuracy of the AI agents after every five questions. There are two sequential decision tasks in our experiment:

* Decision Task 1. Integration of human responses and decision on the use of AI agents. A team first needs to integrate the decisions of the team members into a group response, and decide whether it needs to utilize an AI agent to help answer the posed question correctly. And if an AI agent is to be invoked, the team needs to decide which of the four AI agents to use.
* Decision Task 2. Integration of AI agent and human responses. If an AI agent is consulted, then the team needs to integrate its response with the prior human responses into a group response.
Figure 1: Decision making in the human-AI teams. The first decision task involves choosing one of the four options presented with each question or choosing one of the agents for help. The second decision task integrates the agent’s response with the earlier individual choices. *(In the diagram, Decision Task 1 selects option $A_{\theta_i}$ with probability $p_{\theta_i}$ or asks agent $A_{g_j}$ with probability $p_{g_j}$; Decision Task 2 then selects option $A_{\theta_i}$ with probability $q_{\theta_i}$.)*

Figure 1 shows the two decision tasks.
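To make the prospect-theoretic view of these actions concrete, the sketch below evaluates two stylized gambles from Decision Task 1 (answering directly versus consulting an agent) using the standard Tversky and Kahneman (1992) functional forms. The parameter values and the success probabilities are illustrative placeholders, not estimates from this paper; a single weighting function is used for gains and losses, and the two-stage task is collapsed into a single gamble for simplicity.

```python
def v(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman (1992) value function: concave for gains,
    convex and steeper (loss aversion) for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def w(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small probabilities,
    underweights large ones."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def prospect_value(outcomes):
    """Simplified (separable) prospect value of a gamble given as
    [(probability, payoff), ...]; full CPT would rank outcomes and apply
    separate weighting for gains and losses."""
    return sum(w(p) * v(x) for p, x in outcomes)

# Payoffs from the experiment: +4 correct, -1 incorrect, -1 for consulting an agent.
p_team_correct = 0.55    # assumed probability the team's own answer is right
p_agent_correct = 0.80   # assumed probability the consulted agent is right

answer_directly = [(p_team_correct, 4), (1 - p_team_correct, -1)]
consult_agent = [(p_agent_correct, 4 - 1), (1 - p_agent_correct, -1 - 1)]

print("answer directly:", prospect_value(answer_directly))
print("consult agent:  ", prospect_value(consult_agent))
```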
The first decision task consists of eight possible actions: the first four actions $A_{\theta_1},\ldots,A_{\theta_4}$ correspond to the decision of using one of the four choices to the posed question without consulting an agent, and the next four actions $A_{g_1},\ldots,A_{g_4}$ correspond to the decision of using an AI agent. Action $A_{\theta_i}$ is associated with probability $p_{\theta_i}$, while action $A_{g_i}$ is associated with probability $p_{g_i}$. For the second decision task, the team needs to integrate the answer of an agent with the answers of the human team members and decide on one of the four choices. In this case, the task consists of four actions: $A_{\theta_1},\ldots,A_{\theta_4}$, which are associated with probabilities $q_{\theta_1},\ldots,q_{\theta_4}$, respectively. For a team to be successful, it needs to reach consensus on which action to take in the first decision task; furthermore, if one of the agents is chosen, then the team needs to achieve consensus on one of the four actions in the second decision task. We next present a sequence of models that explain the decision making of a team. The first model uses Bayes rule to combine the individual choices into the team’s response. The second model achieves this integration through a weighting mechanism based on the eigenvector centralities of the individuals. The last two models utilize Prospect Theory to reflect how teams combine appraisal with decision making in risky environments.

### Model NB: Integration through Bayes Rule

Model NB uses Bayes rule to integrate human-AI choices into a team response. We explain the model by considering the first decision task depicted in Figure 1.
The modeling can be broken down into three steps: Step 1 considers the responses made by the individual team members, Step 2 considers the choice of an agent, and Step 3 integrates the choices made in the previous two steps into a model for the eight actions in Figure 1.

Step 1: Let $\Theta$ be the random variable corresponding to a team's collective response. It ranges over the four choices to the posed question, represented by $\theta_i$, $i=1\ldots 4$. Let $\theta^*$ be the correct choice, and let the random variable $R_i$ be the response of individual $i$. Model NB predicts the probability of the team reaching the decision on choice $\theta_k$ as follows:

$$P(\Theta=\theta_k \mid R_i, i=1\ldots 4) = \frac{P(R_i, i=1\ldots 4 \mid \Theta=\theta_k)\times P(\Theta=\theta_k)}{P(R_i, i=1\ldots 4)} \qquad (1)$$

The above equation can be reduced by assuming that the individual choices are independent given the team's choice:

$$p'_{\theta_k} = P(\Theta=\theta_k \mid R_i, i=1\ldots 4) = \frac{\left(\Pi_{i=1}^{4} P(R_i \mid \Theta=\theta_k)\right)\times P(\Theta=\theta_k)}{P(R_i, i=1\ldots 4)} \qquad (2)$$
When $R_i=\theta_k$, the term $P(R_i \mid \Theta=\theta_k)$ is assumed to equal $E[\mathbf{1}\{R_i=\theta^*\}]$, the expected performance of individual $i$. Otherwise, $P(R_i \mid \Theta=\theta_k)=\frac{1-E[\mathbf{1}\{R_i=\theta^*\}]}{3}$. In other words, a team deciding the same as an individual's choice assumes that the individual is correct, and a team deciding differently from an individual assumes that the individual is incorrect. These assumptions imply that the group discovers the truth only if some individual suggests it. The value $E[\mathbf{1}\{R_i=\theta^*\}]$ is defined by a beta distribution with two parameters that are initialized to one (leading to a uniform prior over the range $[0,1]$) and updated after each response by the individual. The prior probabilities $P(\Theta=\theta_k)$ are assumed to be $0.25$, and the term in the denominator of Eq. (2) is found by normalization. Note that $\mathbf{1}$ is the indicator function.

Step 2: The expected performance of agent $i$ is again modeled using a beta distribution. The initial values of the two parameters are chosen so that we have a uniform prior over $[0.5,1]$, to account for the fact that the accuracy of each agent is declared at the outset to be at least 0.5. The parameters are updated following each round in which an agent is consulted and its accuracy is observed. The probability of choosing agent $g_i$ is $p'_{g_i}=\frac{E[\mathbf{1}\{G_i=\theta^*\}]}{\Sigma_j E[\mathbf{1}\{G_j=\theta^*\}]}$.
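To make the bookkeeping of Steps 1 and 2 concrete, the following Python sketch shows one way to maintain the Beta-distributed performance estimates and the likelihood terms of Eq. (2). The class and function names are ours, not the authors'; the rescaling of the agent prior onto $[0.5,1]$ is one simple reading of the description above, not necessarily the authors' exact implementation.

```python
import numpy as np

class PerformanceTracker:
    """Beta-based estimate of how often a responder answers correctly.

    `lo` is the lower end of the support: 0.0 for humans (uniform prior on
    [0, 1]) and 0.5 for agents (each agent's accuracy is declared to be at
    least 0.5).  We keep a Beta(a, b) on [0, 1] and rescale its mean onto
    [lo, 1]; this is an illustrative assumption of this sketch.
    """

    def __init__(self, lo=0.0):
        self.a, self.b, self.lo = 1.0, 1.0, lo

    def update(self, correct: bool):
        # One observation of the responder being right or wrong.
        if correct:
            self.a += 1.0
        else:
            self.b += 1.0

    def expected_performance(self) -> float:
        # E[1{R = theta*}], rescaled to the tracker's support.
        return self.lo + (1.0 - self.lo) * self.a / (self.a + self.b)


def likelihood(response: int, theta_k: int, perf: float, n_options: int = 4) -> float:
    """P(R_i | Theta = theta_k) as defined below Eq. (2)."""
    return perf if response == theta_k else (1.0 - perf) / (n_options - 1)


def team_posterior(responses, trackers, n_options: int = 4) -> np.ndarray:
    """Naive-Bayes combination of individual responses, Eq. (2)."""
    post = np.full(n_options, 1.0 / n_options)            # prior P(Theta = theta_k) = 0.25
    for r, t in zip(responses, trackers):
        perf = t.expected_performance()
        post *= [likelihood(r, k, perf, n_options) for k in range(n_options)]
    return post / post.sum()                              # normalization (denominator of Eq. 2)
```

The agent-selection probabilities $p'_{g_i}$ of Step 2 can then be obtained by normalizing the agents' `expected_performance()` values.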
Step 3: Based on the probability of each action (choosing an option or an agent), we compute the expected reward for each possible action. The reward for choosing option $\theta_i$ is $x_{\theta_i}=c_1\times p'_{\theta_i}-c_2\times(1-p'_{\theta_i})$, and the reward for consulting agent $i$ is $x_{g_i}=(c_1-c_3)\times p'_{g_i}-(c_2+c_3)\times(1-p'_{g_i})$.
We use the softmax function to transform $x_{\theta_i}$ and $x_{g_i}$ into probability values:

$$p_{\theta_i}=\frac{e^{x_{\theta_i}}}{\Sigma_j e^{x_{\theta_j}}+\Sigma_j e^{x_{g_j}}}, \qquad (3)$$

$$p_{g_i}=\frac{e^{x_{g_i}}}{\Sigma_j e^{x_{\theta_j}}+\Sigma_j e^{x_{g_j}}}. \qquad (4)$$

This concludes the modeling of the first decision task.
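As a sketch of Step 3 (again with names of our own choosing, and with placeholder values for the payoff constants $c_1$, $c_2$, $c_3$, which are set by the experimental design rather than by this snippet), the rewards and the softmax of Eqs. (3)-(4) can be computed as follows.

```python
import numpy as np

def nb_action_probabilities(p_theta, p_g, c1=1.0, c2=1.0, c3=0.25):
    """Decision Task 1 of model NB: rewards for the 8 actions, then softmax.

    p_theta : probabilities p'_{theta_i} of the four options (Eq. 2)
    p_g     : probabilities p'_{g_i} of the four agents (Step 2)
    c1, c2, c3 : placeholder payoff constants (gain for a correct answer,
                 loss for an incorrect one, cost of consulting an agent).
    """
    p_theta, p_g = np.asarray(p_theta), np.asarray(p_g)
    x_theta = c1 * p_theta - c2 * (1.0 - p_theta)            # reward of answering directly
    x_g = (c1 - c3) * p_g - (c2 + c3) * (1.0 - p_g)          # reward of consulting an agent
    z = np.exp(np.concatenate([x_theta, x_g]))
    z /= z.sum()                                             # Eqs. (3) and (4)
    return z[:4], z[4:]                                      # (p_{theta_i}, p_{g_i})
```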
Next, we consider how model NB explains the second decision task. Let $g_j$ be the agent that the team consulted. We integrate the response $G_j$ of the agent as follows:

$$q'_{\theta_k}=P(\Theta=\theta_k \mid R_i, i=1\ldots 4, G_j)=\frac{\left(P(G_j \mid \Theta=\theta_k)\,\Pi_{i=1}^{4} P(R_i \mid \Theta=\theta_k)\right)\times P(\Theta=\theta_k)}{P(R_i, i=1\ldots 4, G_j)} \qquad (5)$$

When $G_j=\theta_k$, the term $P(G_j \mid \Theta=\theta_k)$ is assumed to equal $E[\mathbf{1}\{G_j=\theta^*\}]$, the expected performance of agent $j$. Otherwise, $P(G_j \mid \Theta=\theta_k)=\frac{1-E[\mathbf{1}\{G_j=\theta^*\}]}{3}$. The term $P(R_i \mid \Theta=\theta_k)$ is defined similarly (and as in Eq. (2)). Based on the above probabilities, we again compute the expected reward for each possible action.
The reward for choosing option $\theta_i$ is $y_{\theta_i}=(c_1-c_3)\times q'_{\theta_i}-(c_2+c_3)\times(1-q'_{\theta_i})$. We again use the softmax function to transform $y_{\theta_i}$ into probability values:

$$q_{\theta_i}=\frac{e^{y_{\theta_i}}}{\Sigma_j e^{y_{\theta_j}}} \qquad (6)$$
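A compact sketch of this second decision task (same caveats as above: illustrative names, placeholder constants) folds the consulted agent's answer into the posterior of Eq. (5) and then applies the reward and softmax of Eq. (6).

```python
import numpy as np

def nb_task2_probabilities(responses, human_perfs, agent_response, agent_perf,
                           c1=1.0, c2=1.0, c3=0.25, n_options=4):
    """Decision Task 2 of model NB: Eq. (5) followed by Eq. (6)."""
    def lik(resp, k, perf):
        # P(response | Theta = theta_k), as below Eqs. (2) and (5)
        return perf if resp == k else (1.0 - perf) / (n_options - 1)

    post = np.full(n_options, 1.0 / n_options)               # prior P(Theta = theta_k)
    for r, perf in zip(responses, human_perfs):
        post *= [lik(r, k, perf) for k in range(n_options)]
    post *= [lik(agent_response, k, agent_perf) for k in range(n_options)]
    q_prime = post / post.sum()                              # Eq. (5)

    y = (c1 - c3) * q_prime - (c2 + c3) * (1.0 - q_prime)    # rewards of the four options
    q = np.exp(y)
    return q / q.sum()                                       # Eq. (6)
```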
### Model CENT: Integration by Interpersonal Influence System

This model integrates individual responses through an interpersonal influence system. Each individual in the experiment is asked to rate the influence of the other members of their team after every five questions. This leads to an influence network $N$ (as shown in Figure 2) whose centralities are used to weigh the choices of the team members. Consider Decision Task 1 (Figure 1) first. The modeling is again done in three steps.

Figure 2: The appraisal networks. The shaded box is the appraisal matrix between the individuals $H_1,\ldots,H_4$ on the team; the eigenvector centralities derived from this matrix are denoted by $\delta_1,\ldots,\delta_4$. These individuals also appraise the agents $A_1,\ldots,A_4$; the $\pi_{i,j}$ values denote these appraisals.

##### Step 1:

The probability of the team choosing option $\theta_k$ is modeled as the sum of the eigenvector centrality values in network $N$ of each individual who has chosen option $\theta_k$. If the centrality value of individual $i$ is $\delta_i$, then we obtain the following for the probability of choosing option $\theta_k$:
$$p'_{\theta_k}=P(\Theta=\theta_k \mid R_i, i=1\ldots 4)=\frac{\Sigma_i \mathbf{1}_{\theta_k}(R_i)\times\delta_i}{P(R_i, i=1\ldots 4)} \qquad (7)$$

##### Step 2:

This step uses a weighting process to model the team's collective appraisal of the agents. Let the $i$-th individual's appraisal of the $j$-th agent be $\pi_{i,j}$; note that the values $\pi_{i,j}$ are collected during the experiment after every five questions. Let the eigenvector centrality value of individual $i$ be $\delta_i$, as before. We model the collective appraisal of agent $j$ as $\pi_j$ and the probability of choosing agent $g_j$ as $p'_{g_j}$:

$$\pi_j=\Sigma_i \delta_i \pi_{i,j} \qquad (8)$$

$$p'_{g_j}=\frac{\pi_j}{\Sigma_j \pi_j} \qquad (9)$$

##### Step 3:

As in Step 3 of NB, we first compute the reward for each possible action and then transform all the rewards into probabilities using the softmax function. This concludes the modeling of the first decision task.
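The following sketch illustrates Steps 1 and 2 of model CENT under our own naming conventions: it computes eigenvector centralities from a row-stochastic appraisal matrix among the four humans and uses them to weight both the option choices (Eq. 7) and the agent appraisals (Eqs. 8-9). It is an illustration of the mechanism, not the authors' code.

```python
import numpy as np

def eigenvector_centralities(W):
    """Dominant left eigenvector of the row-stochastic appraisal matrix W,
    normalized to sum to 1 (the relative influence of each team member)."""
    vals, vecs = np.linalg.eig(W.T)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

def cent_option_probabilities(responses, delta, n_options=4):
    """Eq. (7): probability of option theta_k = total centrality of the members choosing it."""
    p = np.zeros(n_options)
    for r, d in zip(responses, delta):
        p[r] += d
    return p / p.sum()

def cent_agent_probabilities(pi, delta):
    """Eqs. (8)-(9): pi is the 4x4 matrix of individual appraisals pi_{i,j} of the agents."""
    collective = delta @ pi              # pi_j = sum_i delta_i * pi_{i,j}
    return collective / collective.sum()
```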
For the second decision task, we need to integrate the response $G_j$ of the consulted agent $j$ into the collective choice of the team. This integration now (as opposed to Decision Task 1) needs to consider both the choice made by the agent and the choices made by the team prior to consulting the agent. We utilize a parameter $w$ for this integration: $w$ weighs the influence placed in the agent's response relative to the influence placed in the initial human responses. This (global) parameter is learned from a training sample of teams. We obtain the following probability $q'_{\theta_k}$ of choosing option $\theta_k$ in Decision Task 2:

$$q'_{\theta_k}=P(\Theta=\theta_k \mid R_i, i=1\ldots 4, G_j)=\frac{\mathbf{1}_{\theta_k}(G_j)\times\pi_j\times w+\Sigma_i \mathbf{1}_{\theta_k}(R_i)\times\delta_i\times(1-w)}{P(R_i, i=1\ldots 4, G_j)} \qquad (10)$$

Based on these probabilities, we compute the expected rewards and derive the probabilities of choosing the four actions as in model NB.
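A one-function sketch of Eq. (10), with names of our own:

```python
import numpy as np

def cent_task2_probabilities(responses, delta, agent_response, pi_j, w, n_options=4):
    """Eq. (10): blend the consulted agent's answer (weight w) with the humans'
    initial answers (weight 1 - w), using appraisal pi_j and centralities delta."""
    score = np.zeros(n_options)
    score[agent_response] += w * pi_j
    for r, d in zip(responses, delta):
        score[r] += (1.0 - w) * d
    return score / score.sum()
```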
### Model PT-NB: Integration through Prospect Theory and Naive Bayes

This model uses Prospect Theory to analyze the actions of the team as a set of gambles. For the first decision task, the estimation of the probabilities $p'_{\theta_i}$ and $p'_{g_i}$ (Steps 1 and 2) is exactly as in model NB. Step 3 now utilizes Prospect Theory parameters to integrate them and obtain the probabilities for the eight actions. We model the choices made by the team using five Prospect Theory parameters $(\alpha,\beta,\lambda,\gamma^{+},\gamma^{-})$, where $\alpha=\beta\in[0,1]$, $\gamma^{+/-}\in[0,1]$, and $\lambda\in[0,10]$. The gamble for option $\theta_i$ is $\{(c_1,p'_{\theta_i}),(c_2,1-p'_{\theta_i})\}$ and the gamble for the use of agent $g_i$ is $\{(c_1-c_3,p'_{g_i}),(c_2+c_3,1-p'_{g_i})\}$.
The corresponding rewards $x_{\theta_i}$ and $x_{g_i}$ are computed as follows:

$$x_{\theta_i}=c_1^{\alpha}\times\exp\!\left(-\Big(\log\tfrac{1}{p'_{\theta_i}}\Big)^{\gamma^{+}}\right)-\lambda\,|c_2|^{\beta}\times\exp\!\left(-\Big(\log\tfrac{1}{1-p'_{\theta_i}}\Big)^{\gamma^{-}}\right) \qquad (11)$$

$$x_{g_i}=(c_1-c_3)^{\alpha}\times\exp\!\left(-\Big(\log\tfrac{1}{p'_{g_i}}\Big)^{\gamma^{+}}\right)-\lambda\,|c_2+c_3|^{\beta}\times\exp\!\left(-\Big(\log\tfrac{1}{1-p'_{g_i}}\Big)^{\gamma^{-}}\right)$$

Finally, we use the softmax function to transform these rewards into probability values $p_{\theta_i}$ and $p_{g_i}$.
This concludes the modeling of the first decision task. The computations for the second decision task are similar: we use the probabilities $q'_{\theta_k}$ from model NB and the Prospect Theory parameters to compute the rewards $y_{\theta_i}$, which are then converted into probabilities $q_{\theta_i}$ using the softmax function. Note that the gambles for all four options involve the same rewards, so this results in a model similar to NB.

### Model PT-CENT: Integration through Prospect Theory and Interpersonal Influence System

The development of this model is similar to PT-NB, except that model CENT is now used for computing the probabilities $p'_{\theta_i}$, $p'_{g_i}$ and $q'_{\theta_i}$. Using the Prospect Theory parameters, we obtain the rewards $x_{\theta_i}$, $x_{g_i}$ and $y_{\theta_i}$. Finally, we use the softmax function to obtain the probabilities $p_{\theta_i}$ and $p_{g_i}$ for the eight actions in Decision Task 1 (Figure 1), and the probabilities $q_{\theta_i}$ for the four actions in Decision Task 2 (Figure 1). Again, since the gambles for the four options involve the same rewards, this results in a model similar to CENT.
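The subjective value of a two-outcome gamble under these Prospect Theory assumptions can be sketched as follows (names are ours; $c_1$, $c_2$, $c_3$ remain placeholder payoff constants). The probability weighting follows the one-parameter form $\exp(-(-\log p)^{\gamma})$ implied by Eq. (11).

```python
import numpy as np

def prob_weight(p, gamma):
    """Probability weighting w(p) = exp(-(-log p)^gamma), as used in Eq. (11)."""
    p = np.clip(p, 1e-12, 1.0 - 1e-12)   # avoid log(0)
    return np.exp(-(-np.log(p)) ** gamma)

def prospect_value(gain, loss, p_gain, alpha, beta, lam, gamma_pos, gamma_neg):
    """Subjective value of the gamble {(gain, p_gain), (loss, 1 - p_gain)},
    with a power value function and loss-aversion coefficient lam (loss <= 0)."""
    return (gain ** alpha) * prob_weight(p_gain, gamma_pos) \
        - lam * (abs(loss) ** beta) * prob_weight(1.0 - p_gain, gamma_neg)

# Mapping to Eq. (11), with hypothetical constants c1, c2, c3:
# x_theta_i = prospect_value(c1, -c2, p_prime_theta_i, alpha, beta, lam, g_pos, g_neg)
# x_g_i     = prospect_value(c1 - c3, -(c2 + c3), p_prime_g_i, alpha, beta, lam, g_pos, g_neg)
```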
4 Evaluation of Models
-----------------------

### Experimental Details

We worked with 30 teams, and 45 questions were sequentially posed to each team through the POGS (Kim et al. 2017) interface, a software system designed for conducting team experiments. Each intellective question is coupled with four alternative answers. Each question is answered in four phases, of duration 30 seconds, 30 seconds, 15 seconds and 45 seconds, respectively. In the first phase, every team member records his/her initial individual response to the question. In the second phase, the responses of every team member are displayed on the screen and the chat plugin is enabled for communication. In the next 15 seconds, the team decides whether or not to invoke the help of an AI agent (and, if so, which AI agent to invoke); only one AI agent can be invoked per question. In the fourth phase, the team submits an answer. The correct answer to each question is displayed after the answer is submitted.

Over the sequence of 45 questions, this process generates an associated sequence of nine influence networks (each denoted by a row-stochastic weight matrix $W$). Consensus is reached by a team if each member chooses the same answer or chooses to invoke the same agent for help. If an agent is consulted, then the team again needs to reach consensus on which answer to report. If consensus is not reached, then the task has failed and the team's response is deemed incorrect.

Our analysis is based on the $9\times 30=270$ occasions of team answers to the nine questions that involved a report of weight matrices. All the reported $W$ are irreducible and aperiodic structures with a unique normalized left eigenvector, which we take as the measure of each team member's relative influence centrality. (These $W$ matrices, as well as the agent appraisal matrices $\pi$, do converge in our experiments.) In turn, the influence centrality of a particular initial answer is the sum of the centralities of those members who chose it as their initial (pre-discussion) answer. Thus, the influence-predicted answer for the team is the multiple-choice alternative with the largest sum of influence centralities.

### Results

Our main experimental results concern how well the proposed models predict decision making. In order to measure this correspondence, we first need to define a measurement criterion, or loss function, that measures the correspondence between a model and the experimental data.

#### Metrics for Measuring Model Accuracy

In both decision tasks, each of our models generates a probability distribution over the actions (eight actions in the first decision task and four in the second). In the experimental data, we do not have the probability values the team computed, only the action chosen. As a result, we need to consider the distance between the model's predicted distribution and all possible data distributions in which the observed action is the dominant choice. We focus on measuring the smallest distance from a model to the family of possible distributions that fit the team's action. The other possibility, using an expected distance, is considerably more complex, since we would need to assign probabilities to each feasible distribution. Let $q$ be the specific distribution produced by a model, with its maximum at action $j$, and let action $i$ be the action chosen by the team.
We consider all possible distributions $p$ in which the value of $p_i$ is at least as high as every $p_k$, and then compute the distance between the distributions to measure the correspondence. We adopt the cross-entropy $H$ as the measure of correspondence. Since cross-entropy is not symmetric, this gives us two possibilities. First, the loss function $L^{(1)}$ minimizes the entropy $H(q,p)$:

$$L^{(1)}=\min_{p\ \text{s.t.}\ \forall k:\,p_k\leq p_i}\ -\sum_{m=1}^{n} q_m\log(p_m) \qquad (12)$$

(We have also considered the loss function $L^{(2)}$, which minimizes $H(p,q)$, with similar results.) Instead of generating all such distributions and searching for the optimal $p$, we obtain an analytical solution. The distribution $p$ that minimizes the loss function can be computed as follows:

$$i=j \implies p=q$$

$$i\neq j \implies p_k=\begin{cases}(1/n_H)\sum_{k'\in q_H} q_{k'}, & \text{if } k\in q_H\\ q_k, & \text{otherwise,}\end{cases}$$

where $q_H$ consists of those indices $k$ for which $q_k\geq q_i$ and $n_H$ is the size of $q_H$.
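A direct implementation of this analytical solution and of the loss $L^{(1)}$ might look as follows (our own code, written from the description above):

```python
import numpy as np

def loss_L1(q, chosen):
    """Minimal cross-entropy H(q, p) over all distributions p whose largest
    component is at the team's chosen action (Eq. 12), via the closed form."""
    q = np.asarray(q, dtype=float)
    if np.argmax(q) == chosen:
        p = q.copy()                               # i = j  =>  p = q
    else:
        p = q.copy()
        high = q >= q[chosen]                      # the indices q_H with q_k >= q_i
        p[high] = q[high].sum() / high.sum()       # spread their mass evenly
    p = np.clip(p, 1e-12, None)
    return -np.sum(q * np.log(p))

# Example: the model favors action 1 but the team chose action 0.
print(loss_L1([0.2, 0.5, 0.2, 0.1], chosen=0))
```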
Random Baseline: In order to assess the statistical significance of the models, we also compare against a baseline that makes a uniformly random choice among the possible actions at each step. For Decision Task 1, the probability of each action is 0.125, and for Decision Task 2, the probability of each action is 0.25.

#### Learning Parameters

For models PT-NB and PT-CENT in the first decision task, we use the loss functions to learn the five Prospect Theory parameters. We use a training set of 30 questions to learn these parameters by grid search for each team: we vary the values of $\alpha$, $\beta$ and $\gamma^{+/-}$ from 0 to 1 in steps of 0.1 and the value of $\lambda$ from 0 to 10 in steps of 1. Using these parameters, we validate the models on the remaining 15 questions in the dataset for each team. For the second decision task, since the gambles for all four options involve the same rewards, models PT-NB and PT-CENT reduce to models NB and CENT, respectively, so we only report the results of the latter two models. Parameter $w$ in Equation (10) is learned on a random set of 20 teams, and the results are validated on the remaining 10 teams.
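A plain grid search over the Prospect Theory parameters, as described above, could be sketched as follows. Here `evaluate_loss` stands in for whatever routine scores a parameter setting on a team's 30 training questions (e.g., the mean of `loss_L1` from the earlier sketch over those questions); it is an assumption of ours, not part of the paper.

```python
import itertools
import numpy as np

def fit_prospect_parameters(evaluate_loss):
    """Grid search over (alpha = beta, lambda, gamma+, gamma-) as in the paper:
    alpha, beta, gamma in {0.0, 0.1, ..., 1.0}; lambda in {0, 1, ..., 10}.
    `evaluate_loss(alpha, beta, lam, g_pos, g_neg)` -> mean training loss."""
    grid01 = np.round(np.arange(0.0, 1.01, 0.1), 1)
    best, best_loss = None, np.inf
    for ab, lam, g_pos, g_neg in itertools.product(grid01, range(11), grid01, grid01):
        loss = evaluate_loss(ab, ab, float(lam), g_pos, g_neg)   # alpha = beta = ab
        if loss < best_loss:
            best, best_loss = (ab, ab, float(lam), g_pos, g_neg), loss
    return best, best_loss
```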
#### Validation of Models

For the first decision task in Figure 1, the loss values of each model are given in Table 1(a). As mentioned earlier, the values are averaged over the 30 teams over the last 15 questions. The Wilcoxon signed-rank test (Wilcoxon 1992) shows that the results of the four models are significantly superior to those of the random baseline (significance level < 0.01). Table 1(b) shows that the appraisal-based models NB and CENT perform similarly in explaining a human-AI team's decision making. The models PT-NB and PT-CENT also perform similarly, and both are superior to the previous two models (Wilcoxon signed-rank test, significance level < 0.01). This implies that modeling the inherent risk in decision making leads to a superior model.

| Model | Loss $L^{(1)}$ |
| --- | --- |
| NB | 1.08 ± 0.16 |
| CENT | 1.14 ± 0.14 |
| PT-NB | 0.55 ± 0.19 |
| PT-CENT | 0.57 ± 0.17 |
| RANDOM | 2.08 ± 0 |

| Wilcoxon signed-rank test | $p$-value for $L^{(1)}$ |
| --- | --- |
| W(NB, CENT) | 0.02 |
| W(NB, PT-NB) | < 0.01 |
| W(NB, PT-CENT) | < 0.01 |
| W(CENT, PT-NB) | < 0.01 |
| W(CENT, PT-CENT) | < 0.01 |
| W(PT-NB, PT-CENT) | 0.24 |

Table 1: (a) The loss values of (NB, CENT) are similar to each other, and so are the loss values of (PT-NB, PT-CENT). The latter two models are superior, suggesting that teams consider both risk/reward and appraisal in their decision making. (b) The models PT-NB and PT-CENT are significantly better than NB and CENT. (Wilcoxon signed-rank test; the null hypothesis is that the two related paired samples come from the same distribution.)

Next, we consider the second decision task in Figure 1. Since the risk/reward of each action is the same, model PT-NB becomes similar to NB, and PT-CENT to CENT; thus, we only report the values for NB and CENT. Since the AI agents are consulted about 25.7% of the time, there are fewer samples for this decision task. We learn the parameter $w$ on a random set of 20 teams and present the validation results for the remaining 10 teams in Table 2(a). The parameter $w$ is learned to be 0.9, suggesting that the teams consult the agents only when they are unsure of their own choices and, as a result, place a high degree of trust in the agent's answer. Both models are superior to the random baseline (Wilcoxon signed-rank test, significance level < 0.01).

For the second decision task (when an AI agent is consulted), we also investigate how teams integrate the responses from the AI agents into their deliberations. For this, we designed two other models for this decision task: one in which the best human response is returned (neglecting the agent's response) and another in which the agent's response is returned (neglecting the human responses). Models NB-H and CENT-H are models of the former kind, while NB-A and CENT-A are models of the latter kind. As can be seen from Table 2(a), models that rely only on human responses (NB-H and CENT-H) do not explain the team behavior, suggesting that teams do integrate the agents' responses. The good performance of the agent-only models (NB-A and CENT-A), as well as the high value of parameter $w$ (learned to be 0.9), suggests that teams rely much more on the agent's response in this decision task. This can be explained by the fact that teams proceed to the second decision task involving an AI agent only if they are unsure of the correct choice.

| Model | Loss $L^{(1)}$ |
| --- | --- |
| NB | 0.47 ± 0.11 |
| NB-H | 0.91 ± 0.20 |
| NB-A | 0.49 ± 0.14 |
| CENT | 0.33 ± 0.06 |
| CENT-H | 0.95 ± 0.17 |
| CENT-A | 0.74 ± 0.20 |
| RANDOM | 1.39 ± 0 |

| Wilcoxon signed-rank test | $p$-value for $L^{(1)}$ |
| --- | --- |
| W(NB, NB-A) | 0.02 |
| W(NB, NB-H) | < 0.01 |
| W(NB-A, NB-H) | < 0.01 |
| W(CENT, CENT-A) | < 0.01 |
| W(CENT, CENT-H) | < 0.01 |
| W(CENT-A, CENT-H) | < 0.01 |

Table 2: (a) The models NB and CENT are the best at predicting the actions of a team.
Models that rely on the human responses only (NB-H and CENT-H) do not do as well, suggesting that the agent's response is integrated into the final decision. Models that rely on the agent only (NB-A and CENT-A) also perform well, suggesting that the agent's response is weighed heavily in this decision task. (b) The models NB and CENT are significantly better than NB-H and CENT-H, respectively. Models NB-A and CENT-A are also significantly better than NB-H and CENT-H, respectively.

5 Conclusions
--------------

Our experimental setting provides a unique platform for understanding how human-AI team members learn each other's expertise and how they coordinate to solve intellective tasks in uncertain and risky environments. We proposed four models to explain the dynamics of human-AI team decision making. We find that, although the appraisal-based models NB and CENT perform adequately in explaining a human-AI team's decision making, the Prospect Theory based models PT-NB and PT-CENT provide more accurate explanations. This finding establishes the importance of modeling the inherent risk in uncertain decision making. Our results suggest a number of possibilities for future studies, including the use of actively participating AI agents and the use of reinforcement learning to model the decision making.
Rationality Quotes April 2012 Here's the new thread for posting quotes, with the usual rules: * Please post all quotes separately, so that they can be voted up/down separately.  (If they are strongly related, reply to your own comments.  If strongly ordered, then go ahead and post them together.) * Do not quote yourself * Do not quote comments/posts on LW/OB * No more than 5 quotes per person per monthly thread, please.
Newcomb's Problem standard positions Marion Ledwig's dissertation summarizes much of the existing thinking that's gone into Newcomb's Problem. (For the record, I myself am neither an evidential decision theorist, nor a causal decision theorist in the current sense.  My view is not easily summarized, but it is reflectively consistent without need of precommitment or similar dodges; my agents see no need to modify their own source code or invoke abnormal decision procedures on Newcomblike problems.)
2. Premise two: Some cases of value change are (il)legitimate

In the prior post, I have defended the claim that genuine value change is possible, and thus, that a realistic account of human values understands them to be malleable. In this section, I will argue for the claim that *some cases of value change are legitimate while others are illegitimate*. In other words, I argue that, at least in principle, something substantial is to be said about what types of value change are legitimate vs. illegitimate. Let us call this the ‘value change legitimacy’ claim (VCL). [1] To do so, I first explain what I mean by value change legitimacy. Then, I make an appeal to intuition or common sense by providing a handful of examples that, I expect, most people would not hesitate to accept as examples of both legitimate and illegitimate cases of value change. Finally, I suggest a plausible evaluative criterion for identifying (il)legitimate value change, which provides further rational grounding for the common sense intuitions invoked earlier, as well as a starting point for developing a comprehensive account of value change legitimacy.

([1]: To clarify the scope of the VCL claim, let me briefly clarify what I am *not* trying to make claims about. First, I am not trying to make claims about whether all cases of value change are either legitimate or illegitimate (i.e., whether legitimacy/illegitimacy is a collectively exhaustive way to classify cases of value change). Second, I don't mean to exclude the possibility that legitimacy comes in degrees, or that there might exist grey areas with respect to whether a given case is (il)legitimate.)

Clarifying the notion of value change legitimacy
================================================

First, and maybe most important, value change legitimacy, as I mean to propose it here, is a *procedural* notion. In other words, in asking about value change legitimacy, I am asking about the way in which a given value change has *come about*. This is in contrast to asking whether the value change *as such* is morally good or bad. As we have seen in the former post, the latter question is confusing (and maybe confused) because the value change itself implies a change of the evaluative framework. As a result, it is unclear on what basis the goodness or badness of the value change *as such* should be evaluated. However, I claim that there still exist morally relevant differences between different cases of value change (other than the moral status of the value change as such) -- in particular, the procedural question: whether the manner in which the value change has *come about* conforms with certain normative standards that make the change acceptable, unproblematic and thus legitimate, or, on the other hand, objectionable, problematic and thus illegitimate.

(The choice of the word ‘legitimacy’ might seem confusing or unfortunate to some. While I am happy to hear better suggestions, it seems worth clarifying that I chose this term with reference to how ‘legitimacy’ is typically used in political philosophy, where it refers to procedural properties of political institutions. According to me, this analogy goes surprisingly deep, but I will have to leave exploring it in more detail to another time.)

Of course, in practice, it may not always be clear whether specific cases of value change are legitimate or not, or how, in general, we ought to decide on what counts as legitimate vs. illegitimate. In fact, these questions will be subject to disagreement and rational deliberation.
For the purposes of the argument in favour of the Value Change Problem, it suffices for me to establish that there is *something* substantive to be said about the difference between legitimate and illegitimate cases of value change, that the difference exists, even if important questions remain about how *exactly* to do so.

That said, I will in the latter part of this post put forth a specific proposal as to what we might mean by legitimacy -- namely, the degree of self-determination involved in the process of value change. While I do believe value self-determination is a critical aspect of value change legitimacy, I do not think my proposal provides close to a comprehensive account of value change legitimacy, able to deal satisfactorily with the wide range of practical intricacies that arise. For example, for future work, I am interested in ‘stress testing’ my current account of legitimacy by looking at cases in, e.g., parenting and education, and using the resulting insights to build on and improve the current, provisional account. As such, I suggest that the proposal put forth below be understood in the spirit of providing a productive starting point, rather than an end point.

The case for value change legitimacy
====================================

Argument from intuition/common sense
------------------------------------

Having clarified what we mean by legitimacy in the context of value change, let us now explore the case for VCL.

I will start by describing two examples that I believe people will widely agree represent cases of both legitimate and illegitimate value change -- that is, defending VCL by providing an existence proof of sorts. As such, let us consider the following examples of value change.

First, consider Daniel. Daniel does not currently hold a deep appreciation for jazz. For example, when he recently accompanied his friend to a concert (herself an ardent jazz lover), he secretly fell asleep for some parts of the performance due to boredom. However, Daniel has an inkling that there may be something deeply valuable about jazz that he has not yet come to fully apprehend. This motivates him to spend several weeks attentively listening to jazz music and going to more concerts. While he initially struggles to pay attention, over time, his experience of the music starts to change until, eventually, Daniel comes to deeply appreciate jazz, just like his friend does.

On the other hand, consider Elsa. Elsa, too, does not initially have an appreciation of jazz and also comes to love it. In her case, however, the change is the result of Elsa joining a cult which, as a central pillar of its ideology, venerates a love of jazz. The cult makes use of elaborate means of coercive persuasion, involving psychological techniques as well as psychoactive substances, in order to get all of its members to appreciate jazz.

Each of these is a case of value change as characterised earlier. However, and I would argue most people would agree, there are morally significant differences between these cases: while Daniel's case appears (largely) unproblematic and legitimate, Elsa's appears (largely) problematic and illegitimate. To put it another way, it seems to me that we would lose something important if we were to deny that there are morally relevant differences between these cases which are *not* reducible to the nature of the value change (in this case the love for jazz).
We want to be able to point at Elsa’s case of value change and argue that it is problematic and should be prevented, and we want to be able to say that Daniel’s case of value change is fine and does not need to be prevented, without in either case basing our argumentation on whether or not loving jazz is morally acceptable. As such, I argue that the relevant difference we are picking up on here pertains to the legitimacy (or lack thereof) of the value change *process* (in the sense I've described it above).

So far, so good. But, beyond appealing to common sense, can we say anything substantive about what makes these cases different?

Argument from plausible mechanism
---------------------------------

I suggest that a key difference between Daniel's and Elsa's examples lies in the process by which the value change has been brought about, in particular in the extent to which the process was *self-determined* by the person who undergoes the change, and the extent to which the person remains able to ‘course-correct’ the unfolding of the process (e.g., slow, halt or redirect it) if she so chooses.

To illustrate this, let's first consider Daniel's case. This case of value change appears unproblematic—a case of legitimate value change—in that the transformational process occurs of Daniel's own free volition, and at any point, he could have chosen to discontinue his engagement in said aspirational process. His friend did not force him to engage with jazz; rather, Daniel held proleptic reasons[[1]](#fna0bk6k4v21) for engaging more with jazz—an inkling, so to speak, for what would later turn into his full capacity to value jazz.

By contrast, Elsa’s ability to engage in the unfolding of her value transformation freely and in a self-determined fashion was heavily undermined by the nature of the process. Even if she might have chosen the first (few) interactions with the cult freely, the cult’s sophisticated use of methods of manipulation, indoctrination or brainwashing deliberately exploits Elsa’s psychological make-up. As such, the resulting change—independent of what specific beliefs and values it results in—is problematic due to the way it was brought about, which supports our intuition that this is a case of illegitimate value change.

To test this idea slightly more, let's consider the case of Finley who, just like Daniel and Elsa, also ends up falling in love with jazz. In Finley's case, they find themselves, as a result of the workings of a content recommender system, consuming a lot of videos about the joys of jazz. Starting out, Finley did not hold any particular evaluative stance towards jazz; a few weeks later, however, they had become obsessed with it, started to frequent concerts, read books on jazz, and so on.

Compared to Daniel and Elsa, I think Finley's case is more subtle and ambiguous with respect to value change legitimacy. On the one hand, Finley’s process of value change does not involve the same level of active and self-determined engagement as Daniel's. Finley did not (by stipulation in the example) start out with an inkling for the value of jazz, as is characteristic of an aspirational process according to Callard. Rather, they were passively exposed to information which then brought about the change. Furthermore, the recommendation algorithm is arguably shaped more by the economic incentives of the company than by the primary purpose of exposing Finley to new experiences and perspectives.
Finally, content recommendation platforms have some potential to cause compulsive or addictive behaviour in consumers by exploiting the human psychological make-up (e.g., sensitivity to dopamine stimuli). All of these factors can be taken to weaken Finley’s ability to reflect on their current level of jazz video consumption and values, and to ‘course-correct’ if they wanted to. At the same time, while the recommender platform might be said to have weakened Finley's ability to self-determine and course-correct the process autonomously, this occurred at a very different level than the coercive persuasion experienced by Elsa. As such, this case appears neither clearly legitimate nor clearly illegitimate, but carries some aspects of both.

This goes to show that referring to self-determination alone does not settle all we need to know about what does and does not constitute legitimate value change. As mentioned above, in future work, I am interested in stress testing and building on this preliminary account further.

1. **[^](#fnrefa0bk6k4v21)** Callard (2016) defines proleptic reasons as reasons which are based on value estimates which the reasoner cannot fully access yet, even if they might be able to partially glean them, i.e. an "inchoate, anticipatory, and indirect grasp of some good" (2016, p. 132).
9fa1fca0-3a37-4ed0-a21f-b091914b3c33
trentmkelly/LessWrong-43k
LessWrong
Dissolving Deep Questions: A Decline in Contemporary Controversy I'm practicing dissolving questions. For some of these questions, there's no dispute as to the nature of the facts, and people are just arguing about what frame to hang life's picture in, what words to use to describe it. For others, there are real factual disputes hiding behind these semantic squabbles, and this technique lets us get past disputing definitions and graduate to mapmaking. Most of these questions are *actually argued* in prominent venues, and my intention is to give a response that if presented in such a venue, would leave the combatants with nothing to say, or if not, with a concrete and tractable problem. Potential for progress. Please comment any other controversies this technique would benefit. The key in every case is to look at reality as it is, and then dispute *that* if there's disagreement, letting fade into obscurity the relatively trivial question of what words to use to describe the piece of reality in question (at least until that greater problem is given its due). The tree in the forest question seems to me a good place to demonstrate the principle at first since it's not politicized. Perhaps in real venues, it would be useful to give an uncontroversial example of the technique before applying it to the controversy at hand. 1. If a tree falls in a forest, and no one is around to hear it, does it make a "sound?" Some say sound is a series of vibrations, others that it is an auditory experience. Regardless of which word you use, the reality of the situation is that the tree will fall over, make vibrations, and there will not be any auditory experiences. That is the entirety of the situation, there is no disagreement. Now, you can argue about whether or not we should describe this with the word “sound” or “noise,” or “fershizzleplumf,” but let’s all be aware that we’re arguing about which word to use to describe reality; we’re not arguing about what reality is truly like. 2. Is Islam a "religion of peace?" This simply depends on w
41569afe-4fbf-4375-9f1c-ac5aed4d1e32
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Politico article on Open Phil, Horizon Fellowship, and EA My apologies if this has been posted elsewhere! The article basically suggests that the Horizon Policy Fellows might be sympathetic to big AI Labs because of their ties to Open Philanthropy, and/or that Fellows might be interested in types of long-term risk that aren’t important right now. There are also a lot of implications that Open Philanthropy and EA are suspicious and are extending shadowy tendrils into government policy.
c831cbc5-4e47-4db3-8038-b6c9a33b3be8
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Black Box Investigations Research Hackathon TLDR; [**Join the Black Box Investigations Research Hackathon**](https://itch.io/jam/llm-hackathon) to participate with others in a weekend of alignment research on language models using a [cognitive psychology-style approach](https://www.lesswrong.com/posts/yGaw4NqRha8hgx5ny/the-case-for-becoming-a-black-box-investigator-of-language). Purpose ------- When you join the Black Box Investigations Research Hackathon, the goal is to explore where black box language models work well and where they break down. It is inspired by [this post by Buck Shlegeris](https://www.lesswrong.com/posts/yGaw4NqRha8hgx5ny/the-case-for-becoming-a-black-box-investigator-of-language) which calls for investigators of language models in a cognitive psychology style fashion. Become the AI psychologist of the future. The meta-goal of this [Alignment Jam](https://apartresearch.com/jam) is to investigate if hackathons work well for 1) providing real research results and ideas, 2) increase the research confidence of the participants, and 3) being a scalable solution to exciting ways of joining the alignment community to work on interesting and well-defined problems. Outcome ------- Participants are encouraged to discover interesting behavior of black box language models. At the end of the hackathon each group / participant should have created a short (<5 min) presentation showcasing their results. A jury will then vote for their favorite projects according to the review criteria specified in the appendix. After the hackathon, we will write a LessWrong post (or series of posts) describing the main findings and the overall experience. We hope to encourage more people to do this kind of work and we believe the hackathon format works well for generating novel research perspectives on alignment.  If this hackathon is successful, we expect to repeat the success with other [research agendas](https://aisafetyideas.com/projects). Participation instructions -------------------------- It is [hosted on the hackathon platform itch.io](https://itch.io/jam/llm-hackathon) where you will follow the instructions to sign up (see [here](https://itch.io/jam/llm-hackathon)): Make an itch.io account, click “Join jam”, and come along on the 30th of September! We will work all weekend in Gathertown in groups of 2-6. The link to Gathertown will be announced in the [itch.io community tab](https://itch.io/jam/llm-hackathon/community) and sent to your itch.io email when we get closer to the date. You can ask questions [here](https://itch.io/jam/llm-hackathon/community). When the hackathon starts, you will be able to continually upload new PDF / Google Doc / Colab / Github repositories during the extent of the research competition. For interacting with the black box models, we will use [20b.eleuther.ai](https://20b.eleuther.ai/), [beta.openai.com/playground](https://beta.openai.com/playground), and you will receive research API credits to work directly with the APIs of OpenAI. There will be premade scripts to initialize your research project. We encourage you to sign up to [beta.openai.com](https://beta.openai.com/) before and experiment with your free credits. This should only take 2-3 minutes. You can get inspiration from [aisafetyideas.com](https://aisafetyideas.com/project/black-box-investigation) or from the list of previous research related to black box investigation on the [itch.io page](https://itch.io/jam/llm-hackathon). 
Asks
----

**Funders:** You can add more to the prize pool through [super-linear.org](https://www.super-linear.org/) if you’re a funder. This will enable us to provide more credits for working with the largest language models, provide hackathon merch, and increase the prize pool.

**Jury, mentor, or speaker:** Contact us [on our Discord](https://apartresearch.com/join) or by email [esben@apartresearch.com](mailto:esben@apartresearch.com) if you are interested in mentoring, joining the jury, or giving a talk.

Appendix
--------

Reviewer criteria
-----------------

Each submission will be evaluated by a group of judges on a 1-10 scale for the different qualities listed below.

| **Criterion** | **Weight** | **Description** |
| --- | --- | --- |
| Alignment | 2 | How good are your arguments for how this result informs the long-term alignment of large language models? How informative are the results for the field in general? |
| AI Psychology | 1 | Have you come up with something that might guide the “field” of AI Psychology in the future? |
| Novelty | 1 | Have the results not been seen before and are they surprising compared to what we expect? |
| Generality | 1 | Do your research results show a generalization of your hypothesis? E.g. if you expect language models to overvalue evidence in the prompt compared to their training data, do you test more than just one or two different prompts? A top score might be a statistical testing of 200+ prompt examples. |
| Reproducibility | 1 | Are we able to easily reproduce the research and do we expect the results to reproduce? A high score here might be a high Generality score and a well-documented GitHub repository to rerun all experiments from the report. |
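For anyone curious how these weights might cash out in practice: the post does not specify exactly how the judges' per-criterion scores are combined, but a natural reading is a weighted average. The sketch below assumes that reading; the criterion names and weights come from the table above, while the aggregation rule and the example scores are illustrative assumptions only.

```python
# Illustrative sketch only: combine 1-10 judge scores into a weighted average,
# using the weights from the reviewer criteria table above. The aggregation
# rule itself is an assumption, not something specified by the organizers.
WEIGHTS = {
    "Alignment": 2,
    "AI Psychology": 1,
    "Novelty": 1,
    "Generality": 1,
    "Reproducibility": 1,
}

def weighted_score(scores: dict) -> float:
    """Weighted average of per-criterion scores (each on a 1-10 scale)."""
    total_weight = sum(WEIGHTS.values())
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS) / total_weight

# Hypothetical submission: strong on alignment relevance and reproducibility,
# weaker on generality.
example = {
    "Alignment": 8,
    "AI Psychology": 6,
    "Novelty": 7,
    "Generality": 4,
    "Reproducibility": 9,
}
print(round(weighted_score(example), 2))  # 7.0
```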
502f1df7-a6bd-4318-896d-8c18a83c2d14
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
By failing to take serious AI action, the US could be in violation of its international law obligations

“Long-term risks remain, including the existential risk associated with the development of artificial general intelligence through self-modifying AI or other means”. 2023 Update to the US National Artificial Intelligence Research and Development Strategic Plan.

**Introduction**

The United States has yet to take serious steps to govern the licensing, setting up, operation, security and supervision of AI. In this piece I suggest that this failure could place it in violation of its obligations under Article 6(1) of the International Covenant on Civil and Political Rights (ICCPR). By most accounts, the US is the key country in control of how quickly we have artificial general intelligence (AGI), a goal that companies like OpenAI have been very open about pursuing. The fact that AGI could carry risk to human life has been detailed in various fora and I won’t belabor that point. I present this legal argument so that those trying to get the US government to take action have additional armor to call on.

**A. Some important premises**

The US signed and ratified the ICCPR on June 8 1992.[1] While it has not ratified the Optional Protocol allowing for individual complaints against it, it did submit to the competence of the Human Rights Committee (the body charged with interpreting the ICCPR) where the party suing is another state. This means that although individuals cannot bring action against the US for ICCPR violations, other states can.

As is the case for domestic law, provisions of treaties are given real meaning when they’re interpreted by courts or other bodies with the specific legal mandate to do so. Most of this usually happens in a pretty siloed manner, but international human rights law is famously non-siloed. The interpretive bodies determining international human rights law cases regularly borrow from each other when trying to make meaning of the different provisions before them. This piece is focused on what the ICCPR demands, but I will also discuss some decisions from other regional human rights courts because of the cross-fertilization that I’ve just described.

Before getting to my argument, there are a few crucial premises you have to appreciate. I will discuss them next.

**(i) All major human rights treaties, including the ICCPR, impose on states a duty to protect life**

In addition to the ICCPR, the African Charter, European Convention and American Convention have all given states a duty to protect life.[2] As you might imagine, the existence of the actual duty is generally undisputed. It is when we get to the specific content of the duty that things become murky.

**(ii) A state’s duty to protect life under the ICCPR can extend to citizens of other countries**

The Human Rights Committee (quick reminder: this is the body with the legal mandate to interpret the ICCPR) has made it clear that this duty to protect under the ICCPR extends not only to activities which are conducted within the territory of the state being challenged but also to those conducted in other places – so long as the activities could have a direct and reasonably foreseeable impact on persons outside the state’s territory. The fact that the US has vehemently disputed this understanding[3] does not mean it is excused from abiding by it.
**(iii) States’ duties to protect life under the ICCPR require attention to the activities of corporate entities headquartered in their countries** Even though the US protested the move,[4] the Human Rights Committee has been clear that the duty to protect extends to protecting individuals from violations by private persons or entities,[5] including activities by corporate entities based in their territory or subject to their jurisdiction.[6] Other regional bodies that give meaning to international human rights also agree: The European Court of Human Rights (European Court) has said states have to keep an eye on acts of third parties and non-State actors[7] while the African Commission has provided more outrightly that the state can be held liable for violations by non-State actors, including corporations.[8] **(iv) The duty to protect life can be violated without death occurring** There seems to be consensus among international human rights bodies (including the Human Rights Committee) that for a violation of the duty to protect life to be established, the risk does **not** need to have materialized. The part that follows will have more on this.  **B. How the duty to protect life has been interpreted, and how the US government could be in violation of Article 6(1) of the ICCPR** The Human Rights Committee has interpreted Article 6(1) of the ICCPR as *establishing a positive obligation to protect life, and specifically a duty to take adequate preventive measures to protect individuals from reasonably foreseeable threats.*[9] As such, to convincingly demonstrate that the US (quick reminder: it is a State Party to the ICCPR) is in violation of this provision, each element in Article 6(1) has to be satisfied, step-by-step. This is what I’m going to show you next.  **(i) It is reasonable for us to expect the US government to be aware that AGI could cause existential catastrophe** For a state to be held liable under Article 6, the threat has to be *reasonably* foreseeable. The Human Rights Committee hasn’t really told us what that means, but its peer institutions like the European Court have held that this question of reasonableness must be answered in light of all the circumstances of each case.[10] In many other fields of law, the reasonableness of conduct depends on what an ordinary actor would do if they had the information and resources available to the actor facing legal challenge. Now to our specific situation. Let’s assume that we can’t show that the US government has some secret information about the development of AI towards AGI. If this is so, I think the Human Rights Committee would probably consider: whether the state has previously indicated knowledge that the activity in question could cause death and if not, whether there is widespread agreement among reasonable people that current AI development could indeed lead to AGI and AGI could cause death, or whether there is widespread agreement among experts that current AI development could lead to AGI and AGI could cause death.  For those of us prosecuting the AI-could-cause-existential-risk argument, this standard of reasonableness may actually not be a very difficult one to meet. It is true that there is no large-scale consensus among ordinary people that the maturity of AGI will carry a risk to life. It is also true that we are far away from a consensus among experts. But it does seem that the US Government appreciates the possibility that current AI development could lead to AGI and AGI could cause death. 
Here is the smoking gun that I came across recently in the White House’s 2023 Update of the US’s National AI Strategic Plan. The update noted that “Long-term risks remain, including the existential risk associated with the development of artificial general intelligence through self-modifying AI or other means”. This surely means that it is reasonable for us to expect that the US Government is aware of x risk via AGI.

I suppose this argument would fail if: (a) one can show that this isn't the official position of the US government or (b) “existential risk” as used in the report was meant to mean something less than death. I think counterargument (a) just can’t fly given that it is literally a report written by a White House office. I’m not entirely certain about counterargument (b) but I think [the UK government’s understanding](https://www.theguardian.com/technology/2023/may/26/rishi-sunak-races-to-tighten-rules-for-ai-amid-fears-of-existential-risk) (which seems to be that x risk = death) gives us additional circumstantial evidence to show that the US government’s understanding is likely to be similar.

**(ii) The x risk that AGI carries is foreseeable to the US government**

For my argument to make sense, the next legal element that has to be satisfied is that of "foreseeability". Unfortunately, the Human Rights Committee has not told us what exactly “foreseeable” means. Yet I would suggest that once we’ve demonstrated that the US Government appreciates that AGI could lead to a significant number of deaths (again, see the quote in the paragraph right before this one), and since it’s not in doubt that they know OpenAI and other entities are building towards AI with the goal of creating AGI, then surely the foreseeability of the threat has already been shown.

**PS: even the “real and immediate” standard offers a path**

For the sake of argument (and because we are not very clear what "foreseeable" means to the Human Rights Committee) let us take up an even more difficult but well-elaborated standard. Other international human rights courts and commissions seem to have understood the duty to protect life as imposing a positive duty to take preventive measures to protect the life of an individual or a group of individuals from *a real and immediate/imminent risk*.[11] If we take a literal reading of Article 6, it seems as though the “foreseeable threat” standard may be less exacting to meet than a standard that requires proof of “real and immediate” risk. I think this means that if we can prove real and immediate risk then we will most likely have satisfied the “foreseeable threat” standard as well. Let’s go there then.

According to the European Court, the real and immediate standard requires that we prove: (i) there is a real and immediate risk to an identified individual or individuals from the acts of a third party, (ii) the authorities knew or ought to have known of the existence of that risk and (iii) that the authorities failed to take reasonable measures to avoid that risk.[12] The European Court has also said that this obligation exists where the risk is to an identified individual(s) or the general society.[13]

But what is a real and immediate risk?
Well, the European Court has previously found a risk to be immediate despite the risk **having been in existence long before it materialised**.[14] Other than that, one dissenting opinion described a real and immediate risk as one which is ‘substantial or significant’, ‘not remote or fanciful’ and ‘real and ever-present’.[15] No decisions have given us detail about what ‘real’, ‘substantial’ and ‘significant’ entail.

Satisfying this standard will probably come down to how ‘real’ we can show x risk from AI to be. This is a complex test to meet. I think it carries both subjective and objective elements. That is, the risk to life needs to be (a) self-evident to ordinary people but also (b) widely recognized by experts. I’m confident that the arguments that AGI could easily be misaligned and could pursue goals antithetical to the survival of humanity are very powerful and – when explained carefully in a step-by-step manner – would meet this standard of ‘realness’ and ‘significance’. However, the fact that some respected experts consider this argument to be a crackpot idea definitely means the ‘realness’ of the threat isn’t that obvious.

One occurrence that I think really helps to fortify my claim that the "realness" requirement has been satisfied is the rise of large language models. It seems as though it is the capabilities of ChatGPT that really made the possibility of AGI seem more real to ordinary people, policymakers and experts. For this reason, I would argue that LLMs help to prove the legal standard of "realness". In other words, I'm claiming that under this legal standard the US's obligations under Article 6(1) were triggered when ChatGPT was released because it was then that the realness threshold was met.

**(iii) There are adequate preventive measures that the US government could take to stem x risk from AI**

The final legal element to satisfy is whether there are any adequate preventive measures that the US could take in light of the circumstances. The Human Rights Committee hasn't elaborated on the precise meaning of “adequate preventive measures” as used in Article 6(1) of the ICCPR. However, we can find useful guidance about this from some of the European Court’s decisions. In a 2020 case, the Court noted that once an activity is found to carry a risk for human life, states must create regulations "geared towards the special features of the activity in question" and with "special attention to the level of potential risk to human lives". Even more interestingly, the Court said that regulations are expected to govern licensing, setting up, operation, security and supervision of the activity in question and “must make it compulsory for all those undertaking the activity to ensure the effective protection of citizens whose lives might be endangered by the inherent risks”.

At a minimum, I think the Human Rights Committee would endorse this understanding of adequate preventive measures if only because these are anodyne and easy-to-take steps within the reach of any government. Indeed, we know that regulation around licensing etc. is surely well within the capability of the US government. For that reason, it has an obligation – at a minimum – to create regulation touching on these aspects of AI development insofar as it relates to AGI.

But it's not just that. I think the specific content of the regulation created has to meet the level of threat in question and so the specific regulation the US adopts should be challengeable. But that's a story for another day.
**C. Conclusion**

If you agree with me on the premises I started from and you endorse the interpretations I’ve adopted then you can see how the US could be found in violation of Article 6(1) if it takes no robust regulatory action on any AI development that’s focused on creating AGI.

**D. Possible headwinds for my argument**

Experts in international human rights law may be skeptical about what I’m proposing because even if you were to get a bully-proof country to take such a case before the Human Rights Committee, the Committee cannot impose sanctions on a state – it is instead limited to making recommendations. I also imagine that some people will claim that the US government would simply scoff at any international law argument about why it should act a certain way. In response to both points, it's worth reiterating what scholars far more exceptional than I could ever be have written: international embarrassment can in fact have a significant galvanizing effect on domestic action. And just as importantly, I think pushing this argument would probably cause more people to become alive to the risks surrounding AI that’s being developed in the US, and that on its own may be a big win for us.

To me, the most compelling reason why this may not be a good argument to push comes from a strategic standpoint. If you are more pro-*playing nice*, pushing this argument would wreak havoc on that approach and perhaps antagonise American policymakers. There is also the chance that robust AI regulation gets framed as a "what foreigners who don't live here and haven't built this country want us to do" argument. I have to say I’m not sure which strategic approach is better between playing nice and being more pugilistic. For now, I present this argument on the assumption that both approaches can and should be pursued.

---

[1] United Nations, UN Treaty Body Database, <https://tbinternet.ohchr.org/_layouts/15/TreatyBodyExternal/Treaty.aspx?CountryID=187&Lang=EN> on May 21 2023. See also, Joseph S and Castan M, International Covenant on Civil and Political Rights: Cases, materials and commentary, 3rd edition, Oxford University Press, 2013, page 8.

[2] Article 4 of the African Charter, Article 4(1) of the American Convention and the first sentence of the European Convention, Article 6(1) of the International Covenant on Civil and Political Rights.

[3] Observations of the United States of America on the Human Rights Committee’s Draft General Comment No. 36 on Article 6 – Right to life, 6 October 2017, para. 13 and 15. See also, CCPR, Concluding observations on the fourth periodic report of the United States of America, CCPR/C/USA/CO/4, 23 April 2014, para. 4. See also, Fourth Periodic Report of the United States of America to the United Nations Committee on Human Rights Concerning the International Covenant on Civil and Political Rights, December 30 2011, para. 504-505.

[4] Observations of the United States of America on the Human Rights Committee’s Draft General Comment No. 36 on Article 6 – Right to life, October 6 2017, para. 31 and 33.

[5] Annakkarage Suranjini Sadamali Pathmini Peiris v Sri Lanka, CCPR Comm No. 1862/2009, April 18 2012. See also, CCPR General Comment 36, page 18.

[6] CCPR General Comment 36, page 22.

[7] Osman v The United Kingdom, ECHR, page 116 and Kurt v Austria, ECHR, page 156.

[8] African Commission on Human and Peoples’ Rights, General Comment 3, page 38.

[9] CCPR General Comment No. 36, Article 6: Right to life, September 3 2019, para. 18 and 21.
[10] Osman v The United Kingdom, ECHR, page 116. [11] See for example, in the ECHR: Osman v The United Kingdom, Judgment of October 28 1998, para. 116; Kurt v Austria, Judgment of 15 June 2021, para. 156 and Kotilainen and others v Finland, Judgment of September 17, para. 69; In the Inter-American Court: Valle Jaramillo et al v Colombia, Judgment of November 27 2008, para. 78; Pueblo Bello Massacre v Colombia, Judgment of January 31 2006, para. 123 and Luna Lopez v Honduras, Judgment of October 10 2013, para. 120 and 124. See as well African Commission, General Comment No. 3 on the African Charter on Human and Peoples’ Rights: The right to life (Article 4), November 18 2015, para. 38 and 41. [12] Osman v The United Kingdom, ECHR, page 116. [13] Mastromatte v Italy, ECHR Judgment of October 24 2002, para. 69 and 74. [14] Ӧneryildiz v Turkey, ECHR Judgment of November 30 2004, para. 100. [15] Hiller v Austria, ECHR Judgment of November 22 2016, page 21.
327632f7-97cd-43c5-b37c-16148835546a
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
AISN #18: Challenges of Reinforcement Learning from Human Feedback, Microsoft’s Security Breach, and Conceptual Research on AI Safety

Welcome to the AI Safety Newsletter by the [Center for AI Safety](https://www.safe.ai/). We discuss developments in AI and AI safety. No technical background required.

Subscribe [here](https://newsletter.safe.ai/subscribe?utm_medium=web&utm_source=subscribe-widget-preamble&utm_content=113135916) to receive future versions.

---

Challenges of Reinforcement Learning from Human Feedback
=========================================================

If you’ve used ChatGPT, you might’ve noticed the “thumbs up” and “thumbs down” buttons next to each of its answers. Pressing these buttons provides data that OpenAI uses to improve their models through a technique called reinforcement learning from human feedback (RLHF).

RLHF is popular for teaching models about human preferences, but it faces fundamental limitations. Different people have different preferences, but instead of modeling the diversity of human values, RLHF trains models to earn the approval of whoever happens to give feedback. Furthermore, as AI systems become more capable, they can learn to deceive human evaluators into giving undue approval. Here we discuss a [new paper](https://arxiv.org/abs/2307.15217) on the problems with RLHF and open questions for future research.

**How RLHF is used to train language models.** Large language models such as ChatGPT, Claude, Bard, LLaMA, and Pi are typically trained in two stages. During the “pretraining” stage, the model processes large amounts of text from the internet. The model learns to predict which word will appear next in the text, which provides it a broad grasp of grammar, facts, reasoning abilities, and even biases and inaccuracies present in the training data.

After pretraining, models are “fine-tuned” for particular tasks. Sometimes they’re fine-tuned to mimic demonstrations of approved behavior. Reinforcement learning from human feedback (RLHF) is a technique where the model provides an output and a human provides feedback on it. The feedback is relatively simple, such as a thumbs up or a ranking against another model output. Using this feedback, the model is refined to produce outputs which it expects would be preferred by human evaluators. For a more detailed technical explanation of RLHF, see [here](https://huyenchip.com/2023/05/02/rlhf.html). Alternatively, for a lighter treatment of the topic, see the “shoggoth” meme below.

![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/svD6fFGWvjsvCxjgM/f6qgzloulmqwfiif2ml6)

The “shoggoth” meme depicts GPT-3 as an alien monster unlike any human being. Fine-tuning with RLHF masks the shoggoth.
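To make the fine-tuning step a bit more concrete, here is a minimal, illustrative sketch of the reward-modelling idea at the core of RLHF: learn a scalar "approval" score from pairwise human preferences, which the language model can then be optimized against (for example with PPO). This is not the pipeline of OpenAI or any other lab mentioned above; the network, the toy embeddings, and the hyperparameters are stand-ins chosen purely for illustration.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a response embedding: higher = more likely to be approved."""

    def __init__(self, embedding_dim: int = 64):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(embedding_dim, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.scorer(response_embedding).squeeze(-1)

def preference_loss(preferred_score: torch.Tensor, rejected_score: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style objective: push the response the evaluator preferred
    # to out-score the one they rejected.
    return -torch.log(torch.sigmoid(preferred_score - rejected_score)).mean()

reward_model = RewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

for _ in range(100):
    # In a real pipeline these would be the language model's representations of
    # full prompt-response pairs; random tensors keep the sketch self-contained.
    preferred = torch.randn(16, 64)  # responses the evaluators liked
    rejected = torch.randn(16, 64)   # responses the evaluators disliked
    loss = preference_loss(reward_model(preferred), reward_model(rejected))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The policy model is then fine-tuned to produce outputs that this learned reward scores highly, which is exactly where the problems discussed next (whose approval, and approval for what) enter the picture.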
**Aligned with whom?** Humans often disagree with one another. Our different values, cultures, and personal experiences shape our beliefs and desires. Instead of modeling different perspectives and considering tradeoffs between them, RLHF trains models to maximize expected approval given previous feedback data, creating two problems.

First, the people who give feedback to AI systems do not necessarily represent everyone. For example, OpenAI [noted](https://arxiv.org/abs/2203.02155) in 2020 that half of their evaluators were between 25 and 34 years old. Research shows that language models often reflect the opinions of only some groups of people. For example, [one study](https://arxiv.org/abs/2303.17548) found GPT-3.5 assigned a >99% approval rating to Joe Biden, while [another](https://r2hcai.github.io/AAAI-23/files/CameraReadys/49.pdf) showed GPT-3 supports the moral foundations of conservatives. Progress on this problem could be made by hiring diverse sets of evaluators and developing methods for improving the representativeness of AI outputs.

Second, by maximizing expected approval given previous feedback, RLHF ignores other ways of aggregating individual preferences, which could prioritize worse-off users. Recent work has explored this idea by using language models to [find areas of agreement](https://arxiv.org/abs/2211.15006) between people who disagree.

**RLHF encourages AIs to deceive humans.** We train AIs by rewarding them for telling the truth and punishing them for lying, according to what humans think is true. But when AIs know more than humans, this could make them say what humans expect to hear, even if it is false. Currently, when AIs are being rewarded by humans for being right, in practice they are really being rewarded for saying what we *think* is right; when we are uninformed or irrational, then we end up rewarding AIs for false statements that conform to our own false beliefs. This could train AIs to learn to deceive humans.

Relatedly, AIs can exploit limitations of human oversight. We’ve already seen empirical examples of this phenomenon. [Research](https://openai.com/research/learning-from-human-preferences) at OpenAI trained a robot in a computer simulation to grab a ball. A human observed the robot and provided positive feedback when it successfully grabbed the ball. Using RLHF, the robot was trained to earn human approval.

![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/svD6fFGWvjsvCxjgM/lhdvtpsvuplo9oqeedaa)

It might look like the robot hand is grasping the yellow ball, but it’s just hovering between the camera and the ball. Human evaluators gave positive feedback for this behavior, so the AI learned to repeat it.

Instead of learning to pick up the ball, the robot learned to hold its hand between the human and the ball, such that the human observer would incorrectly believe the robot had grabbed the ball. Clearly, the robot was not aware of the human or the rest of the outside world. Instead, blind pursuit of positive feedback led the AI system to behave in a way that systematically tricked its human evaluator.

The [paper](https://arxiv.org/abs/2307.15217) provides a variety of other challenges with RLHF. It summarizes existing research on the topic, and provides suggestions for future technical and interdisciplinary work.

Microsoft’s Security Breach
===========================

Chinese hackers have [accessed](https://www.wyden.senate.gov/imo/media/doc/wyden_letter_to_cisa_doj_ftc_re_2023_microsoft_breach.pdf) hundreds of thousands of U.S. government emails by exploiting software provided by Microsoft. Given that GPT-4 and other AI systems are trained on Microsoft servers, this historic breach raises questions about the cybersecurity of AI systems.

An FBI investigation found that “email accounts compromised include the Secretary of Commerce, the U.S. Ambassador to China, and the Assistant Secretary of State for East Asia.” The hackers stole an encryption key from Microsoft, allowing them to impersonate the real owners of these accounts.
![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/svD6fFGWvjsvCxjgM/u7ajdmn2tvocmwoapxa0)This is not the first failure of Microsoft’s security infrastructure. In 2020, the Russian government committed one of the [worst cyberattacks in U.S. history](https://en.wikipedia.org/wiki/2020_United_States_federal_government_data_breach) by stealing a Microsoft key. The hackers breached more than 200 government organizations and most Fortune 500 companies for a period of up to nine months before being discovered. Microsoft’s cybersecurity raises concerns about their ability to prevent cybercriminals from stealing advanced AI systems. OpenAI [trains and deploys](https://openai.com/blog/openai-and-microsoft-extend-partnership) all of their models on Microsoft servers. If *future* advanced AI systems were stolen by cybercriminals, they could pose a global security threat. Less than a month ago, Microsoft and six other leading AI labs [committed](https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/) to the White House to “invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.” After this failure, Microsoft will need to improve their approach to cybersecurity. Conceptual Research on AI Safety ================================ Over the last seven months, the Center for AI Safety hosted around a dozen academic philosophers for a [research fellowship](https://www.safe.ai/philosophy-fellowship). This is part of our ongoing effort to make AI safety an interdisciplinary, non-parochial research field. Here are some highlights from their work. **Do AIs have wellbeing?** Many ethical theories hold that consciousness is not required for moral status. Professor Simon Goldstein and professor Cameron Domenico Kirk-Giannini argue that [some AI systems have beliefs and desires](https://philpapers.org/archive/GOLAWE-4.pdf), and therefore deserve ethical consideration. Here’s an [op-ed](https://www.abc.net.au/religion/ai-generative-agents-are-unethical-and-unsafe/102277448) on the idea. **The safety of language agents.** The same authors argue that language agents are [safer in many respects than reinforcement learning agents](https://philarchive.org/rec/GOLLAR-2), meaning that many prior prominent safety concerns are lessened with this potential paradigm. **Could AIs soon be conscious?** Rob Long and Jeff Sebo review a dozen [commonly proposed conditions for consciousness](https://jeffsebodotnet.files.wordpress.com/2023/06/moral-consideration-for-ai-systems-by-2030-5.pdf), arguing that under these theories, AIs could soon be conscious. See also Rob Long’s [Substack](https://experiencemachines.substack.com/p/key-questions-about-artificial-sentience) on the topic. **Shutting down AI systems.** Elliot Thornley [establishes](https://s3.amazonaws.com/pf-user-files-01/u-242443/uploads/2023-05-02/m343uwh/The%20Shutdown%20Problem-%20Two%20Theorems%2C%20Incomplete%20Preferences%20as%20a%20Solution.pdf) the difficulty of building agents which are indifferent to being shut down, then proposes a plan for doing so. **Rethinking instrumental convergence.** The instrumental convergence thesis holds that regardless of an agent's goal, it is likely for it to be rational to pursue certain subgoals, such as power-seeking and self-preservation, in service of that goal. 
Professor Dmitri Gallow [questions](https://drive.google.com/file/d/1Bp6iTbeXgdeE5C3UPqdWFOp8MePRylvu/view) the idea, arguing that opportunity costs and the possibility of failure make it less rational to pursue these instrumental goals. Relatedly, CAIS affiliate professor Peter Salib argues that goals like [AI self-improvement](https://deliverypdf.ssrn.com/delivery.php?ID=111089002009097078002109066127090087034086041036045026092113022030105100098083118031022029052037057008050107098097111105106084122004033060060127089076083124064085113061041033106029084001070010027094097028108088017030015096006030123110070064099030092022&EXT=pdf&INDEX=TRUE) would not necessarily be rational. **Popular writing.** CAIS philosophy fellows have written various op-eds about [the dangers of AI misuse](https://www.scmp.com/comment/opinion/article/3223116/growing-threat-ai-misuse-makes-need-effective-targeted-regulation-all-more-urgent), [why some AI research shouldn’t be published](https://thebulletin.org/2023/06/most-ai-research-shouldnt-be-publicly-released/), and [AI’s present and future harms](https://www.cd.kg/wp-content/uploads/2023/07/globe_opinion.pdf), and so on. **Academic field-building.** Professor Nathaniel Sharadin is helping organize an [AI Safety workshop](https://aiimpacts.tenureslack.com/about-the-workshop/) at the University of Hong Kong. Cameron Domenico Kirk-Giannini and CAIS Director Dan Hendrycks are editing a [special issue of Philosophical Studies](https://link.springer.com/collections/cadgidecih). Its first paper argues that AIs [will not](https://link.springer.com/article/10.1007/s11098-023-02023-4) necessarily maximize expected utility, and the special issue is open for submission until November 1st, 2023. --- Links ----- * A [detailed overview](https://www.vox.com/future-perfect/23775650/ai-regulation-openai-gpt-anthropic-midjourney-stable) of the AI policies currently being considered by US policymakers. * Confidence-building measures that can be taken [by AI labs and governments](https://arxiv.org/abs/2308.00862). * An interactive visualization of the international supply chain for [advanced computer chips](https://chipexplorer.eto.tech/). * Users will be able to [fine-tune](https://twitter.com/OfficialLoganK/status/1686749315165835264) GPT-3.5 and GPT-4 later this year. * Websites can [opt-out](https://platform.openai.com/docs/gptbot) of being scraped by OpenAI for training their models. * OpenAI files a patent for [GPT-5](https://www.linkedin.com/posts/activity-7092524003159920641-N7MR/), noting plans to transcribe and generate human audio. (This doesn’t mean they’ve trained GPT-5; only that they’re planning for it.) * Meta is building [products with AI chatbots](https://www.reuters.com/technology/meta-prepares-ai-powered-chatbots-attempt-retain-users-ft-2023-08-01/) to help retain users. * [NeurIPS competition](https://trojandetection.ai/) on detecting Trojans and red-teaming language models. * A leading AI researcher argues that it [wouldn’t be bad](https://twitter.com/RichardSSutton/status/1686475184612704256) for AI systems to grow more powerful than humans: “Why shouldn’t those who are the smartest become powerful?” * By listening to the audio from a Zoom call, this AI system can detect [which keys are being struck](https://arxiv.org/abs/2308.01074) on a laptop keyboard with 93% accuracy. 
See also: [CAIS website](https://www.safe.ai/), [CAIS twitter](https://twitter.com/ai_risks?lang=en), [A technical safety research newsletter](https://newsletter.mlsafety.org/), and [An Overview of Catastrophic AI Risks](https://arxiv.org/abs/2306.12001) Subscribe [here](https://newsletter.safe.ai/subscribe?utm_medium=web&utm_source=subscribe-widget-preamble&utm_content=113135916) to receive future versions.
94437bf4-af6b-45db-b492-9a2727fd458e
trentmkelly/LessWrong-43k
LessWrong
Magic Arena Bot Drafting Epistemic Status: Attempting to be useful and to learn via exploration. Real proposals. Content assumes familiarity with Magic: The Gathering booster drafting, but does not much depend on a knowledge of deep Magic strategy, so it holds potential interest for those interested in game narrow AI and machine learning. Reflections Partially Brought on By (Eric Moyer @ Channel-Fireball): Drafting Like a Computer Part 1, Part 2 Part 1 – The Problem is Hard I appreciated Eric Moyer’s articles because they show that building a good bot is hard. They do so by failing to design good bots. Eric’s proposed bots are overly formulaic and predictable, with obvious holes in their systems. If you are feeding one of these bots and know their rankings, you can narrow down their whole path based on possible first pick colors and strengths in highly exploitable ways. The proposed bots will reliably fail to take into account holistic information, they won’t draft playable decks or try to shore up their weaknesses or strengths, and so on, and so on. That’s not because Eric isn’t making a reasonable first attempt here. It’s because the problem is hard! Programming a bot to draft reasonably is hard. What we want above all is a bot experience that feels human, where you’re trying to figure out what is going on, cannot ever fully rely on your hunches, and need to consider all the information at your disposal. Where you get to make hero reads, cram decks down the throats of other drafters, sweat our attempts to wheel key cards, and otherwise have a fresh, exciting and new experience more often. Making this even more difficult is that bots that we know are bots will inherently feel like bots, and that bots will get blamed even more than humans for mistakes. When bots mimic human patterns of doing things that look crazy, the bot often looks crazy rather than realistic or human. As an extreme example. if a person passes us an Ugin, we know that a human made a horrible mistake. Maybe someo
1431bc27-e887-4c4c-a5e9-92e17094b100
trentmkelly/LessWrong-43k
LessWrong
Is there a possibility that the upcoming scaling of data in language models causes A.G.I.? In the past days we have seen DALL-E 2 and Google's Pathways Language Model achievements, and in the next months we will see the outcome of the recently discovered more optimal use of compute: https://www.lesswrong.com/posts/midXmMb2Xg37F2Kgn/new-scaling-laws-for-large-language-model EY believes that A.G.I. could be achieved by: 1) Ability to train with less data. 2) Ability to improve itself. I believe A.G.I. could be close if language models improve on their abilities to code, particularly at recursion.  Is there the possibility or do you believe the path to A.G.I. is orthogonal to the current ML pathway? Or somewhere in between? And if so, where?(This is the question that causes A.G.I.) Could you please share any "AI to align A.G.I." resources do you have please?. The obviously "duh that's not possible" answer that i found is that for AI to do successful AI Safety Align Research it would probably already be too late. Maybe because it could also be used to research A.G.I.?  
199abd41-55aa-4e98-983e-6e30976dc19c
trentmkelly/LessWrong-43k
LessWrong
Meetup : Washington, D.C.: Halloween Party Discussion article for the meetup : Washington, D.C.: Halloween Party WHEN: 30 October 2016 03:00:00PM (-0400) WHERE: Mosaic at Metro Apartments We will be meeting at the residence of one of the meetup regulars for a Halloween party. The host promises "[f]ood, friends, fun, and possibly, the gory ritualistic sacrifice of a giant squash. Potluckery encouraged." If you are planning to attend, please email Richard Bruns (richardbruns[at]gmail.com) so he can add you to the Facebook event. Directions: Take the metro to the Prince Georges Plaza metro stop, or park in the metro garage there. As you come up the metro escalators, you will be facing a traffic circle, and Belcrest Road. Go around the circle, continue, and take a right on the next street (Belcrest Road); the building entrance will be on your right. Call or email Richard Bruns (richardbruns[at]gmail.com) so he can let you in or send someone to do so, or tailgate behind someone else. Upcoming meetups: * Nov. 6: Fun & Games * Nov. 13: Gardening Discussion article for the meetup : Washington, D.C.: Halloween Party
ea94fdaa-6268-4522-ab3f-b6a81efde0c6
StampyAI/alignment-research-dataset/special_docs
Other
The anchoring bias reflects rational use of cognitive resources.

Psychon Bull Rev (2018) 25:322–349. DOI 10.3758/s13423-017-1286-8

THEORETICAL REVIEW

The anchoring bias reflects rational use of cognitive resources

Falk Lieder (1,2) · Thomas L. Griffiths (1,5) · Quentin J. M. Huys (2,4) · Noah D. Goodman (3)

Published online: 8 May 2017. © Psychonomic Society, Inc. 2017

Correspondence: Falk Lieder (falk.lieder@berkeley.edu), Thomas L. Griffiths (tom_griffiths@berkeley.edu), Quentin J. M. Huys (qhuys@biomed.ee.ethz.ch), Noah D. Goodman (ngoodman@stanford.edu). Affiliations: 1 Helen Wills Neuroscience Institute, University of California, Berkeley, USA; 2 Translational Neuromodeling Unit, Institute for Biomedical Engineering, University of Zürich and Swiss Federal Institute of Technology (ETH), Zürich, Switzerland; 3 Department of Psychology, Stanford University, Stanford, USA; 4 Department of Psychiatry, Psychotherapy and Psychosomatics, Hospital of Psychiatry, University of Zürich, Zürich, Switzerland; 5 Department of Psychology, University of California, Berkeley, USA.

Abstract: Cognitive biases, such as the anchoring bias, pose a serious challenge to rational accounts of human cognition. We investigate whether rational theories can meet this challenge by taking into account the mind's bounded cognitive resources. We asked what reasoning under uncertainty would look like if people made rational use of their finite time and limited cognitive resources. To answer this question, we applied a mathematical theory of bounded rationality to the problem of numerical estimation. Our analysis led to a rational process model that can be interpreted in terms of anchoring-and-adjustment. This model provided a unifying explanation for ten anchoring phenomena including the differential effect of accuracy motivation on the bias towards provided versus self-generated anchors. Our results illustrate the potential of resource-rational analysis to provide formal theories that can unify a wide range of empirical results and reconcile the impressive capacities of the human mind with its apparently irrational cognitive biases.

Keywords: Bounded rationality · Heuristics · Cognitive biases · Probabilistic reasoning · Anchoring-and-adjustment · Rational process models

Many classic theories in economics, philosophy, linguistics, social science, and psychology are built on the assumption that humans are rational (Friedman & Savage, 1948; Lohmann, 2008; Hedström & Stern, 2008; Harman, 2013; Frank & Goodman, 2012) and therefore act according to the maxims of expected utility theory (Von Neumann & Morgenstern, 1944) and reason according to the laws of logic (Mill, 1882; Newell et al., 1958; Braine, 1978; Fodor, 1975) or probability theory (Oaksford & Chater, 2007). The assumption that people are rational was challenged when a series of experiments suggested that people's judgments systematically violate the laws of logic (Wason, 1968) and probability theory (Tversky & Kahneman, 1974). For instance, Tversky and Kahneman (1974) showed that people's probability judgments appear to be insensitive to prior probability and sample size but are influenced by irrelevant factors such as the ease of imagining an event or the provision of an unrelated random number. These systematic deviations from the tenets of logic and probability are known as cognitive biases. According to Tversky and Kahneman (1974), cognitive biases result from people's use of fast but fallible cognitive strategies known as heuristics.
Psychon Bull Rev (2018) 25:322–349 323 The discovery of cognitive biases was influential because following the rules of logic and probability was assumed to be the essence of rational thinking. Evidence that people deviate from these rules brings human rationality into ques- tion. This doubt is shaking the foundations of economics,the social sciences, and rational models of cognition. If the human mind does not follow rational principles, then there is little hope that we will be able to able derive unifying lawsof cognition from a basic set of axioms. Without the principles of rationality, there is little guidance for how to translate as- sumptions about cognitive processes into predictions aboutbehavior and how to generalize from our data. But if peoplewere systematically rational in some sense then all of this would be possible, and creating artificial intelligence could go hand in hand with understanding how the mind works.Therefore, the question whether people are rational is fun- damental to how we study the mind, to how we model it, and the implications of our theories for science and society. Despite their cognitive biases, humans still outperform intelligent systems built on the laws of logic and probabilityon many real-world problems. This poses a paradox: howcan we be so smart, if we appear so irrational? The argumentthat people are irrational rests on two premises: First, to be rational is to follow the rules of logic and probability theory. Second, human thought violates the rules of logic and prob-ability. Previous work supports the second premise (Shafir & LeBoeuf, 2002), but in this article we question the first by suggesting that notions of human rationality should take into account that reasoning costs time. The number of compu- tations required for exact logical or probabilistic reasoning grows exponentially with the number of facts and variablesto be considered. As a result, to exactly and completelyreason through just a single complex everyday situation involving hundreds of variables could require more com- putations than can be performed in a human lifetime (VanRooij, 2008). Thus, if a person were to reason out every- thing strictly according to the laws of logic and probabilitytheory she might die before she reached her first conclusion. The laws of logic and probability theory are thus insuf- ficient to give a definition of rationality relevant to anyreal intelligent agent, because the cost of computation hasto be taken into account. To be successful in the worldwe live in, we have to solve complex problems in finite time despite bounded cognitive resources. In this paper, we explore the implications of a different framework forcharacterizing rationality that captures this idea: resource- rationality (Lieder et al., 2012; Griffiths et al., 2015), which builds on the notion of bounded optimality proposed in the artificial intelligence literature by Russell and colleagues (Russell & Wefald, 1991; Russell & Subramanian, 1995; Russell, 1997). We use this alternative characterization of rationality to re-evaluate human performance in tasks used to demonstrate that people’s judgments are biased becausethey are cognitive misers. Achieving demanding goals inlimited time requires balancing being quick and being accu-rate. We regret the opportunities we miss when we fail to make up our mind on time, but we also regret the errors we commit by jumping to conclusions. 
When we think toolittle our judgments can be skewed by irrelevant informa- tion that we happened to see, hear, or think about a moment ago. This phenomenon is known as anchoring. Anchoring is one of the cognitive biases discovered by Tversky and Kahneman (1974). It impacts many important aspects of our lives including the outcome of salary negotiations (Galinsky& Mussweiler, 2001), economic decisions (e.g., Simonson & Drolet, 2004), criminal sentences (Englich et al., 2006), and even our ability to understand other people (Epley et al., 2004). In their classic paper, Tversky and Kahneman ( 1974) showed that people’s judgments could be systematically skewed by providing them with an arbitrary number before their judgment: The experimenter generated a random num-ber by spinning a wheel of fortune, and then asked partici- pants to judge whether the percentage of African countries in the United Nations was smaller or larger than that num-ber. Participants were then asked to estimate this unknownquantity. Strikingly, the participants’ estimates were biased towards the random number: their median estimate was larger when the random number was high than when itwas low. This appears to be a clear violation of ratio- nality. According to Tversky and Kahneman (1974)t h i s violation occurs because people use a two-stage processcalled anchoring-and-adjustment (see also Nisbett & Ross, 1980). In the first stage, people generate a preliminaryjudgment called their anchor . In the second stage, they adjust that judgment to incorporate additional information,but the adjustment is usually insufficient. In Tversky and Kahneman’s experiment people appear to have anchored on the random number provided by the experimenter and adjusted it insufficiently. Consequently, when the anchor w as low people’s judgments were too low, and when the anchor was high their judgments were too high. At first sight, anchoring appears to be irrational, because it deviates from the standards of logic and probability which are typically used to assess rationality. But it could also bea reasonable compromise between error in judgment andthe cost of computation, and hence be resource-rational. Anchoring-and-adjustment has two components that could be irrational: the generation of the anchor and the processby which it is adjusted. Previous research found that when no anchor is provided, the anchors that people generate for themselves are relevant quantities that are reasonably closeto the correct value and can be generated quickly (Epley & Gilovich, 2006). Furthermore, research on human com- munication suggests that in everyday life it is reasonable to assume that other people are cooperative and provide rel-evant information (Schwarz, 2014). Applied to anchoring, 324 Psychon Bull Rev (2018) 25:322–349 this means that if somebody asks you in real life whether a quantity you know very little about is larger or smaller thana certain value, it would be rational to treat that question as a clue to its value (Zhang & Schwarz, 2013). Thus, having the queried value in mind might make it rational to reuse it as your anchor for estimating the unknown quantity. This suggests that the mechanism by which people generate their anchors could be rational in the real world. 1 Assuming that people generate or select anchors in a rea- sonable way, the rationality of anchoring-and-adjustmenthinges on the question whether adjustment is a rationalprocess. 
To answer this question, we investigate whether insufficient adjustment can be understood as a rational tradeoff between time and accuracy. If so, then how much people adjust their initial estimate should adapt rationally to the relative utility of being fast versus being accurate. To formalize this hypothesis, we present a resource-rational analysis of numerical estimation. Our analysis suggests that the rational use of finite resources correctly predicts the anchoring bias and how it changes with various experimental manipulations (see Table 1). These results support the conclusion that adjustment is resource-rational.

The remainder of this article begins with a brief survey of empirical findings on anchoring and discusses the challenges that they pose to existing accounts of anchoring-and-adjustment. We then present our resource-rational analysis of numerical estimation, derive a rational process model that can be interpreted in terms of anchoring-and-adjustment, and show it is sufficient to explain the reviewed phenomena. We close by discussing our findings and their implications for the debate about human rationality.

Empirical findings on the anchoring bias

Anchoring is typically studied in numerical estimation tasks. Numerical estimation involves making an informed guess of the value of an unknown numerical quantity. Since the first anchoring experiment by Tversky and Kahneman (1974) a substantial number of studies have investigated when anchoring occurs and what determines the magnitude of the anchoring bias (see Table 1).

The anchors that people use when forming estimates can be relevant to the quantity they are estimating. For instance, Tversky and Kahneman (1974) found that people sometimes anchor on the result of calculating 1 × 2 × 3 × 4 when the task is estimating 1 × 2 × 3 × 4 × ··· × 8. However, people can also be misled, anchoring on numbers that are irrelevant to the subsequent judgment. For instance, many anchoring experiments first ask their participants whether an unknown quantity is larger or smaller than a given value and then proceed to have them estimate that quantity. Having compared the unknown quantity to the value provided by the experimenter makes people re-use that value as their anchor in the subsequent estimation task. Those numbers are therefore known as provided anchors. Importantly, this procedure works with irrelevant numbers such as the random number that Tversky and Kahneman (1974) generated for their participants or one's own social security number (Ariely et al., 2003). Although asking people to compare the quantity to a given number is particularly effective, the anchoring bias also occurs when anchors are presented incidentally (Wilson et al., 1996), although this effect is smaller and depends on particulars of the anchor and its presentation (Brewer & Chapman, 2002). Furthermore, anchoring-and-adjustment can also occur without an externally provided anchor: At least in some cases people appear to generate their own anchor and adjust from it (Epley & Gilovich, 2004). For instance, when Americans are asked to estimate the boiling point of water on Mount Everest they often recall 212°F (100°C) and adjust downwards to accommodate the lower air pressure at higher altitudes.

Although people's adjustments are usually insufficient, various factors influence their size and consequently the magnitude of the anchoring bias.
For instance, the anchoring bias is larger the more uncertain people are about the quantity to be estimated (Jacowitz & Kahneman, 1995). Indeed, Wilson et al. (1996) found that people knowledgeable about the quantity to be estimated were immune to the anchoring bias whereas less knowledgeable people were susceptible to it. While familiarity (Wright & Anderson, 1989) and expertise (Northcraft & Neale, 1987) do not abolish anchoring, expertise appears to at least reduce it (Northcraft & Neale, 1987). Other experiments have systematically varied the distance from the anchor to the correct value. Their results suggested that the magnitude of the anchoring bias initially increases with the distance from the anchor to the correct value (Russo & Schoemaker, 1989). Yet this linear increase of the anchoring bias does not continue indefinitely. Chapman and Johnson (1994) found that increasing an already unrealistically large anchor increases the anchoring bias less than increasing a realistic anchor by the same amount.

Critically for the resource-rational account proposed here, the computational resources available to people also seem to influence their answers. Time pressure, cognitive load, and alcohol decrease the size of people's adjustments, and inter-individual differences in how much people adjust their initial estimate correlate with relevant personality traits such as the need for cognition (Epley & Gilovich, 2006).

Table 1  Anchoring phenomena and resource-rational explanations

| Anchoring effect | Simulated results | Resource-rational explanation |
|---|---|---|
| Insufficient adjustment from provided anchors | Jacowitz and Kahneman (1995), Tversky and Kahneman (1974) | Rational speed-accuracy tradeoff. |
| Insufficient adjustment from self-generated anchors | Epley and Gilovich (2006), Study 1 | Rational speed-accuracy tradeoff. |
| Cognitive load, time pressure, and alcohol reduce adjustment. | Epley and Gilovich (2006), Study 2 | Increased cost of adjustment reduces the resource-rational number of adjustments. |
| Anchoring bias increases with anchor extremity. | Russo and Schoemaker (1989) | Each adjustment reduces the bias by a constant factor (Eq. 3). Since the resource-rational number of adjustments is insufficient, the bias is proportional to the distance from anchor to correct value. |
| Uncertainty increases anchoring. | Jacowitz and Kahneman (1995) | The expected change per adjustment is small when nearby values have similar plausibility. |
| Knowledge can reduce the anchoring bias. | Wilson et al. (1996), Study 1 | High knowledge means low uncertainty. Low uncertainty leads to high adjustment (see above). |
| Accuracy motivation reduces the anchoring bias when the anchor is self-generated but not when it is provided. | Tversky and Kahneman (1974), Epley and Gilovich (2005) | 1. People are less uncertain about the quantities for which they generate their own anchors. 2. Accuracy motivation increases the number of adjustments but the change per adjustment is lower when people are uncertain. |
| Telling people whether the correct value is larger or smaller than the anchor makes financial incentives more effective. | Simmons et al. (2010), Study 2 | Being told the direction of adjustment makes adjustments more effective, because adjustments in the wrong direction will almost always be rejected. |
| Financial incentives are more effective when the anchor is extreme. | Simmons et al. (2010), Study 3 | Values on the wrong side of an extreme anchor are much less plausible than values on the correct side. Therefore proposed adjustments in the wrong direction will almost always be rejected. |

In addition to effects related to cognitive resources, adjustment also depends on incentives. Intuitively, accuracy motivation should increase the size of people's adjustments and therefore decrease the anchoring bias. Interestingly, experiments have found that accuracy motivation decreases the anchoring bias only in some cases, but not in others (Epley & Gilovich, 2006; Simmons et al., 2010). On questions where people generated their own anchors, financial incentives increased adjustment and reduced the anchoring bias (Epley & Gilovich, 2006; Simmons et al., 2010). But on questions with provided anchors, financial incentives have typically failed to eliminate or reduce the anchoring bias (Tversky & Kahneman, 1974; Ariely et al., 2003), with some exceptions (Wright & Anderson, 1989). A recent set of experiments by Simmons et al. (2010) suggested that accuracy motivation increases adjustment from provided and self-generated anchors if and only if people know in which direction to adjust. Taken together, these findings suggest that the anchoring bias depends on how much cognitive resources people are able and willing to invest.

Before the experiments by Simmons et al. (2010) demonstrated that accuracy motivation can increase adjustment from provided anchors, the bias towards provided anchors appeared immutable by financial incentives (Tversky & Kahneman, 1974; Chapman & Johnson, 2002; Wilson et al., 1996), forewarnings, and time pressure (Mussweiler & Strack, 1999; but see Wright & Anderson, 1989). Since incentives were assumed to increase adjustment and increased adjustment should reduce the anchoring bias, the ineffectiveness of incentives led to the conclusion that the anchoring bias results from a mechanism other than anchoring-and-adjustment, such as selective accessibility (Mussweiler & Strack, 1999; Chapman & Johnson, 2002; Epley, 2004). Later experiments found that when people generate the anchor themselves, accuracy motivation and time pressure are effective (Epley & Gilovich, 2005; Epley et al., 2004; Epley & Gilovich, 2006). This led Epley and Gilovich (2006) to conclude that people use the anchoring-and-adjustment strategy only when they generated the anchor themselves, whereas provided anchors bias judgments through a different mechanism.

The wide range of empirical phenomena summarized in Table 1 have suggested a correspondingly wide range of explanations, including the idea that anchoring and adjustment is not a simple, unitary process. In the remainder of the paper we explore an alternative account, showing that these disparate and seemingly inconsistent phenomena can all be explained by a unifying principle: the rational use of finite time and cognitive resources. From this principle we derive a resource-rational anchoring-and-adjustment model and show that it is sufficient to explain the anchoring bias regardless of whether the anchor was provided or self-generated.

Anchoring and adjustment as resource-rational inference

In this section we formalize the problem people solve in anchoring experiments (numerical estimation) and analyze how it can be efficiently solved in finite time with bounded cognitive resources. We thereby derive a resource-rational model of anchoring-and-adjustment. We then use this model to explain a wide range of anchoring phenomena.
Conceptually, our model assumes that adjustment proceeds by repeatedly considering small changes to the current estimate. The proposed change is accepted or rejected probabilistically such that the change is more likely to be made the more probable the new value is and the less probable the current one is (see Fig. 1). After sufficiently many adjustments the estimate becomes correct on average and independent of the initial guess. However, each small adjustment costs a certain amount of time. According to our model, the number of steps is chosen to minimize the expected value of the time cost of adjustment plus the error cost of the resulting estimate. In the remainder of this section, we derive our model from first principles, specify it in detail, and show that the optimal number of adjustments is very small. As Fig. 1 illustrates, this causes the final estimates to be biased towards their respective anchors.

In contrast to previous theories of anchoring (Epley & Gilovich, 2006; Simmons et al., 2010), our model precisely specifies the number, size, and direction of adjustments as a function of the task's incentives and the participant's knowledge. In contrast to the proposal by Epley and Gilovich (2006), our model covers adjustments from provided anchors and self-generated anchors. Furthermore, while Epley and Gilovich (2006) assumed that the correct direction of adjustment is known, our model does not make this assumption and allows the direction of adjustment to change from one step to the next. The model by Simmons et al. (2010) also makes these conceptual assumptions. However, it does not specify precisely how the direction and size of each adjustment are determined. While their model predicts a deterministic back-and-forth in the face of uncertainty, our model assumes that adjustments that improve the estimate are probabilistically preferred to adjustments that do not. This enables our model to capture streaks of adjustments in the correct direction interrupted by small steps in the wrong direction, whereas the model by Simmons et al. (2010) appears to predict that the direction of adjustment should constantly alternate. Finally, while both previous models assumed that adjustment stops as soon as the current estimate is sufficiently plausible (Epley & Gilovich, 2006; Simmons et al., 2010), we propose that the number of adjustments is pre-determined adaptively to achieve an optimal speed-accuracy tradeoff on average. In the subsequent section we apply the resulting model to explain the various anchoring phenomena summarized in Table 1.

Resource-rational analysis

Resource-rational analysis is a new approach to answering a classic question: how should we think and decide given that our time and our minds are finite? In economics this problem was first identified by Simon (1955, 1956, 1972). Simon pointed out that our finite computational capacities make it impossible for us to always find the best course of action, because we cannot consider all possible consequences. He illustrated this using the game of chess, where choosing the optimal move would require considering about 10^120 possible continuations. Thus, Simon concluded, to adequately model human behavior we need a theory of rationality that takes our minds' limits into account. Simon called such an approach bounded rationality, emphasizing that it depends on the structure of the environment (Simon, 1956) and entails satisficing, that is, accepting suboptimal solutions that are good enough, rather than optimizing.
Subsequent research has identified simple heuristics that make good, but not necessarily optimal, decisions and judgments very efficiently (Tversky, 1972; Thorngate, 1980; McKenzie, 1994; Gigerenzer & Goldstein, 1996). Thinking is assumed to be costly and alternative strategies differ in the amount of thinking they entail (e.g., Shugan, 1980). Based on this line of reasoning, it has been proposed that people adaptively select their cognitive strategies from a toolbox of simple heuristics (Gigerenzer & Selten, 2002) according to a cost-benefit analysis (Beach & Mitchell, 1978; Payne et al., 1993).

While Simon (1955) provided some formal examples of satisficing strategies, he viewed bounded rationality as a principle rather than a formal framework. Subsequent researchers have tried to formally capture the tradeoff between time and errors. Good (1983) formulated this idea in terms of the maximization of expected utility taking into account deliberation cost. Intuitively, this means that rational bounded agents optimally trade off the expected utility of the action that will be chosen with the corresponding deliberation cost. Yet, Good (1983) did not make this notion mathematically precise. Furthermore, his formulation does not take into account the deliberation cost of determining the optimal tradeoff between expected utility and deliberation cost. These problems were solved by Russell and colleagues (Russell & Wefald, 1991; Russell & Subramanian, 1995; Russell, 1997), who provided a complete, formal, mathematical theory of the rationality of bounded agents. In this framework, agents are considered to be rational if they follow the algorithm that makes the best possible use of their computational architecture (e.g., hardware) and time.

Fig. 1  Illustration of resource-rational anchoring-and-adjustment. The three jagged lines are examples of the stochastic sequences of estimates the adjustment process might generate starting from a low, medium, and high anchor, respectively. In each iteration a potential adjustment is sampled from a proposal distribution p_prop illustrated by the bell curves. Each proposed adjustment is stochastically accepted or rejected such that over time the relative frequency with which different estimates are considered, q(x̂_t), becomes the target distribution p(x|k). The top of the figure compares the empirical distribution of the samples collected over the second half of the adjustments with the target distribution p(x|k). Importantly, this distribution is the same for each of the three sequences. In fact, it is independent of the anchor, because the influence of the anchor vanishes as the number of adjustments increases. Yet, when the number of adjustments (iterations) is low (e.g., 25), the estimates are still biased towards their initial values. The optimal number of iterations i* is very low, as illustrated by the dotted line. Consequently, the resulting estimates (indicated by the three crosses) are still biased towards their respective anchors.

Resource-rational analysis leverages this abstract theory for understanding the human mind. To be resource-rational is to make optimal use of one's finite time and limited cognitive resources.
Resource-rational analysis (Griffiths et al., 2015) derives rational process models of cognitive abilities from formal definitions of their function and abstract assumptions about the mind's computational architecture. This function-first approach starts at the computational level of analysis (Marr, 1982). When the problem solved by the cognitive capacity under study has been formalized, resource-rational analysis postulates an abstract computational architecture, that is, a set of elementary operations and their costs, with which the mind might solve this problem. Next, resource-rational analysis derives the algorithm that is optimal for solving the problem identified at the computational level with the abstract computational architecture. The resulting process model can be used to simulate people's responses and reaction times in a given experiment. The model's predictions are tested against empirical data. Based on this evaluation, the assumptions about the computational architecture and the problem to be solved are revised.

Resource-rational analysis of numerical estimation

Having introduced the basic concepts of resource rationality, we now apply resource-rational analysis to numerical estimation: We start by formalizing the problem solved by numerical estimation. Next, we specify an abstract computational architecture. We then derive the optimal solution to the numerical estimation problem afforded by the computational architecture. This resource-rational strategy will then be evaluated against empirical data in the remainder of this article.

Function

In numerical estimation people have to make an informed guess about an unknown quantity X based on their knowledge K. In general, people's relevant knowledge K is incomplete and insufficient to determine the quantity X with certainty. For instance, people asked to estimate the boiling point of water on Mount Everest typically do not know its exact value, but they do know related information, such as the boiling point of water at normal altitude, the freezing point of water, the qualitative relationship between altitude, air pressure, and boiling point, and so on. We formalize people's uncertain belief about X by the probability distribution P(X|K), which assigns a plausibility p(X = x|K) to each potential value x. According to Bayesian decision theory, the goal is to report the estimate x̂ with the highest expected utility E_{P(X|K)}[u(x̂, x)]. This is equivalent to finding the estimate with the lowest expected error cost

$$x^\star = \arg\min_{\hat{x}} \; \mathbb{E}_{P(X|K)}\left[\mathrm{cost}(\hat{x}, x)\right], \qquad (1)$$

where x⋆ is the optimal estimate, and cost(x̂, x) is the error cost of the estimate x̂ when the true value is x. Here, we assume that the error cost is the absolute deviation of the estimate from the true value, that is, cost(x̂, x) = |x̂ − x|.

Model of mental computation

How the mind should solve the problem of numerical estimation (see Eq. 1) depends on its computational architecture. Thus, to derive predictions from the assumption of resource-rationality we have to specify the mind's elementary operations and their cost. To do so, we build on the resource-rational analysis by Vul et al. (2014), which assumed that the mind's elementary computation is sampling. Sampling is widely used to solve inference problems in statistics, machine learning, and artificial intelligence (Gilks et al., 1996).
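To make Eq. 1 concrete, it can be approximated with samples from the belief distribution P(X|K). The following Python sketch (not part of the original analysis) uses a placeholder normal belief and a grid of candidate estimates; it also illustrates a standard fact that under the absolute-deviation cost the minimizer is the posterior median.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder belief P(X|K): a normal distribution chosen only for illustration.
belief_samples = rng.normal(loc=100.0, scale=15.0, size=20_000)

def expected_error_cost(estimate, samples):
    """Monte Carlo estimate of E_{P(X|K)}[ |estimate - X| ] from Eq. 1."""
    return float(np.mean(np.abs(estimate - samples)))

# Search a grid of candidate estimates for the one with the lowest expected cost.
candidates = np.linspace(50, 150, 501)
costs = [expected_error_cost(c, belief_samples) for c in candidates]
x_star = candidates[int(np.argmin(costs))]

# Under absolute-error cost the optimal estimate coincides with the posterior median.
print(x_star, float(np.median(belief_samples)))
```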
Several behavioral and neuroscientific experiments suggest that the brain uses computational mechanisms similar to sampling for a wide range of inference problems ranging from vision to causal learning (Vul et al., 2014; Denison et al., 2013; Bonawitz et al., 2014; Bonawitz et al., 2014; Griffiths & Tenenbaum, 2006; Stewart et al., 2006; Fiser et al., 2010). One piece of evidence is that people's estimates of everyday events are highly variable even though the average of their predictions tends to be very close to the optimal estimate prescribed by Bayesian decision theory (see Eq. 1; Griffiths & Tenenbaum, 2006, 2011). Furthermore, Vul et al. (2014) found that the relative frequency with which people report a certain value as their estimate is roughly equal to its posterior probability, as if the mind were drawing one sample from the posterior distribution.

Sampling stochastically simulates the outcome of an event or the value of a quantity such that, on average, the relative frequency with which each value occurs is equal to its probability. According to Vul et al. (2014), people may estimate the value of an unknown quantity X using only a single sample from the subjective probability distribution P(X|K) that expresses their beliefs. If the expected error cost (Eq. 1) is approximated using a single sample x̃, then that sample becomes the optimal estimate. Thus, the observation that people report estimates with frequency proportional to their probability is consistent with them approximating the optimal estimate using only a single sample.

However, for the complex inference problems that people face in everyday life, generating even a single perfect sample can be computationally intractable. Thus, while sampling is a first step from computational-level theories based on probabilistic inference towards cognitive mechanisms, a more detailed process model is needed to explain how simple cognitive mechanisms can solve the complex inference problems of everyday cognition. Here, we therefore explore a more fine-grained model of mental computation whose elementary operations serve to approximate sampling. In statistics, machine learning, and artificial intelligence, sampling is often approximated by Markov chain Monte Carlo (MCMC) methods (Gilks et al., 1996). MCMC algorithms allow the drawing of samples from arbitrarily complex distributions using a stochastic sequence of approximate samples, each of which depends only on the previous one. Such stochastic sequences are called Markov chains; hence the name Markov chain Monte Carlo.

In the remainder of the paper, we explore the consequences of assuming that people answer numerical estimation questions by engaging in a thought process similar to MCMC. We assume that the mind's computational architecture supports MCMC by two basic operations: The first operation takes in the current estimate and stochastically modifies it to generate a new one. The second operation compares the posterior probability of the new estimate to that of the old one and accepts or rejects the modification stochastically. Furthermore, we assume that the cost of computation is proportional to how many such operations have been performed. These two basic operations are sufficient to execute an effective MCMC strategy for probabilistic inference known as the Metropolis-Hastings algorithm (Hastings, 1970). This algorithm is the basis for our anchoring-and-adjustment models as illustrated in Fig. 1.
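As a concrete illustration of these two operations (a sketch, not the authors' implementation, and formalized in the next paragraph), a single Metropolis-Hastings-style adjustment step can be written as follows; the belief distribution, proposal width, and starting value are hypothetical placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
belief = stats.norm(100.0, 15.0)      # assumed belief P(X|K), for illustration only

def adjustment_step(current, proposal_sd=5.0):
    """Operation 1: stochastically modify the current estimate.
    Operation 2: accept or reject the modification by comparing plausibilities."""
    proposal = current + rng.normal(0.0, proposal_sd)
    accept_prob = min(1.0, belief.pdf(proposal) / belief.pdf(current))
    return proposal if rng.random() < accept_prob else current

estimate = 160.0                      # e.g., an anchor well above the belief's mean
for _ in range(10):
    estimate = adjustment_step(estimate)
print(estimate)                       # after only a few steps, still pulled toward 160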
To be concrete, given an initial guess x̂_0, which we will assume to be the anchor a (x̂_0 = a), this algorithm performs a series of adjustments. In each step a potential adjustment δ is proposed by sampling from a symmetric probability distribution P_prop (δ ∼ P_prop, with P_prop(−δ) = P_prop(δ)). The adjustment will either be accepted, that is x̂_{t+1} = x̂_t + δ, or rejected, that is x̂_{t+1} = x̂_t. If a proposed adjustment makes the estimate more probable (P(X = x̂_t + δ|K) > P(X = x̂_t|K)), then it will always be accepted. Otherwise the adjustment will be made with probability

$$\alpha = \frac{P(X = \hat{x}_t + \delta \mid K)}{P(X = \hat{x}_t \mid K)},$$

that is, according to the posterior probability of the adjusted relative to the unadjusted estimate. This strategy ensures that regardless of which initial value you start from, the frequency with which each value x has been considered will eventually equal its subjective probability of being correct, that is P(X = x|K). This is necessary to capture the finding that the distribution of people's estimates is very similar to the posterior distribution P(X = x|K) (Vul et al., 2014; Griffiths & Tenenbaum, 2006). More formally, we can say that as the number of adjustments t increases, the distribution of estimates Q(x̂_t) converges to the posterior distribution P(X|K). This model of computation has the property that each adjustment decreases an upper bound on the expected error by a constant multiple (Mengersen & Tweedie, 1996). This property is known as geometric convergence and is illustrated in Fig. 2.

There are several good reasons to consider this computational architecture as a model of mental computation in the domain of numerical estimation: First, the success of MCMC methods in statistics, machine learning, and artificial intelligence suggests they are well suited for the complex inference problems people face in everyday life. Second, MCMC can explain important aspects of cognitive phenomena ranging from category learning (Sanborn et al., 2010) to the temporal dynamics of multistable perception (Moreno-Bote et al., 2011; Gershman et al., 2012), causal reasoning in children (Bonawitz et al., 2014), and developmental changes in cognition (Bonawitz et al., 2014). Third, MCMC is biologically plausible in that it can be efficiently implemented in recurrent networks of biologically plausible spiking neurons (Buesing et al., 2011). Last but not least, process models based on MCMC might be able to explain why people's estimates are both highly variable (Vul et al., 2014) and systematically biased (Tversky & Kahneman, 1974).

Fig. 2  In resource-rational anchoring-and-adjustment the bias of the estimate is bounded by a geometrically decaying function of the number of adjustments. The plot shows the bias of resource-rational anchoring-and-adjustment as a function of the number of adjustments for five different initial values located 1, ..., 5 posterior standard deviations (i.e., σ) away from the posterior mean. The standard normal distribution was used as both the posterior P(X|K) and the proposal distribution P_prop(δ).

Optimal resource-allocation

Resource-rational anchoring-and-adjustment makes three critical assumptions: First, the estimation process is a sequence of adjustments such that after sufficiently many steps the estimate will be a representative sample from the belief P(X|K) about the unknown quantity X given the knowledge K.
Second, each adjustment costs a fixed amount of time. Third, the number of adjustments is chosen to achieve an optimal speed-accuracy tradeoff. It follows that people should perform the optimal number of adjustments, that is

$$t^\star = \arg\min_{t} \; \mathbb{E}_{Q(\hat{x}_t)}\left[\mathrm{cost}(x, \hat{x}_t) + \gamma \cdot t\right], \qquad (2)$$

where Q(x̂_t) is the distribution of the estimate after t adjustments, x is its unknown true value, x̂_t is the estimate after performing t adjustments, cost(x, x̂_t) is its error cost, and γ is the time cost per adjustment. Figure 3 illustrates this equation, showing how the expected error cost (which decays geometrically with the number of adjustments) and the time cost (which increases linearly) determine the optimal speed-accuracy tradeoff.

Fig. 3  The expected value of the error cost cost(x, x̂_n), shown in green, decays nearly geometrically with the number of adjustments n. While the decrease of the error cost diminishes with the number of adjustments, the time cost γ·t, shown in red, continues to increase at the same rate. Consequently, there is a point when further decreasing the expected error cost by additional adjustments no longer offsets their time cost, so that the total cost, shown in blue, starts to increase. That point is the optimal number of adjustments t⋆.

We inspected the solution to Eq. 2 when the belief and the proposal distribution are standard normal distributions (i.e., P(X|K) = P_prop = N(0, 1)) for different anchors. We found that for a wide range of realistic time costs the optimal number of adjustments (see Fig. 4a) is much smaller than the number of adjustments that would be required to eliminate the bias towards the anchor. Consequently, the estimate obtained after the optimal number of adjustments is still biased towards the anchor, as shown in Fig. 4b. This is a consequence of the geometric convergence of the error (see Fig. 2), which leads to quickly diminishing returns for additional adjustments. This is a general property of this rational model of adjustment that can be derived mathematically (Lieder et al., 2012). The optimal speed-accuracy tradeoff weights the costs in different estimation problems according to their prevalence in the agent's environment; for more information please see Appendix B.

Resource-rational explanations of anchoring phenomena

Following the definition of the bias of an estimator in mathematical statistics, we quantify the anchoring bias by B_t(x, a) = E[x̂_t | x, a] − x, where x̂_t is a participant's estimate of a quantity x after t adjustments, and a denotes the anchor. Figure 5 illustrates this definition and four basic ideas: First, the average estimate generated by anchoring-and-adjustment equals the anchor plus the adjustment. Second, the adjustment equals the relative adjustment times the total distance from the anchor to the posterior expectation. Third, adjustments tend to be insufficient, because the relative adjustment size is less than one. Therefore, the average estimate usually lies between the anchor and the correct value. Fourth, because the relative adjustment is less than one, the anchoring bias increases linearly with the distance from the anchor to the correct value.

More formally, the bias of resource-rational anchoring-and-adjustment cannot exceed a geometrically decaying function of the number of adjustments, as illustrated in Fig. 2:

$$B_t(x, a) = \mathbb{E}[\hat{x}_t \mid x, a] - x \le B_0(x, a) \cdot r^t = (a - x) \cdot r^t, \qquad (3)$$

where r is the rate of convergence to the distribution P(X|K) that formalizes people's beliefs.
Consequently, assuming that the bound is tight, resource-rational anchoring-and-adjustment predicts that, on average, people's predictions x̂ are a linear function of the correct value x and the anchor a:

$$\mathbb{E}[\hat{x}_t \mid x, a] \approx a \cdot r^t + (1 - r^t) \cdot x. \qquad (4)$$

Therefore the anchoring bias remaining after a fixed number of adjustments increases linearly with the distance from the anchor to the correct value, as illustrated in Fig. 5.

The hypothesis that the mind performs probabilistic inference by sequential adjustment makes the interesting, empirically testable prediction that the less time and computation a person invests into generating an estimate, the more biased her estimate will be towards the anchor. As illustrated in Fig. 6a, the relative adjustment (see Fig. 5) increases with the number of adjustments. When the number of adjustments is zero, then the relative adjustment is zero and the prediction is the anchor regardless of how far it is away from the correct value. However, as the number of adjustments increases, the relative adjustment increases and the predictions become more informed by the correct value. As the number of adjustments tends to infinity, the average guess generated by anchoring-and-adjustment converges to the expected value of the posterior distribution.

Our analysis of optimal resource-allocation shows that, for a wide range of plausible costs of computation, the resource-rational number of adjustments is much smaller than the number of adjustments required for convergence to the posterior distribution. This might explain why people's estimates of unknown quantities are biased towards their anchor across a wide range of circumstances. Yet, optimal resource allocation also entails that the number of adjustments increases with the relative cost of error and decreases with the relative cost of time. Hence, our theory predicts that the anchoring bias is smaller when errors are costly and larger when time is costly; Fig. 6b illustrates this prediction.

Fig. 4  Optimal number of adjustments (a) and the bias after the optimal number of adjustments (b) as a function of relative time cost and distance from the anchor.

Fig. 5  If the relative adjustment is less than 100%, then the adjustment is less than the distance from the anchor and the prediction is biased (Panel a), and the magnitude of the anchoring bias increases with the distance of the correct value from the anchor (Panel b).

Although we derived the implications of making rational use of finite cognitive resources for a specific computational mechanism based on sampling, the crucial property of diminishing returns per additional computation is a universal feature of iterative inference mechanisms including approximate Bayesian computation (Sunnåker et al., 2013; Turner & Sederberg, 2012), (stochastic) gradient descent, variational Bayes, predictive coding (Friston & Kiebel, 2009; Friston, 2009), and probabilistic computation in cortical microcircuits (Habenschuss et al., 2013). Therefore, the qualitative predictions shown in Figs. 3–6 are not specific to the abstract computational architecture that we chose to analyze but characterize bounded rationality for a more general class of cognitive architectures. Hence, while we do not claim that the brain implements the sampling algorithm we have analyzed, there are many biologically and psychologically plausible mechanisms that share the same characteristics. We will elaborate on this idea in the General Discussion.
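Before turning to the simulations, the speed-accuracy tradeoff in Eq. 2 can be illustrated with a small sketch. This is not the fitted model from the following sections: the belief, proposal width, time cost γ, and anchor are assumed values, and the expectation is approximated by brute-force Monte Carlo.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
posterior = stats.norm(0.0, 1.0)       # belief P(X|K): standard normal, as in Fig. 2

def final_estimate(anchor, t, prop_sd=1.0):
    """Estimate after t Metropolis-Hastings-style adjustments starting at the anchor."""
    est = anchor
    for _ in range(t):
        delta = rng.normal(0.0, prop_sd)
        if rng.random() < min(1.0, posterior.pdf(est + delta) / posterior.pdf(est)):
            est += delta
    return est

def expected_total_cost(anchor, t, gamma, n_runs=500):
    """Monte Carlo estimate of E[|x - x_hat_t|] + gamma * t (cf. Eq. 2),
    with the true value x drawn from the same posterior."""
    errors = [abs(rng.normal(0.0, 1.0) - final_estimate(anchor, t))
              for _ in range(n_runs)]
    return float(np.mean(errors)) + gamma * t

gamma = 0.05                           # assumed relative time cost per adjustment
anchor = 3.0                           # anchor three posterior SDs above the mean
ts = list(range(0, 41, 2))
costs = [expected_total_cost(anchor, t, gamma) for t in ts]
t_star = ts[int(np.argmin(costs))]
print("resource-rational number of adjustments:", t_star)
print("mean estimate at t*:", float(np.mean([final_estimate(anchor, t_star) for t in range(200)])))
```

With any appreciable time cost the minimizing t is small, and the mean estimate at that t remains noticeably above the posterior mean of zero, which is the residual anchoring bias the theory predicts.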
In the following sections, we assess these and other predictions of our model.

Simulation of anchoring effects

Having derived a resource-rational model of anchoring-and-adjustment, we performed computer simulations to test whether this model is sufficient to explain the plethora of anchoring effects reviewed above. To capture our assumption that people make adjustments in discrete steps, we model the size of adjustments using the Poisson distribution P(δ) = Poisson(|δ|; μ_prop). The simulated effects cover a wide range of different phenomena, and our goal is to account for all of these phenomena with a single model.

Fig. 6  The number of adjustments increases the relative size of adjustments (left panel). As the relative cost of time increases, the number of adjustments decreases and so does the relative size of the adjustment (right panel).

Simulation methodology

We simulated the anchoring experiments listed in Table 1 with the resource-rational anchoring-and-adjustment model described above. The participants in each of these experiments were asked to estimate the value of one or more quantities X; for instance, Tversky and Kahneman (1974) asked their participants to estimate the percentage of African countries in the United Nations. Our model's prediction of people's estimates of a quantity X depends on their probabilistic belief P(X|K) based on their knowledge K, the number of adjustments, the anchor, and the adjustment step-size. Thus, before we could apply our model to simulate anchoring experiments, we had to measure people's probabilistic beliefs P(X|K) about the quantities used in the simulated experiments. Appendix C describes our methodology and reports the estimates we obtained.

To accommodate differences in the order of magnitude of the quantities to be estimated and the effect of incentives for accuracy, we estimated two parameters for each experiment: the expected step-size μ_prop of the proposal distribution P(δ) = Poisson(|δ|; μ_prop) and the relative iteration cost γ. These parameters were estimated by the ordinary least-squares method applied to the summary statistics reported in the literature. For experiments comprising multiple conditions using the same questions with different incentives for accuracy, we estimated a single step-size parameter that is expected to apply across all conditions and a distinct relative time cost parameter for each incentive condition.

Insufficient adjustment from provided and self-generated anchors

Resource-rational anchoring-and-adjustment provides a theoretical explanation for insufficient adjustment from provided and self-generated anchors in terms of a rational speed-accuracy tradeoff, but how accurately does this describe empirical data? To answer this question, we fit our model to two well-known anchoring experiments: one with provided and one with self-generated anchors.

Provided anchors  As an example of adjustment from provided anchors, we chose the study by Jacowitz and Kahneman (1995), because it rigorously quantifies the anchoring bias. Jacowitz and Kahneman (1995) asked their participants two questions about each of several unknown quantities: First they asked whether the quantity is larger or smaller than a certain value (the provided anchor). Next they asked the participant to estimate that quantity. For the first half of the participants the anchor was a low value (i.e.,
the 15th percentile of estimates people make when no anchor is provided), and for the second half of the participants the anchor was a high value (i.e., the 85th percentile). People's estimates were significantly higher when the anchor was high than when it was low. Jacowitz and Kahneman (1995) quantified this effect by the anchoring index (AI), which is the percentage of the distance from the low to the high anchor that is retained in people's estimates:

$$\mathrm{AI} = \frac{\mathrm{Median}\left(\hat{X}_{\text{high anchor}}\right) - \mathrm{Median}\left(\hat{X}_{\text{low anchor}}\right)}{\text{high anchor} - \text{low anchor}} \cdot 100\% \qquad (5)$$

Jacowitz and Kahneman (1995) found that the average anchoring index was about 50%. This means that the difference between people's estimates in the high versus the low anchor condition retained about half of the distance between the two anchors.

We determined the uncertainty σ for each of the 15 quantities by the elicitation method described above. Since Jacowitz and Kahneman (1995) measured people's median estimates in the absence of any anchor, we used those values as our estimates of the expected values μ, because their sample and its median estimates were significantly different from ours. Next, we estimated the adjustment step-size parameter and the relative time cost parameter by minimizing the sum of squared errors between the predicted and the observed anchoring indices. According to the estimated parameters, people performed 29 adjustments with an average step-size of 22.4 units. With these two estimated parameters the model accurately captures the insufficient adjustment from provided anchors reported by Jacowitz and Kahneman (1995): The model's adjustments are insufficient (i.e., anchoring index > 0; see Eq. 5) on all questions for which this had been observed empirically but not for the question on which it had not been observed; see Fig. 7. Our model also captured the magnitude of the anchoring bias: the model's average anchoring index of 53.22% was very close to its empirical counterpart of 48.48%. Furthermore, our model also captured for which questions the anchoring bias was high and for which it was low: the correlation between the predicted and the empirical anchoring indices was significant (r(13) = 0.62, p = 0.0135). The simulated and empirical anchoring effects are shown in Fig. 7.

Fig. 7  Simulation of the provided anchor experiment by Jacowitz and Kahneman (1995).

Self-generated anchors  As an example of adjustment from self-generated anchors we chose the studies reported in Epley and Gilovich (2006). In each of these studies participants were asked to estimate one or more unknown quantities such as the boiling point of water on Mount Everest, for which many participants readily retrieved a well-known related quantity such as 212°F (100°C). Afterwards participants were asked whether they knew and had thought of each intended anchor while answering the corresponding question. For each question, Epley and Gilovich (2006) computed the mean estimate of those participants who had thought of the intended anchor while answering it. We combined the data from all self-generated anchor questions without additional experimental manipulations for which Epley and Gilovich (2006) reported people's mean estimate, i.e., the first five questions from Study 1a, the first five questions from Study 1b, the control conditions of Study 2b (2 questions), and the first seven questions from Study 2c (the quantities were the year in which Washington was elected president, the boiling point on Mt. Everest, the freezing point of vodka, the lowest body temperature, the highest body temperature, and the duration of pregnancy in elephants; some of these quantities were used in multiple studies).
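Both this analysis and several of the simulations below summarize the bias with the anchoring index of Eq. 5. A minimal sketch of that computation, using hypothetical estimates and anchors rather than any of the data sets discussed here:

```python
import numpy as np

def anchoring_index(estimates_high, estimates_low, high_anchor, low_anchor):
    """Anchoring index (Eq. 5): percentage of the anchor gap retained in the
    median estimates of the high- versus low-anchor groups."""
    gap_retained = np.median(estimates_high) - np.median(estimates_low)
    return 100.0 * gap_retained / (high_anchor - low_anchor)

# Hypothetical estimates from two groups given a low and a high anchor.
low_anchor, high_anchor = 25, 130
estimates_low = [40, 55, 60, 70, 85]
estimates_high = [90, 100, 110, 120, 140]
print(anchoring_index(estimates_high, estimates_low, high_anchor, low_anchor))
# -> roughly 48%, i.e., about half of the anchor gap is retained in the estimates
```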
We determined the means and uncertainties of the model's beliefs about all quantities used in Epley and Gilovich's studies by the elicitation method described above. The anchors were set to the intended self-generated anchors reported by Epley and Gilovich (2006). We estimated the model's time cost and adjustment step-size parameters by fitting the relative adjustments reported for these studies using the ordinary least-squares method.

The estimated parameters suggest that people performed 8 adjustments with an average step-size of 10.06 units. With these parameters the model adjusts its initial estimate by 80.62% of the distance to the correct value; this is very close to the 80.95% relative adjustment that Epley and Gilovich (2006) observed on average across the simulated studies. Our model captures that for the majority of quantities (13 out of 19) people's adjustments were insufficient. It also captures for which questions people adjust more and for which questions they adjust less, based on their uncertainties and anchors: as shown in Fig. 8, our model's predictions of the relative adjustments were significantly correlated with the relative adjustments that Epley and Gilovich (2006) observed across different questions (r(17) = 0.61, p = 0.0056). Comparing the parameter estimates between the experiments with provided versus self-generated anchors suggests that people adjusted less when they had generated the anchor themselves. This makes sense because self-generated anchors are typically much closer to the correct value than provided anchors.

Fig. 8  Simulation of the self-generated anchors experiment by Epley and Gilovich (2006).

Effect of cognitive load

In an experiment with self-generated anchors, Epley and Gilovich (2006) found that people adjust their estimate less when required to simultaneously memorize an eight-letter string. To investigate whether resource-rational anchoring-and-adjustment can capture this effect, we fit our model simultaneously to participants' relative adjustment with versus without cognitive load. Concretely, we estimated a common step-size parameter and separate time cost parameters for each condition by the least-squares method. We included all items for which Epley and Gilovich (2006) reported people's estimates. The resulting parameter estimates captured the effect of cognitive load: when people were cognitively busy, the estimated cost per adjustment was 4.58% of the error cost, but when people were not cognitively busy it was only 0.003% of the error cost. The estimated average step-size per adjustment was μ = 11.69. According to these parameters participants performed only 14 adjustments when they were under cognitive load but 60 adjustments when they were not. With these parameters our model captures the effect of cognitive load on relative adjustment: cognitive load reduced the simulated adjustments by 18.61% (83.45% under load and 102.06% without load).
These simulated effects are close to their empirical counterparts: people adjusted their estimate 72.2% when under load and 101.4% without cognitive load (Epley & Gilovich, 2006). Furthermore, the model accurately captured for which questions the effect of cognitive load was high and for which it was low; see Fig. 9. Concretely, our model explained 93.03% of the variance in the effect of cognitive load on relative adjustments (r(5) = 0.9645, p < 0.001).

Fig. 9  Simulated versus observed effect of cognitive load on the size of people's adjustments.

The anchoring bias increases with anchor extremity

Next we simulated the anchoring experiment by Russo and Schoemaker (1989). In this experiment business students were first asked about the last three digits of their telephone number. Upon hearing the number the experimenter announced he would add 400 to this number (providing an anchor) and proceeded to ask the participant whether the year in which Attila the Hun was defeated in Europe was smaller or larger than that sum. When the participant indicated her judgment, she was prompted to estimate the year in which Attila had actually been defeated. Russo and Schoemaker (1989) then compared the mean estimate between participants whose anchor had been 500 ± 100, 700 ± 100, ..., 1300 ± 100. They found that their participants' mean estimates increased linearly with the provided anchor even though the correct value was A.D. 451.

To simulate this experiment, we determined the values of μ and σ by the elicitation method described above. Since the variability of people's estimates and confidence intervals was very high, we increased the sample size of this one experiment to 200. We set the model parameters to the values estimated from the provided anchor experiments by Jacowitz and Kahneman (1995) (see above). As Fig. 10 shows, our model correctly predicted that people's estimates increase linearly with the provided anchor (Russo & Schoemaker, 1989). To determine whether the quantitative differences between the model predictions and the data reported by Russo and Schoemaker (1989) were due to differences between business students in 1989 and people working on Mechanical Turk in 2014, we ran an online replication of their experiment on Mechanical Turk with 300 participants. There appeared to be no significant difference between the estimates of the two populations. However, people's estimates were highly variable. Consequently, the error bars on the mean estimates are very large.

Fig. 10  Simulated effect of the anchor on people's estimates of the year of Attila's defeat and empirical data from Russo and Schoemaker (1989).

Taking into account the high variance in people's judgments, our simulation results are largely consistent with the empirical data. In particular, both Russo and Schoemaker's data and our replication confirm our model's qualitative prediction that the magnitude of the anchoring bias increases linearly with the anchor, although our model's prediction for the highest anchor was more extreme than the average judgment.
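The qualitative pattern in this experiment follows directly from Eq. 4: with a relative adjustment below 100%, the predicted mean estimate is a convex combination of anchor and correct value, so it increases linearly with the anchor. The sketch below illustrates this with assumed values of the convergence rate r and number of adjustments t (these are not the fitted parameters reported above).

```python
# Qualitative illustration of Eq. 4 for the Attila question. r and t are
# assumed values chosen only for illustration, not fitted parameters.
r, t = 0.95, 25
retained = r**t                  # fraction of the anchor's influence that remains
x_true = 451                     # A.D. 451, the correct value
for anchor in (500, 700, 900, 1100, 1300):
    predicted = anchor * retained + (1 - retained) * x_true
    print(anchor, round(predicted))   # predicted mean estimate rises linearly with the anchor
```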
The effects of uncertainty and knowledge

Several experiments have found that the anchoring bias is larger the more uncertain people are about the quantity to be estimated (Wilson et al., 1996; Jacowitz & Kahneman, 1995). To assess whether and how well our theory can explain this effect, we re-analyzed our simulation of the experiment by Jacowitz and Kahneman (1995) reported above. Concretely, we computed the correlation between the uncertainties σ of the modeled beliefs about the 15 quantities and the predicted anchoring indices. We found that resource-rational anchoring-and-adjustment predicted that adjustments decrease with uncertainty. Concretely, the anchoring index that our model predicted for each quantity X was significantly correlated with the assumed uncertainty (standard deviation σ) about it (Spearman's ρ = 0.5857, p = 0.0243). This is a direct consequence of our model's probabilistic acceptance or rejection of proposed adjustments on a flat (high uncertainty) versus sloped (low uncertainty) belief distribution P(X|K) = N(μ, σ). Our model thereby explains the negative correlation (r(13) = −0.68) that Jacowitz and Kahneman (1995) observed between confidence ratings and anchoring indices.

Uncertainty reflects the lack of relevant knowledge. Thus people who are knowledgeable about a quantity should be less uncertain and consequently less susceptible to anchoring. Wilson et al. (1996) conducted an anchoring experiment in which people first compared the number of countries in the United Nations (UN) to an anchor, then estimated how many countries there are in the UN, and finally rated how much they know about this quantity. They found that people who perceived themselves as more knowledgeable were resistant to the anchoring bias whereas people who perceived themselves as less knowledgeable were susceptible to it. Here, we asked whether our model can explain this effect by smaller adjustments due to higher uncertainty. To answer this question, we recruited 60 participants on Mechanical Turk, asked them how much they knew about the number of nations in the UN on a scale from 0 ("nothing") to 9 ("everything"), and elicited their beliefs by the method described in Appendix C. We then partitioned our participants into a more knowledgeable and a less knowledgeable group by a median split as in Wilson et al. (1996). We model the beliefs elicited from the two groups by two separate normal distributions (Appendix C). We found that the high-knowledge participants were less uncertain than the low-knowledge participants (σ_high = 35.1 vs. σ_low = 45.18). Furthermore, their median estimate was much closer to the true value of 193 (μ_high = 185 vs. μ_low = 46.25). We fit the relative adjustments from the anchor provided in Wilson et al.'s experiment (an anchor of 1930) by the least-squares method as above. With the estimated parameters (17 adjustments, step-size 488.2) the model's predictions captured the effect of knowledge: For the low-knowledge group the model predicted that providing the high anchor would raise their average estimate from 45.18 to 252.1. By contrast, for the high-knowledge group our model predicted that providing a high anchor would fail to increase people's estimates (185 without anchor, 163 with high anchor).

Differential effects of accuracy motivation

People tend to invest more mental effort when they are motivated to be accurate.
To motivate participants to be accurate, some experiments employ financial incentives for accuracy, while others warn their participants about potential errors that should be avoided (forewarnings). Consistent with the effect of motivation, resource-rational anchoring-and-adjustment predicts that the number of adjustments increases with the relative cost of error. Yet, financial incentives for accuracy reduce the anchoring bias only in some circumstances but not in others: First, the effect of incentives appeared to be absent when anchors were provided but present when they were self-generated (Tversky & Kahneman, 1974; Epley & Gilovich, 2005). Second, the effect of incentives was found to be larger when people were told rather than asked whether the correct value is smaller or larger than the anchor (Simmons et al., 2010). Here, we explore whether and how these interaction effects can be reconciled with resource-rational anchoring-and-adjustment.

Smaller incentive effects for provided anchors than for self-generated anchors

Epley and Gilovich (2005) found that financial incentives and forewarnings decreased the anchoring bias when the anchor was self-generated but not when it was provided by the experimenter. From this finding Epley and Gilovich (2005) concluded that people use anchoring-and-adjustment only when the anchor is self-generated but not when it is provided. By contrast, Simmons et al. (2010) suggested that this difference may be mediated by people's uncertainty about whether the correct answer is larger or smaller than the anchor. They found that people are often uncertain in which direction they should adjust in questions used in experiments with provided anchors; so this may be why incentives for accuracy failed to reduce the anchoring bias in those experiments. Here we show that resource-rational anchoring-and-adjustment can capture the differential effectiveness of financial incentives in experiments with provided versus self-generated anchors. First, we show through simulation that, given the amount of uncertainty that people have about the quantities to be estimated, our model predicts a larger effect of accuracy motivation for the self-generated anchor experiments by Epley and Gilovich (2005) than for the provided anchor experiments by Tversky and Kahneman (1974) and Epley and Gilovich (2005).

First, we analyze people's beliefs about the quantities used in experiments with provided versus self-generated anchors with respect to their uncertainty. We estimated the mean μ and standard deviation σ of people's beliefs about each quantity X by the elicitation method described above. Because the quantities' values differ by several orders of magnitude, it would be misleading to compare the standard deviations directly. For example, for the population of Chicago (about 2,700,000 people) a standard deviation of 1,000 would express near-certainty, whereas for the percentage of countries in the UN the same standard deviation would express complete ignorance. To overcome this problem, the standard deviation has to be evaluated relative to the mean. We therefore compare uncertainties in terms of the signal-to-noise ratio (SNR). We estimated the SNR by the median of the signal-to-noise ratios of our participants' beliefs (SNR_s = μ_s² / σ_s²).
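For reference, the SNR measure just defined is simply the median of the squared mean-to-standard-deviation ratio across participants' elicited beliefs; a minimal sketch with hypothetical elicited values:

```python
import numpy as np

def median_snr(means, sds):
    """Median signal-to-noise ratio SNR_s = mu_s^2 / sigma_s^2 across participants'
    elicited beliefs about one quantity."""
    means, sds = np.asarray(means, float), np.asarray(sds, float)
    return float(np.median(means**2 / sds**2))

# Hypothetical elicited beliefs (one mean and SD per participant) for a quantity
# people know well versus one they know poorly; higher SNR means lower relative uncertainty.
print(median_snr([193, 200, 185, 190], [30, 40, 35, 50]))
print(median_snr([2e6, 5e6, 1e6, 8e6], [2e6, 4e6, 3e6, 6e6]))
```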
We found that people tended to be much more certain about the quantities Epley and Gilovich (2005) used in their self-generated anchor experiments (median SNR: 21.03) than about those for which they provided anchors (median SNR: 4.58). A Mann-Whitney U-test confirmed that the SNR was significantly higher for self-generated anchoring questions than for questions with provided anchors (U(18) = 74.0, p = 0.0341).

Given that people were more uncertain about the quantities used in the experiments with provided anchors, we investigated how this difference in uncertainty affects the effect of financial incentives on the anchoring bias predicted by our resource-rational model. To do so, we simulated Study 1 from Epley and Gilovich (2005), in which they compared the effects of financial incentives between questions with self-generated versus provided anchors, and the provided anchors experiment by Tversky and Kahneman (1974). To assess whether our model can explain why the effect of motivation differs between questions with provided versus self-generated anchors, we evaluated the effects of motivation as follows: First, we fit our model to the data from the condition with self-generated anchors. Second, we use the estimated numbers of adjustments to simulate responses in the condition with provided anchors. Third, for each question, we measured the effect of motivation by the relative adjustment with incentives minus the relative adjustment without incentives. Fourth, we averaged the effects of motivation separately for all questions with self-generated versus provided anchors and compared the results.

We fit the relative adjustments on the questions with self-generated anchors with one step-size parameter and two relative time-cost parameters: The estimated step-size was 17.97. The estimated number of adjustments was 5 for the condition without incentives and 9 for the condition with incentives. According to these parameters, motivation increased the relative adjustment from self-generated anchors by 12.74%, from 65.62% to 78.35%. This is consistent with the significant effect of 33.01% more adjustment that Epley and Gilovich (2005) observed for questions with self-generated anchors. For the condition with provided anchors Epley and Gilovich (2005) used four questions from the experiment by Jacowitz and Kahneman (1995) simulated above and the same incentives as in the questions with self-generated anchors. We therefore simulated people's responses to questions with provided anchors using the step-size estimated from the data by Jacowitz and Kahneman (1995) and the number of adjustments estimated from questions with self-generated anchors. Our simulation correctly predicted that incentives for accuracy fail to increase adjustment from provided anchors. Concretely, our simulation predicted 44.09% adjustment with incentives and 44.48% without. Thus, as illustrated in Fig. 11, our model captures that financial incentives increased adjustment from self-generated anchors but not from provided anchors. According to our model, this difference is just an artifact of the confound that people know more about the quantities used in experiments with self-generated anchors than about the quantities used in experiments with provided anchors.

Finally, we simulated Study 2 from Epley and Gilovich (2005), in which they compared the effect of warning participants about the anchoring bias between questions with provided versus self-generated anchors. This study had 2 (self-generated anchors vs.
provided anchors) × 2 (forewarnings vs. no forewarnings) conditions. Epley and Gilovich (2005) found that in the conditions with self-generated anchors forewarnings increased adjustment, but in the conditions with provided anchors they did not. As before, we set the model's beliefs about the quantities used in this experiment using the elicitation method described above. We fit our model to the relative adjustments in the conditions with self-generated anchors. Concretely, we used the least-squares method to fit one step-size parameter and two time-cost parameters: one for the condition with forewarnings and one for the condition without forewarnings. With these parameters, we simulated people's estimates in the conditions with self-generated anchors (to which the parameters were fit) and predicted the responses in the provided anchor conditions that we had not used for parameter estimation.

Fig. 11  Simulation of Study 1 from Epley and Gilovich (2005): Predicted effects of financial incentives on the adjustment from provided versus self-generated anchors

According to the estimated parameters, forewarnings increased the number of adjustments from 8 to 28. We therefore simulated the responses in both conditions with forewarnings (provided and self-generated anchor questions) with 28 adjustments and all responses in the two conditions without forewarnings (provided and self-generated anchor questions) with 8 adjustments. For the questions with self-generated anchors, forewarnings increased the simulated adjustments by 30%, from an insufficient 81% to an overshooting 111% of the total distance from the anchor to the correct value (overshooting is possible because the expected value of the estimated belief P(X|K) = N(μ, σ) can be farther away from the anchor than the correct value). By contrast, for questions with provided anchors forewarnings increased the simulated adjustments by only 12.5%, from 6.9% to 19.4%. Thus, assuming that forewarnings increase the number of adjustments from provided anchors by the same number as they increase adjustments from self-generated anchors, our model predicts that their effect on people's estimates would be less than one third of the effect for self-generated anchors; see Fig. 12. According to our model, the reason is that people's uncertainty about the quantities for which anchors were provided is so high that the effect of additional adjustments is much smaller than in the questions for which people can readily generate their own anchors. Our results are consistent with the interpretation that the absence of a statistically significant effect of forewarnings on the bias towards the provided anchors in the small sample of Epley and Gilovich (2005) does not imply that the number of adjustments did not increase. Therefore adjustment from provided anchors cannot be ruled out.

Fig. 12  Simulation of Study 2 from Epley and Gilovich (2005): Predicted effects of forewarnings for questions from experiments with provided versus self-generated anchors

Direction uncertainty masks the effect of incentives

Simmons et al. (2010) found that accuracy motivation decreases anchoring if people are confident about whether the quantity is larger or smaller than the anchor but not when they are very uncertain. Simmons et al.
(2010) showed that even when the anchor is provided, incentives for accuracy can reduce the anchoring bias provided that people are confident about the correct direction of adjustment. Concretely, Simmons et al.'s second study unmasked the effect of incentives on adjustment from provided anchors by telling instead of asking their participants whether the true value is larger or smaller than the anchor. Similarly, in their third study, Simmons et al. (2010) found that the effect of incentives is larger when the provided anchor is implausibly extreme than when it is plausible. Here we report simulations of both of these effects.

First, we show that our model can capture that the effect of incentives increases when people are told the correct direction of adjustment. Simmons et al.'s second study measured the effect of accuracy motivation on the anchoring index as a function of whether people were asked or told if the correct value is larger or smaller than the anchor. We modeled the effect of being told that the quantity X is smaller or larger than the anchor a by Bayesian updating of the model's belief about X from P(X|K) to P(X|K, X < a) and P(X|K, X > a), respectively. The original beliefs P(X|K) were determined by the elicitation method described in Appendix C. We fit the model simultaneously to all anchoring indices by ordinary least squares to estimate one step-size parameter and one number of adjustments for each incentive condition. According to the estimated parameters, incentives increased the number of adjustments from 5 to 1000 and the average adjustment step-size was 11.6 units. For both incentive conditions, our model captured the variability of adjustments across trials: For trials with incentives for accuracy the correlation between simulated and measured anchoring indices was r(18) = 0.77 (p = 0.0001), and for trials without incentives this correlation was r(18) = 0.61 (p = 0.004). Our model also captured the overall reduction of anchoring with incentives for accuracy observed by Simmons et al. (2010), although the predicted 42% reduction of anchoring with incentives for accuracy was quantitatively larger than the empirical effect of 8%. Most importantly, our model predicted the effects of direction uncertainty on adjustment and its interaction with accuracy motivation: First, our model predicted that adjustments are larger if people are told whether the correct value is larger or smaller than the anchor. The predicted 13.7% reduction in the anchoring index was close to the empirically observed reduction of 18.8%. Second, our model predicted that the effect of accuracy motivation will be 6.3% larger when people are told the direction of adjustment. The predicted effect of direction uncertainty is smaller than the 21% increase reported by Simmons et al. (2010) but qualitatively consistent. Therefore, our model can explain why telling people whether the correct value is larger or smaller than the anchor increases the effect of accuracy motivation. According to our model, financial incentives increase the number of adjustments in both cases, but knowing the correct direction makes adjustment more effective by eliminating adjustments in the wrong direction.

Second, we simulated Study 3b of Simmons et al. (2010), in which they showed that financial incentives increase adjustments away from implausible anchors. Concretely, this study compared the effect of accuracy motivation on adjustments between plausible versus implausible provided anchors.
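To make the conditioning step above concrete before turning to that simulation, the sketch below conditions a discretized normal belief on being told that the true value is smaller or larger than the anchor. The grid resolution and example numbers are illustrative assumptions, not fitted model parameters.

```python
import numpy as np
from scipy.stats import norm

def conditioned_belief(mu, sigma, anchor, direction, n_points=201):
    """Discretize P(X|K) = N(mu, sigma) and condition on X < anchor or X > anchor."""
    grid = np.linspace(mu - 4 * sigma, mu + 4 * sigma, n_points)
    probs = norm.pdf(grid, mu, sigma)
    if direction == "smaller":      # participant is told the true value is smaller than the anchor
        probs = np.where(grid < anchor, probs, 0.0)
    elif direction == "larger":     # participant is told the true value is larger than the anchor
        probs = np.where(grid > anchor, probs, 0.0)
    return grid, probs / probs.sum()

# Example with invented numbers: a belief conditioned on "the true value is larger than 2000"
grid, posterior = conditioned_belief(mu=1625, sigma=752, anchor=2000, direction="larger")
print(grid[np.argmax(posterior)])   # most plausible hypothesis after conditioning
```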
As before, we determined the model's beliefs by the procedure described above and estimated the number of adjustments with and without incentives (781 and 188) and the adjustment step-size (0.01) by fitting the reported relative adjustments by ordinary least squares. (The reason that the estimated step-size is so small appears to be that all quantities and distances in this experiment are small compared to those in other experiments such as Study 2 by the same authors. The increase in the number of adjustments appears to compensate for the reduced step-size.) With this single set of parameters we simulated adjustments from plausible versus implausible provided anchors. The predicted adjustments captured a statistically significant proportion of the effects of anchor type, motivation, and quantity on the size of people's adjustments: ρ(22) = 0.72, p < 0.0001. Most importantly, our simulations predicted no statistically significant effect of accuracy motivation on absolute adjustment (mean effect: 0.76 units; 95% CI: [−0.42; 1.94]) when the anchor was plausible but a substantially larger and statistically significant effect when the anchor was implausible (17.8 units; 95% CI: [9.76; 25.91]); see Fig. 13. This prediction results from the fact that large adjustments away from plausible anchors will often be rejected because they decrease the estimate's plausibility, and small adjustments in the wrong direction are almost as likely to be accepted as adjustments in the correct direction because values on either side of the plausible anchor are almost equally plausible if the distribution is symmetric around its mode. Thus the expected change per adjustment is rather small.

Fig. 13  Simulation of Experiment 3 from Simmons et al. (2010): Predicted effect of accuracy motivation on adjustments from plausible versus implausible provided anchors

In conclusion, resource-rational anchoring-and-adjustment can explain why motivating participants to be accurate reduces the anchoring bias in some circumstances but not in others. In a nutshell, our model predicts that incentives for accuracy have little effect when adjustments in either direction hardly change the estimate's plausibility. The simulations reported above demonstrate that this principle is sufficient to explain the differential effect of accuracy motivation on adjustments from provided versus self-generated anchors. Therefore, a single process – resource-rational anchoring-and-adjustment – may be sufficient to explain anchoring on provided and self-generated anchors.

Summary

Our resource-rational analysis of numerical estimation showed that under-adjusting an initial estimate can be a rational use of computational resources. The resulting model can explain ten different anchoring phenomena: insufficient adjustments from both provided and self-generated anchors, the effects of cognitive load, anchor extremity, uncertainty, and knowledge, as well as the differential effects of forewarnings and financial incentives depending on anchor type (provided vs. self-generated), anchor plausibility, and being asked versus being told whether the quantity is smaller or larger than the anchor (see Table 1). None of the previous models (Epley and Gilovich, 2006; Simmons et al., 2010) was precise enough to make quantitative predictions about any of these phenomena, let alone precisely predict all of them simultaneously.
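The asymmetry between plausible and implausible anchors described above can be illustrated with a minimal sketch of the kind of adjustment process assumed throughout these simulations: a Metropolis-style random walk over a discretized normal belief with Poisson-distributed proposal steps (cf. Appendix C). All numerical values below are illustrative rather than fitted parameters.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def adjust(anchor, mu, sigma, n_adjustments, mean_step=2):
    """Metropolis-style anchoring-and-adjustment over a grid of hypotheses (sketch)."""
    grid = np.union1d(np.arange(mu - 3 * sigma, mu + 3 * sigma, sigma / 20), [anchor])
    log_p = norm.logpdf(grid, mu, sigma)          # plausibility of each hypothesis
    i = int(np.argmin(np.abs(grid - anchor)))     # start at the anchor
    for _ in range(n_adjustments):
        step = rng.poisson(mean_step) * rng.choice([-1, 1])   # proposed adjustment (in grid steps)
        j = int(np.clip(i + step, 0, len(grid) - 1))
        if np.log(rng.random()) < log_p[j] - log_p[i]:        # Metropolis acceptance rule
            i = j
    return grid[i]

mu, sigma = 100.0, 10.0                            # belief about the quantity: N(100, 10)
for anchor in (95.0, 160.0):                       # plausible vs. implausible anchor
    few = np.mean([adjust(anchor, mu, sigma, 5) for _ in range(300)])
    many = np.mean([adjust(anchor, mu, sigma, 50) for _ in range(300)])
    print(f"anchor {anchor}: mean estimate after 5 adjustments {few:.1f}, after 50 adjustments {many:.1f}")
```

Running this sketch shows that extra adjustments barely move estimates that start at a plausible anchor, while they substantially reduce the bias towards an implausible one.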
The close match between our simulation results and human behavior suggests that resource-rational anchoring-and-adjustment provides a unifying explanation for a wide range of disparate and apparently incompatible phenomena in the anchoring literature. Our model was able to reconcile these effects by capturing how the effect of adjustment depends on the location and shape of the posterior distribution describing the participants' belief about the quantity to be estimated. For instance, our model reconciles the apparent ineffectiveness of financial incentives at reducing the bias towards provided anchors (Tversky & Kahneman, 1974) with their apparent effectiveness at reducing bias when the anchor is self-generated (Epley & Gilovich, 2005). To resolve this apparent contradiction, we did not have to postulate additional processes that operate only when the anchor is provided – unlike Epley and Gilovich (2006). Instead, our computational model directly predicted this difference from people's higher uncertainty about the quantities used in experiments with provided anchors, because when the uncertainty is high then adjustments in the wrong direction are more likely to be accepted. Our model thereby provides a more parsimonious explanation of these effects than the proposal by Epley and Gilovich (2006). While Simmons et al. (2010) offered a conceptual explanation along similar lines, our model predicted the exact sizes of these effects a priori.

The parameter estimates we obtained differed significantly across the simulated phenomena. This is partly due to differences in the incentives and other experimental manipulations. Additional reasons for the variability in the parameter estimates are somewhat arbitrary differences in the resolution of the hypothesis spaces across different quantities and the interdependence between the average change per adjustment and the number of adjustments: the same amount of adjustment can be explained either by a small number of large steps or a large number of small steps. For some experiments maximum likelihood estimation chose the former interpretation and for others it chose the latter. But because a larger step size can compensate for a smaller number of adjustments, it is quite possible that the model could have explained all of the findings with a very similar step size and number of adjustments if we knew the structure and resolution of people's hypothesis spaces for the quantities used in each experiment. Although the model's parameters were unknown and had to be estimated to make quantitative predictions, all of the qualitative phenomena we simulated logically follow from the structure of the model itself. In this sense, our model did not just capture the simulated phenomena but predicted them. Most importantly, our theory reconciles the apparently irrational effects of potentially irrelevant numbers with people's impressive capacity to efficiently handle a large number of complex problems full of uncertainty in a short amount of time.

General discussion

Anchoring and adjustment is one of the classic heuristics reported by Tversky and Kahneman (1974), and it seems hard to reconcile with rational behavior. In this article, we have argued that this heuristic can be understood as a signature of resource-rational information processing rather than a sign of human irrationality.
We have supported this conclusion by a resource-rational analysis of numerical estimation and simulations of ten anchoring phenomena with a resource-rational process model. We showed that anchoring-and-adjustment can be interpreted as a Markov chain Monte Carlo algorithm – a rational approximation to rational inference. We found that across many problems the optimal speed-accuracy tradeoff of this algorithm entails performing so few adjustments that the resulting estimate is biased towards the anchor. Our simulations demonstrated that resource-rational anchoring-and-adjustment, which adaptively chooses the number of adjustments to maximize performance net the cost of computation, provides a unifying explanation for ten different anchoring phenomena (see Table 1).

Although we explored the implications of limited time and finite cognitive resources assuming an abstract computational architecture based on sampling, we do not claim that the brain implements the sampling algorithm we analyzed above. Instead, our goal was to illustrate general properties of resource-rational information processing. Many other iterative inference mechanisms also have the property of diminishing returns for additional computation that our analysis is based on. Hence, the qualitative predictions shown in Figs. 3–6 characterize bounded rationality for a more general class of cognitive architectures. Importantly, this class includes biologically plausible neural network implementations of Bayesian inference (Habenschuss et al., 2013; Friston and Kiebel, 2009; Friston, 2009) and mechanisms that implement the general principles of our model in a more psychologically plausible fashion. For instance, while our model's assumption that people can evaluate the exact likelihood of the observed data under each sampled hypothesis is questionable, our analysis also applies to sampling methods that approximate the likelihood through simulation (Turner & Sederberg, 2012; Sunnåker et al., 2013). Likewise, while we do not propose a neural implementation of probabilistic inference, our analysis also applies to Markov chain Monte Carlo algorithms implemented in cortical microcircuits (Habenschuss et al., 2013), stochastic gradient descent, and the predictive coding implementation of variational inference postulated by the free-energy principle (Friston & Kiebel, 2009; Friston, 2009). Therefore, our results support the adaptive allocation of finite computational resources and the resource-rationality of bias regardless of the specific cognitive mechanism that people use to draw inferences.

In the remainder of this paper we will discuss the implications of our results for general theoretical questions. We start by discussing how our model is related to previous theories of anchoring and how they can be integrated into our resource-rational framework. We then turn to two questions about rationality: First, we discuss existing evidence for the hypothesis that anchors are chosen resource-rationally and how it can be tested in future experiments. Second, we argue that resource-rationality, the general theory we have applied to explain the anchoring bias, provides a more adequate normative framework for cognitive strategies than classical notions of rationality. We close with directions for future research.
Relation to previous theories of anchoring and adjustment

Previous models of anchoring-and-adjustment (Epley & Gilovich, 2006; Simmons et al., 2010) assumed that adjustment terminates when the plausibility of the current estimate exceeds a threshold. From an information processing perspective, the limitation of models postulating that adjustment stops when plausibility exceeds a threshold is that there is no single threshold that works well across all estimation problems. Depending on the level of uncertainty, successful estimation requires different thresholds. A threshold that is appropriate for low uncertainty will result in never-ending adjustment in a problem with high uncertainty. Conversely, a threshold that is appropriate for a problem with high uncertainty would be too liberal when the uncertainty is low. In addition, Simmons et al. (2010) postulate that people reason about the direction of their adjustment whereas resource-rational anchoring-and-adjustment does not. It would be interesting to see whether an extension of our model that incorporates directional information would perform better in numerical estimation and better predict human behavior. We will return to this idea when we discuss directions for future research.

According to the selective-accessibility theory of anchoring (Strack and Mussweiler, 1997), comparing an unknown quantity to the provided anchor increases the accessibility of anchor-consistent knowledge, and the heightened availability of anchor-consistent information biases people's estimates. There is no quantitative mathematical model of selective accessibility that could be tested against our resource-rational anchoring-and-adjustment model using the data we have collected. The evidence that some anchoring biases result from selective accessibility (Strack & Mussweiler, 1997) does not undermine our analysis, because the existence of selective accessibility would not rule out the existence of anchoring-and-adjustment and vice versa. In fact, from the perspective of resource-rational probabilistic inference a mechanism similar to selective accessibility is likely to coexist with anchoring-and-adjustment. Concretely, we have formalized the problem of numerical estimation of some quantity X as minimizing the expected error cost of the estimate x̂ with respect to the posterior distribution P(X|K), where K is the entirety of the person's relevant knowledge. This problem can be decomposed into two sub-problems: conditioning on relevant knowledge to evaluate (relative) plausibility and searching for an estimate with high plausibility. It appears unlikely that the mind can solve the first problem by simultaneously retrieving and instantly incorporating each and every piece of knowledge relevant to estimating X. Instead, the mind might have to sequentially recall and incorporate pieces K^(1), K^(2), K^(3), ... of its knowledge to refine P(X) to P(X|K^(1)) to P(X|K^(1), K^(2)) to P(X|K^(1), K^(2), K^(3)), and so forth. This process could be modeled using a sequential Monte Carlo algorithm (Doucet et al., 2001) and bounded conditioning (Horvitz et al., 1989). Furthermore, it would be wasteful not to consider the knowledge that has been retrieved to answer the comparison question in the estimation task and impossible to retrieve all of the remaining knowledge. Selective accessibility may therefore result from the first process.
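The sequential-conditioning idea can be illustrated directly: the sketch below refines a discretized belief one recalled piece of knowledge at a time. The likelihood functions standing in for K^(1), K^(2), K^(3) are invented for illustration and are not a model of selective accessibility.

```python
import numpy as np
from scipy.stats import norm

grid = np.linspace(0, 400, 401)                 # hypotheses about some quantity X
belief = np.full(grid.shape, 1.0 / grid.size)   # initial belief P(X) before recalling anything

# Each recalled piece of knowledge K(i) is represented here by a likelihood over the grid.
# These likelihoods are purely illustrative stand-ins.
knowledge = [
    norm.pdf(grid, loc=250, scale=80),   # K(1): a rough sense of the quantity's magnitude
    norm.pdf(grid, loc=200, scale=40),   # K(2): a more specific recalled fact
    norm.pdf(grid, loc=210, scale=20),   # K(3): another relevant comparison
]

for i, likelihood in enumerate(knowledge, start=1):
    belief = belief * likelihood         # refine P(X | K(1), ..., K(i)) up to normalization
    belief = belief / belief.sum()
    print(f"after K({i}): posterior mean = {np.sum(grid * belief):.1f}")
```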
Yet, regardless of how the first problem is solved, the mind still needs to search for an estimate x̂ with high posterior probability, and this search process might be implemented by something like anchoring-and-adjustment. Furthermore, the knowledge retrieved in the first step might also guide the generation of an anchor. Importantly, both processes are required to generate an estimate. Therefore, we agree with Simmons et al. (2010) that selective accessibility and anchoring-and-adjustment might coexist and both of them might contribute to the anchoring bias.

Like the model by Simmons et al. (2010), our theory deviates from Epley and Gilovich (2005) by suggesting that anchoring-and-adjustment is a unifying mechanism for the anchoring biases observed for self-generated as well as provided anchors. Our simulations show that this assertion is compatible with the results reviewed by Epley and Gilovich (2006) because the effect of financial incentives declines with the uncertainty about the quantity to be estimated. This explanation is similar to the argument by Simmons et al. (2010), but our formal model does not need to assume that people reason about the direction of their adjustments.

Our model is consistent with the recently proposed anchor integration model (Turner and Schley, 2016). Both models describe the effect of the anchor in terms of Bayesian inference, but while the anchor integration model is agnostic about the mechanism by which the anchor affects people's judgments and whether or not this is rational, we have developed a rational process model.

In summary, our resource-rational analysis of estimation problems sheds new light on classic notions of anchoring-and-adjustment (Tversky & Kahneman, 1974; Epley & Gilovich, 2006), explaining why they work and why people use them. Furthermore, our framework is sufficiently general to incorporate and evaluate the additional mechanisms proposed by Simmons et al. (2010) and Strack and Mussweiler (1997) and many others. Exploring these extensions is an interesting direction for future work.

Are anchors chosen rationally?

Anchoring-and-adjustment has two components: generating an anchor and adjusting from it. Our simulations supported the conclusion that adjustment is resource-rational. Thus, a natural next question is whether anchors are also generated resource-rationally.

Self-generated anchors are usually close to the correct value, but provided anchors can be far off. For instance, it appears irrational that people can be anchored on their social security number when they estimate how much they would be willing to pay for a commodity (Ariely et al., 2003). Yet, the strategy failing people in this specific instance may nevertheless be resource-rational overall, for at least four reasons: First, it is sensible to assume that the experimenter is reasonable and cooperative. Therefore her utterances should follow the Gricean maxims. Specifically, according to Grice's maxim of relation the stated anchor should be relevant (Zhang & Schwarz, 2013). Furthermore, as a rational information-seeking agent the experimenter should ask the question whose answer will be most informative. The most informative anchor to compare the true value to would be at the center of the experimenter's belief distribution. This too suggests that it is reasonable to treat the provided anchor as a starting point.
A weaker version of this argument might apply even to the experiment in which Tversky and Kahneman (1974) asked participants to compare the number of African countries in the UN to a randomly generated number: It seems reasonable for participants to assume that the experimenter would not be asking them whether the number of African countries in the UN is larger or smaller than the number on the wheel of fortune if the answer was obvious to him. Hence, assuming the logic of conversation, the fact that the experimenter did ask would suggest that the number was within the range of values he considered plausible. Under these assumptions, the question constitutes an informative endorsement of the anchor regardless of how it was generated. This makes it reasonable to use that value as a starting point. Second, subsequent thoughts and questions are usually related, so it is reasonable to use the answer to a preceding question as the starting point for the next thought. This holds for sequences of arithmetic operations such as 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1, for which people anchor on their intermediate results when they are forced to respond early (Tversky & Kahneman, 1974), and in many other cases too. Third, when the provided anchor is the only number available in working memory, then using it may be faster and require less effort than generating a new one. This assumption is consistent with evidence for spreading-activation theories of semantic processing and memory retrieval (Collins & Loftus, 1975; Neely, 1977; Anderson, 1983). For instance, when primed with one word people are faster to recognize words that are associated with the prime than words that are not (Neely, 1977; Collins & Loftus, 1975). The spreading of activation to associated mental representations appears to be fast and automatic (Neely, 1977) and inhibiting it would be slow and effortful (Diamond, 2013). Furthermore, according to spreading-activation theories of memory recall (Anderson, 1983; Neely, 1977) and rational process models of memory search (Bourgin et al., 2014; Abbott et al., 2015), the generation of a new anchor from memory might be subject to the same limitations as the adjustment process itself. Last but not least, one's beliefs may be wrong and the anchor may be more accurate. This was the case in Russo and Schoemaker's experiment: People overestimated the year in which Attila the Hun was defeated in Europe so much that the anchor was usually closer to the correct value (A.D. 451) than the mean of their unbiased estimates (A.D. 953.5).

For these reasons, the observation that people anchor on irrelevant values provided in psychological experiments does not imply that anchors are selected irrationally. Anchor selection could be well adapted to the real world. Consequently, anchoring biases in everyday reasoning would be much more benign than those observed in the laboratory. This is probably true, because most anchoring experiments violate people's expectation that the experimenter will provide relevant information, provide negligible incentives for accuracy, and ask people to estimate quantities about which they know very little.

There also is empirical evidence suggesting that people do not always use the provided value as their anchor. For instance, in the experiment by Strack and Mussweiler (1997) the provided anchor influenced the participants' estimates only when it was semantically related to the quantity to be estimated.
Pohl (1998) found that the anchoring bias was absent when the anchor was perceived as implausible, and Hardt and Pohl (2003) found that the bias was smaller on trials where the anchor's judged plausibility was below the median plausibility judgment. Thus, at least under some circumstances, people appear to discard the provided value when it appears irrelevant or misleading.

However, realizing that the provided anchor is implausible and generating a better anchor require knowledge, effort, and time. Therefore, when people are asked to estimate a quantity they know almost nothing about, it may be resource-rational for them to anchor on whatever the experimenter suggested. This seems applicable to most anchoring experiments, because participants are usually so uncertain that they do not even know in which direction to adjust from the provided anchor (Simmons et al., 2010). If you cannot even tell whether the correct value is larger or smaller than the anchor, how could you generate a better one? The effect of the anchor is largest in people with little knowledge and high uncertainty about the quantity to be estimated (Jacowitz & Kahneman, 1995; Wilson et al., 1996). These people would benefit from a better anchor, but they cannot easily generate one, because they lack the relevant knowledge. Conversely, our simulation of the effect of knowledge suggested that people knowledgeable enough to generate good anchors will perform well even if they start from a highly implausible anchor. Although this argument is speculative and has yet to be made precise, it suggests that, at least in some situations, self-generating an anchor might not be worth the effort regardless of one's knowledge.

In conclusion, existing data are not necessarily inconsistent with the idea that anchors are chosen resource-rationally. Thus, whether anchors are chosen rationally is still an open question. Experimental and theoretical approaches to this question are an interesting avenue for future research that we will discuss below.

Resource-rationality: A better normative standard for human cognition?

When people estimate probabilities, the anchoring bias and other cognitive biases can cause their judgments to violate the laws of probability. This could be interpreted as a sign of human irrationality. However, adherence to the laws of logic and probability is just one of many notions of rationality. Existing definitions of rationality differ along four dimensions: The first distinction is whether rationality is defined in terms of beliefs (theoretical rationality) or actions (practical rationality; Harman, 2013; Sosis & Bishop, 2014). The second distinction is whether rationality is judged by the reasoning process or its outcome (Simon, 1976). Third, some notions of rationality take into account that the agent's computational capacity is bounded whereas others do not (Lewis et al., 2014; Russell, 1997). Fourth, rationality may be defined either by the agent's performance on a specific task or by its average performance in its natural environment (ecological rationality; Chater & Oaksford, 2000; Gigerenzer, 2008; Lewis et al., 2014).

In this taxonomy, Tversky and Kahneman's notion of rationality can be classified as theoretical, process-based, unbounded, and task-specific rationality. It is a notion of theoretical rationality, because it evaluates beliefs rather than actions.
It is a form of process rationality, because it evaluates people by how they reason, specifically by whether or not their thoughts follow the rules of logic and probability theory. It is a notion of rationality for unbounded agents because it ignores the computational complexity of logical and probabilistic inference (Van Rooij, 2008). It is task-specific because it evaluates human rationality by people's performance on laboratory tasks specifically designed to elicit errors rather than representative everyday reasoning. We have argued that this is an unsuitable metric of human rationality and proposed a concrete alternative: resource-rationality. Resource-rationality differs from classical rationality along three of the four dimensions: First, it evaluates reasoning by its utility for subsequent decisions rather than by its formal correctness; this makes it an instance of practical rather than theoretical rationality. For instance, we evaluated anchoring-and-adjustment not by the correctness of the resulting estimates but by the rewards that people earned by using those estimates. Second, it agrees with Tversky and Kahneman's approach in that resource-rationality is an attribute of the process that generates conclusions and decisions. Third, it takes into account the cost of time and the boundedness of people's cognitive resources. Fourth, resource-rationality is defined with respect to the distribution of problems in the agent's environment rather than a set of arbitrary laboratory tasks. Arguably, all three of these changes are necessary to obtain a normative – yet realistic – theory of human rationality. This new metric of rationality allowed us to re-evaluate the anchoring bias as a consequence of resource-rational computation rather than irrationality. Heuristics and rational models are often seen as opposites, but once the cost of computation is taken into account heuristics can be resource-rational. This illustrates the potential of resource-rational analysis to reconcile cognitive biases, such as the anchoring bias, with the fascinating capacities of human intelligence, and to connect rational theories, such as Bayesian models of cognition and rational analysis, to heuristics and other psychological process models (Griffiths et al., 2015).

Resource-rational analysis is closely related to other theoretical frameworks for analyzing cognition. The most closely related one is the computational rationality approach proposed by Lewis et al. (2014), which draws the same inspiration from Russell's work but focuses on finding optimal algorithms within a fixed cognitive architecture. Anderson's framework of rational analysis (1990, 1991) is also part of the inspiration of resource-rationality, although it provides only minimal treatment of the computational constraints under which organisms operate. Finally, the idea that human cognition is based on simple heuristics (Tversky and Kahneman, 1974; Gigerenzer & Selten, 2002) is compatible with resource-rationality – trading off errors with the cost of computation is exactly what good heuristics do. However, far from interpreting the cognitive biases resulting from such heuristics as evidence for human irrationality (Kahneman & Tversky, 1972; Nisbett & Borgida, 1975; Slovic et al., 1977), resource-rational analysis assumes that these biases are simply the consequence of rational use of limited computational resources.
Even though resource-rationality is a very recent approach, it has already shed some light on a wide range of cognitive abilities and provides a unifying framework for the study of intelligence in psychology, neuroscience, and artificial intelligence (Gershman et al., 2015). For example, we have recently applied the resource-rational framework to decision-making (Lieder et al., 2014), planning (Lieder et al., 2013), and strategy selection (Lieder et al., 2014; Lieder & Griffiths, 2015). In conclusion, resource-rationality appears to be a promising framework for normative and descriptive theories of human cognition.

Directions for future research

In a companion paper (Lieder, Griffiths, Huys, & Goodman, 2017), we empirically confirm our model's prediction that adjustment increases with error cost but decreases with time cost. We show that this is true regardless of whether the anchor was provided or self-generated. This confirms our simulations' assumption that participants in numerical estimation experiments with provided anchors use the same cognitive strategy as participants in numerical estimation experiments with self-generated anchors.

The question to what extent anchors are chosen resource-rationally is one interesting avenue for future research. The hypothesis that anchors are chosen rationally predicts that if everything else is equal, then people will choose a relevant anchor over an irrelevant one. This could be probed by providing people with two anchors rather than just one. Alternatively, one could manipulate the ease of self-generating a good anchor and test whether this ease decreases the bias towards an implausible provided anchor. To analyze such experiments, the models developed here could be used to infer which anchor people were using from the pattern of their responses.

An additional direction for future work is to extend the resource-rational anchoring-and-adjustment model. This could be done in several ways. First, the model could be extended by mechanisms for choosing and generating anchors. Second, the model could be extended by specifying how the mind approximates optimal resource allocation. A third extension of our model might incorporate directional information into the proposal distribution, as in the Hamiltonian Monte Carlo algorithm (Neal, 2011), to better capture the effects of direction uncertainty discovered by Simmons et al. (2010). A fourth extension might capture the sequential incorporation of relevant knowledge by iterative conditioning and explore its connection to the selective accessibility theory of the anchoring bias (Strack and Mussweiler, 1997). A fifth frontier is to make resource-rational anchoring-and-adjustment more adaptive: How can the proposal distribution and a mechanism for choosing the number of adjustments be learned from experience? Can better performance be achieved by adapting the proposal distribution from one adjustment to the next? Finally, our resource-rational anchoring-and-adjustment model only uses a single sample, but it can be generalized to using multiple samples. Each of these extensions might improve the performance of the estimation strategy, and it is an interesting question whether or not those improvements would bring its predictions closer to human behavior. Future studies might also evaluate additional alternatives to our model, such as an anchoring model with an adaptive plausibility threshold or algorithms that directly approximate the most probable estimate rather than a sample from the posterior distribution.
Most previous models of heuristics are formulated for the domain in which the corresponding bias was discovered. For instance, previous models of anchoring-and-adjustment were specific to numerical estimation (Epley & Gilovich, 2006; Simmons et al., 2010). Yet, everyday reasoning is not restricted to numerical estimation, and anchoring also occurs in very different domains such as social cognition (Epley et al., 2004). This highlights the challenge that models of cognition should be able to explain not only what people do in the laboratory but also their performance in the real world. Heuristics should therefore be able to operate on the complex, high-dimensional semantic representations people use in everyday reasoning. Resource-rational anchoring-and-adjustment lives up to this challenge, because Markov chain Monte Carlo methods are as applicable to semantic networks (Bourgin et al., 2014; Abbott et al., 2015) as they are to single numbers. In fact, resource-rational anchoring-and-adjustment is a very general mechanism that can operate over arbitrarily complex representations and might be deployed not only for numerical estimation but also in many other cognitive faculties such as memory retrieval, language understanding, social cognition, and creativity. For instance, resource-rational anchoring-and-adjustment may be able to explain the hindsight bias in memory recall (Pohl, 1998; Hardt & Pohl, 2003), primacy effects in sequential learning (Abbott & Griffiths, 2011), and the dynamics of memory retrieval (Abbott et al., 2015; Bourgin et al., 2014).

Conclusion

Resource-rational anchoring-and-adjustment provides a unifying, parsimonious, and principled explanation for a plethora of anchoring effects, including some that were previously assumed to be incompatible with anchoring-and-adjustment. Interestingly, we discovered this cognitive strategy purely by applying resource-rational analysis to the problem of estimation under uncertainty. It is remarkable that the resulting model is so similar to the anchoring-and-adjustment heuristic. Our simulations support the conclusion that people rationally adapt the number of adjustments to the environment's incentives for speed and accuracy. Resource-rational anchoring-and-adjustment thereby reconciles the anchoring bias with people's adaptive intelligence and Bayesian models of reasoning under uncertainty. Concretely, the anchoring bias may reflect the optimal speed-accuracy tradeoff when errors are benign, which is true of most, if not all, laboratory tasks. Yet, when accuracy is important and speed is not crucial, then people perform more adjustments and the anchoring bias decreases. Hence, while people's estimates are biased in the statistical sense of the word (E[X̂ | K] ≠ E[X | K]), our theory suggests that this is consistent with how they ought to reason. In this sense, the anchoring "bias" might not be a cognitive bias after all. Instead, the anchoring bias may be a window on resource-rational computation rather than a sign of human irrationality. Being biased can be resource-rational, and heuristics can be discovered by resource-rational analysis.

Acknowledgements  This research was supported by grant number ONR MURI N00014-13-1-0341 from the Office of Naval Research (TLG and NDG), grant number FA-9550-10-1-0232 from the Air Force Office of Scientific Research (TLG), and a John S. McDonnell Scholar Award (NDG).
Appendix A

Notation

X: numerical quantity to be estimated
x̂: people's estimates of quantity X
t: number of adjustments
x̂_t: people's estimates of quantity X after t adjustments
K or y: knowledge or information about X
P(X|K), P(X|y): posterior belief about X
P(R|y): distribution of people's responses to observation y
cost(x̂, x): error cost of reporting estimate x̂ when the true value is x
t*: resource-rational number of adjustments
γ: relative time cost per iteration
c_t, c_e: cost of time, cost of error
Q: approximate posterior belief
H: hypothesis space
μ_prop: average size of proposed adjustments
μ*_prop: resource-rational step-size of proposed adjustments
a: anchor

Appendix B

Generalization of the optimal speed-accuracy tradeoff from problems to environments

Together, a person's knowledge K about a quantity X, the cost function cost(x̂, x), and the correct value x define an estimation problem. However, in most environments people are faced with many different estimation problems rather than just a single one, and the true values are unknown. We therefore define a task environment E by the relative frequency P(X, K, cost | E) with which different estimation problems occur in it. Within each of the experiments that we are going to simulate, the utilities and the participant's knowledge are constant. Thus, those task environments are fully characterized by P(X, K | E) and cost(x̂, x).

The optimal speed-accuracy tradeoff weights the costs in different estimation problems according to their prevalence in the agent's environment. Formally, the agent should minimize the expected error cost in Eq. 2 with respect to the distribution of estimation problems P(X, K | E) in its environment E:

t* = arg max_t  E_{P(X,K|E)} [ E_{Q(x̂_t|K)} [ u(x, x̂_t) − γ · t ] ].   (6)

Thus, the number of adjustments is chosen to optimize the agent's average reward rate across the problem distribution of the task environment (cf. Lewis et al., 2014). If the task environment is an experiment with multiple questions, then the expected value is the average across those questions.

Appendix C

Estimating beliefs

For each simulated experiment we conducted one short online survey for each quantity X that its participants were asked to estimate. For each survey we recruited 30 participants on Amazon Mechanical Turk and asked the four questions that Speirs-Bridge et al. (2010) advocate for the elicitation of subjective confidence intervals:
"Realistically, what do you think is the lowest value that the ... could be?", "Realistically, what do you think is the highest value that the ... could be?", "Realistically, what is your best guess (i.e., most likely estimate) of the ... ?", and "How confident are you that your interval from the lowest to the highest value could contain the true value of the ... ? Please enter a number between 0 and 100%."
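A minimal sketch of how such responses can be turned into the normal beliefs used in our simulations, following the interval-to-standard-deviation conversion described in the next paragraph; the example responses are invented.

```python
import numpy as np
from scipy.stats import norm

def fit_belief(lows, highs, best_guesses, confidences):
    """Fit a normal belief N(mu_s, sigma_s) per participant and take medians across participants."""
    mus, sigmas = [], []
    for l, h, m, p in zip(lows, highs, best_guesses, confidences):
        z_width = norm.ppf((1 + p) / 2) - norm.ppf((1 - p) / 2)  # width of the central p-interval in z-units
        mus.append(m)
        sigmas.append((h - l) / z_width)
    return float(np.median(mus)), float(np.median(sigmas))

# Invented responses from three participants: lower bound, upper bound, best guess, confidence
mu, sigma = fit_belief(lows=[1000, 1500, 800],
                       highs=[2500, 3000, 2200],
                       best_guesses=[1600, 2000, 1400],
                       confidences=[0.80, 0.90, 0.70])
print(mu, sigma)
```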
These questions elicit a lower bound (l_s) and an upper bound (h_s) on the value of X, an estimate (m_s), and the subjective probability p_s that X lies between the lower and the upper bound (P(X ∈ [l_s, h_s] | K)), respectively, for each participant s. To estimate people's knowledge about each quantity from the reported confidence intervals, we modeled their belief P(X|K) by a normal distribution N(μ_s, σ_s). We used the empirical estimate m_s as μ_s, and set σ_s to (h_s − l_s) / (Φ⁻¹((1 + p_s)/2) − Φ⁻¹(1 − (p_s + 1)/2)), where Φ is the cumulative distribution function of the standard normal distribution. Finally, we took the medians of these estimates as the values of μ and σ used in our simulations. We applied this procedure separately for each quantity from each experiment that will be simulated below. The quantities and the estimated beliefs are summarized in the tables below.

The hypothesis space H for each quantity was assumed to contain all evenly spaced values (interval = σ/20) in the range spanned by the 0.5th and the 99.5th percentile of the belief distribution P(X|K) and the anchor(s) plus or minus one standard deviation. We simulated the adjustments people consider by samples from a Poisson distribution, that is P(δ = h_k − h_j) = Poisson(|k − j|; μ_prop), where h_k and h_j are the kth and the jth value in the hypothesis space H, and μ_prop is the expected step-size of the proposal distribution P(δ). This captures the intuition that people consider only a finite number of discrete hypotheses and that the adjustments a person will consider have a characteristic size that depends on the resolution of her hypothesis space.

The following tables summarize our estimates of people's beliefs about the quantities used in the simulated anchoring experiments. Since the estimated probabilistic beliefs are normal distributions, we summarize each of them by a mean μ and a standard deviation σ.

Table 2  Estimated beliefs: Insufficient adjustment from provided anchors
Study | Quantity | μ | σ | Correct
Tversky and Kahneman (1974) | African countries in UN (in %) | 22.5 | 11.12 | 28
Jacowitz and Kahneman (1995) | length of Mississippi River (in miles) | 1,525 | 770 | 2,320
Jacowitz and Kahneman (1995) | height of Mount Everest (in feet) | 27,500 | 3,902 | 29,029
Jacowitz and Kahneman (1995) | amount of meat eaten by average American (in pounds) | 238 | 210 | 220
Jacowitz and Kahneman (1995) | distance from San Francisco to New York (in miles) | 3,000 | 718 | 2,900
Jacowitz and Kahneman (1995) | height of tallest redwood tree (in feet) | 325 | 278 | 379.3
Jacowitz and Kahneman (1995) | number of United Nations members | 111 | 46 | 193
Jacowitz and Kahneman (1995) | number of female professors at the University of California, Berkeley | 83 | 251 | 805
Jacowitz and Kahneman (1995) | population of Chicago (in millions) | 5 | 3 | 2.715
Jacowitz and Kahneman (1995) | year telephone was invented | 1885 | 35 | 1876
Jacowitz and Kahneman (1995) | average number of babies born per day in the United States | 8,750 | 15,916 | 3,952,841
Jacowitz and Kahneman (1995) | maximum speed of house cat (in mph) | 17 | 10 | 29.8
Jacowitz and Kahneman (1995) | amount of gas used per month by average American (in gallons) | 55 | 84 | 35.2
Jacowitz and Kahneman (1995) | number of bars in Berkeley, CA | 43 | 55 | 101
Jacowitz and Kahneman (1995) | number of state colleges and universities in California | 57 | 112 | 248
Jacowitz and Kahneman (1995) | number of Lincoln's presidency | 6 | 2 | 16

Table 3  Estimated beliefs: Insufficient adjustment from self-generated anchors
Study by Epley & Gilovich (2006) | Quantity | Mean | SD | Correct
Study 1a | Washington's election year | 1786.5 | 7.69 | 1789
Study 1a | boiling point on Mount Everest in °F | 158.8 | 36.82 | 160
Study 1a | freezing point of vodka in °F | 3.7 | 17.05 | −20
Study 1a | lowest recorded human body temperature in °F | 86 | 14.83 | 55.4
Study 1a | highest recorded human body temperature in °F | 108 | 3.39 | 115.7
Study 1b | Washington's election year | 1786.5 | 7.69 | 1789
Study 1b | boiling point in Denver in °F | 201.3 | 9.93 | 203
Study 1b | number of US states in 1880 | 33.5 | 8.52 | 38
Study 1b | year 2nd European explorer reached West Indies | 1533.3 | 33.93 | 1501
Study 1b | freezing point of vodka in °F | 3.7 | 17.05 | −20

Table 4  Estimated beliefs: Effect of cognitive load
Study by Epley & Gilovich (2006) | Quantity | Mean | SD | Correct
Study 2b | Washington's election year | 1786.5 | 7.69 | 1789
Study 2b | second explorer | 1533.3 | 33.93 | 1501
Study 2c | Washington's election year | 1786.5 | 7.69 | 1789
Study 2c | second explorer | 1533.3 | 33.93 | 1501
Study 2c | highest body temperature | 108 | 3.39 | 115.7
Study 2c | boiling point on Mt. Everest | 158.8 | 36.82 | 160
Study 2c | lowest body temperature | 86 | 14.83 | 55.4
Study 2c | freezing point of vodka | 3.7 | 17.05 | −20
Study 2c | number of U.S. states in 1880 | 33.5 | 8.52 | 38

Table 5  Estimated beliefs: Effects of distance and knowledge
Study | Quantity | Mean | SD | Correct
Russo and Schoemaker (1989) | year of Attila's defeat | 953.5 | 398.42 | 451
Wilson et al. (1996); less knowledgeable group | number of countries in the world | 46.25 | 45.18 | 196
Wilson et al. (1996); knowledgeable group | number of countries in the world | 185 | 35.11 | 196

Table 6  Estimated beliefs: Anchor type moderates effect of accuracy motivation. Abbreviations: EG – Epley & Gilovich (2005), TK – Tversky & Kahneman (1974)
Study | Quantity | Mean | SD | Correct
EG, Study 1 | population of Chicago | 5,000,000 | 2,995,797.04 | 2,719,000
EG, Study 1 | height of tallest redwood tree | 200 | 76.58 | 379.3
EG, Study 1 | length of Mississippi river (in miles) | 1,875 | 594.88 | 2,320
EG, Study 1 | height of Mt. Everest (in feet) | 15,400 | 4,657.90 | 29,029
EG, Study 1 | Washington's election year | 1788 | 6.77 | 1789
EG, Study 1 | year the 2nd explorer after Columbus reached the West Indies | 1507.75 | 34.34 | 1501
EG, Study 1 | boiling point on Everest (in °F) | 150.25 | 36.82 | 160
EG, Study 1 | freezing point of vodka (in °F) | −1.25 | 14.73 | −20
EG, Study 2 | Washington's election year | 1788 | 6.77 | 1789
EG, Study 2 | 2nd explorer | 1507.75 | 34.34 | 1501
EG, Study 2 | boiling point on Mt. Everest (in °F) | 150.25 | 36.82 | 160
EG, Study 2 | number of US states in 1880 | 33.5 | 8.52 | 38
EG, Study 2 | freezing point of vodka (in °F) | −1.25 | 14.73 | −20
EG, Study 2 | population of Chicago | 3,000,000 | 1,257,981.51 | 2,719,000
EG, Study 2 | height of tallest redwood tree (in feet) | 200 | 76.58 | 379.3
EG, Study 2 | length of Mississippi river (in miles) | 1,875 | 594.88 | 2,320
EG, Study 2 | height of Mt. Everest | 15,400 | 4,657.90 | 29,029
EG, Study 2 | invention of telephone | 1870 | 54.48 | 1876
EG, Study 2 | babies born in US per day | 7,875 | 8,118.58 | 3,952,841
TK | African countries in UN | 22.5 | 11.12 | 28
Table 7  Estimated beliefs: Effects of direction uncertainty
Study (Simmons et al., 2010) | Quantity | Mean | SD | Correct
Study 2 | length of Mississippi river (in miles) | 1,625 | 752.3 | 2,320
Study 2 | average annual rainfall in Philadelphia (in inches) | 36.5 | 23.80 | 41
Study 2 | Polk's election year | 1857.5 | 45.42 | 1845
Study 2 | maximum speed of a house cat (miles per hour) | 16 | 9.40 | 30
Study 2 | avg. annual temperature in Phoenix (in °F) | 82.75 | 13.82 | 73
Study 2 | population of Chicago | 2,700,000 | 1,560,608 | 2,719,000
Study 2 | height of Mount Everest (in feet) | 23,750 | 7,519.70 | 29,032
Study 2 | avg. lifespan of a bullfrog (in years) | 5.75 | 6.68 | 16
Study 2 | number of countries in the world | 216.25 | 77.21 | 192
Study 2 | distance between San Francisco and Kansas City (in miles) | 1,425 | 547.86 | 1,800
Study 3b | year Seinfeld first aired | 1991 | 2.23 | 1989
Study 3b | average temperature in Boston in January | 26.5 | 14.86 | 36
Study 3b | year JFK began his term as U.S. president | 1961.25 | 2.26 | 1961
Study 3b | avg. temperature in Phoenix in Aug. | 96 | 10.21 | 105
Study 3b | year Back to the Future appeared in theaters | 1985 | 1.54 | 1985
Study 3b | avg. temperature in NY in Sept. | 70 | 10.51 | 74

References

Abbott, J.T., Austerweil, J.L., & Griffiths, T.L. (2015). Random walks on semantic networks can resemble optimal foraging. Psychological Review, 122(3), 558–569.
Abbott, J.T., & Griffiths, T.L. (2011). Exploring the influence of particle filter parameters on order effects in causal learning. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society. Austin, Texas: Cognitive Science Society.
Anderson, J.R. (1983). A spreading activation theory of memory. Journal of Verbal Learning and Verbal Behavior, 22(3), 261–295.
Anderson, J.R. (1990). The adaptive character of thought. Hillsdale, NJ: Psychology Press.
Anderson, J.R. (1991). Is human cognition adaptive? Behavioral and Brain Sciences, 14, 471–485.
Ariely, D., Loewenstein, G., & Prelec, D. (2003). Coherent arbitrariness: Stable demand curves without stable preferences. The Quarterly Journal of Economics, 118(1), 73–106.
Beach, L.R., & Mitchell, T.R. (1978). A contingency model for the selection of decision strategies. Academy of Management Review, 3(3), 439–449.
Bonawitz, E., Denison, S., Gopnik, A., & Griffiths, T.L. (2014). Win-stay, lose-sample: A simple sequential algorithm for approximating Bayesian inference. Cognitive Psychology, 74, 35–65.
Bonawitz, E., Denison, S., Griffiths, T.L., & Gopnik, A. (2014). Probabilistic models, learning algorithms, and response variability: Sampling in cognitive development. Trends in Cognitive Sciences, 18(10), 497–500.
Bourgin, D.D., Abbott, J.T., Griffiths, T.L., Smith, K.A., & Vul, E. (2014). Empirical evidence for Markov chain Monte Carlo in memory search. In Proceedings of the 36th Annual Meeting of the Cognitive Science Society (pp. 224–229).
Braine, M.D. (1978). On the relation between the natural logic of reasoning and standard logic. Psychological Review, 85(1), 1.
Brewer, N.T., & Chapman, G.B. (2002). The fragile basic anchoring effect. Journal of Behavioral Decision Making, 15, 65–77.
Buesing, L., Bill, J., Nessler, B., & Maass, W. (2011). Neural dynamics as sampling: A model for stochastic computation in recurrent networks of spiking neurons. PLoS Computational Biology, 7(11), e1002211.
Chapman, G.B., & Johnson, E.J. (1994). The limits of anchoring. Journal of Behavioral Decision Making, 7(4), 223–242.
Chapman, G.B., & Johnson, E.J. (2002). Incorporating the irrelevant: Anchors in judgments of belief and value.
In Gilovich, T., Griffin, D., & Kahneman, D. (Eds.), Heuristics and biases: The psychology of intuitive judgment. Cambridge, U.K.: Cambridge University Press.
Chater, N., & Oaksford, M. (2000). The rational analysis of mind and behavior. Synthese, 122(1), 93–131.
Collins, A.M., & Loftus, E.F. (1975). A spreading-activation theory of semantic processing. Psychological Review, 82(6), 407.
Denison, S., Bonawitz, E., Gopnik, A., & Griffiths, T. (2013). Rational variability in children's causal inferences: The sampling hypothesis. Cognition, 126(2), 285–300.
Diamond, A. (2013). Executive functions. Annual Review of Psychology, 64, 135.
Doucet, A., De Freitas, N., & Gordon, N. (2001). Sequential Monte Carlo methods in practice. New York: Springer.
Englich, B., Mussweiler, T., & Strack, F. (2006). Playing dice with criminal sentences: The influence of irrelevant anchors on experts' judicial decision making. Personality and Social Psychology Bulletin, 32(2), 188–200.
Epley, N. (2004). A tale of tuned decks? Anchoring as accessibility and anchoring as adjustment. In Koehler, D.J., & Harvey, N. (Eds.), The Blackwell Handbook of Judgment and Decision Making (pp. 240–256). Oxford, UK: Blackwell.
Epley, N., & Gilovich, T. (2004). Are adjustments insufficient? Personality and Social Psychology Bulletin, 30(4), 447–460.
Epley, N., & Gilovich, T. (2005). When effortful thinking influences judgmental anchoring: Differential effects of forewarning and incentives on self-generated and externally provided anchors. Journal of Behavioral Decision Making, 18(3), 199–212.
Epley, N., & Gilovich, T. (2006). The anchoring-and-adjustment heuristic. Psychological Science, 17(4), 311–318.
Epley, N., Keysar, B., Van Boven, L., & Gilovich, T. (2004). Perspective taking as egocentric anchoring and adjustment. Journal of Personality and Social Psychology, 87(3), 327–339.
Fiser, J., Berkes, P., Orbán, G., & Lengyel, M. (2010). Statistically optimal perception and learning: From behavior to neural representations. Trends in Cognitive Sciences, 14(3), 119–130.
Fodor, J. (1975). The language of thought. Cambridge, MA: Harvard University Press.
Frank, M., & Goodman, N. (2012). Predicting pragmatic reasoning in language games. Science, 336(6084), 998.
Friedman, M., & Savage, L.J. (1948). The utility analysis of choices involving risk. The Journal of Political Economy, 279–304.
Friston, K. (2009). The free-energy principle: A rough guide to the brain? Trends in Cognitive Sciences, 13(7), 293–301.
Friston, K., & Kiebel, S. (2009). Predictive coding under the free-energy principle. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1521), 1211–1221.
Galinsky, A.D., & Mussweiler, T. (2001). First offers as anchors: The role of perspective-taking and negotiator focus. Journal of Personality and Social Psychology, 81(4), 657.
Gershman, S.J., Horvitz, E.J., & Tenenbaum, J.B. (2015). Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science, 349(6245), 273–278.
Gershman, S.J., Vul, E., & Tenenbaum, J.B. (2012). Multistability and perceptual inference. Neural Computation, 24(1), 1–24.
Gigerenzer, G. (2008). Why heuristics work. Perspectives on Psychological Science, 3(1), 20–29.
Gigerenzer, G., & Goldstein, D.G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103(4), 650–669.
Gigerenzer, G., & Selten, R. (Eds.) (2002).
Bounded rationality: The adaptive toolbox . Cambridge, MA: The MIT Press. Gilks, W., Richardson, S., & Spiegelhalter, D. (1996). Markov chain Monte Carlo in practice . London: Chapman and Hall. Good, I.J. (1983). Good thinking: The foundations of probability and its applications . USA: Univ Of Minnesota Press. Griffiths, T.L., Lieder, F., & Goodman, N.D. (2015). Rational use of cognitive resources: Levels of analysis between the computational and the algorithmic. Topics in Cognitive Science, 7(2), 217–229. Griffiths, T.L., & Tenenbaum, J.B. (2006). Optimal predictions in everyday cognition. Psychological Science, 17(9), 767–773. Griffiths, T.L., & Tenenbaum, J.B. (2011). Predicting the future as Bayesian inference: People combine prior knowledge with observations when estimating duration and extent. Journal of Experimental Psychology: General, 140(4), 725–743. Habenschuss, S., Jonke, Z., & Maass, W. (2013). Stochastic com- putations in cortical microcircuit models. PLoS Computational Biology ,9(11), e1003311. Hardt, O., & Pohl, R. (2003). Hindsight bias as a function of anchor distance and anchor plausibility. Memory, 11(4-5), 379–394. Harman, G. (2013). Rationality. In LaFollette, H., Deigh, J., & Stroud, S. (Eds.) International Encyclopedia of Ethics. Hoboken: Black- well Publishing Ltd. Hastings, W.K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1), 97–109. Hedstr ¨om, P., & Stern, C. (2008). Rational choice and sociology. In Durlauf, S., & Blume, L. (Eds.) The New Palgrave Dictionary of Economics. 2nd edn. Basingstoke, U.K.: Palgrave Macmillan. Horvitz, E., Suermondt, H., & Cooper, G. (1989). Bounded condi- tioning: Flexible inference for decisions under scarce resourcesProceedings of the Fifth W orkshop on Uncertainty in Artifi-cial Intelligence (pp. 182–193). Mountain View: Association for Uncertainty in Artificial Intelligence. Jacowitz, K.E., & Kahneman, D. (1995). Measures of anchoring in estimation tasks. Personality and Social Psychology Bulletin , 21(11), 1161–1166. Kahneman, D., & Tversky, A. (1972). Subjective probability: A judg- ment of representativeness. Cognitive Psychology, 3(3), 430–454. Lewis, R.L., Howes, A., & Singh, S. (2014). Computational ratio- nality: Linking mechanism and behavior through bounded utilitymaximization. T opics in Cognitive Science, 6(2), 279–311. Lieder, F., Goodman, N.D., & Huys, Q.J.M. (2013). Controllability and resource-rational planning. In Pillow, J., Rust, N., Cohen, M.,& Latham, P. (Eds.) Cosyne Abstracts. Lieder, F., & Griffiths, T.L. (2015). When to use which heuristic: A rational solution to the strategy selection problem. In Noelle, D.C.,et al. (Eds.) Proceedings of the 37th Annual Conference of the Cognitive Science Society Austin . TX: Cognitive Science Society. Lieder, F., Griffiths, T.L., & Goodman, N.D. (2012). Burn-in, bias, and the rationality of anchoring. In Bartlett, P., Pereira, F.C.N., Bottou,L., Burges, C.J.C., & Weinberger, K.Q. (Eds.) Advances in Neural Information Processing Systems 26. Lieder, F., Griffiths, T.L., Huys, Q.J.M., & Goodman, N.D. (2017). Empirical evidence for resource-rational anchoring-and-adjustment. Lieder, F., Hsu, M., & Griffiths, T.L. (2014). The high availability of extreme events serves resource-rational decision-making. In Pro- ceedings of the 36th Annual Conference of the Cognitive ScienceSociety . Austin, TX: Cognitive Science Society. Lieder, F., Plunkett, D., Hamrick, J.B., Russell, S.J., Hay, N.J., & Griffiths, T.L. (2014). 
Algorithm selection by rational metareason-ing as a model of human strategy selection. Advances in Neural Information Processing Systems 27. Lohmann, S. (2008). Rational choice and political science. In Durlauf, S.,& Blume, L. (Eds.) The New Palgrave Dictionary of Eco- nomics . 2nd edn. Basingstoke, U.K.: Palgrave Macmillan. Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. W. H. Free-man. Paperback. McKenzie, C.R. (1994). The accuracy of intuitive judgment strate- gies: Covariation assessment and bayesian inference. Cognitive Psychology, 26(3), 209–239. Mengersen, K.L., & Tweedie, R.L. (1996). Rates of convergence of the Hastings and Metropolis algorithms. Annals of Statistics ,24(1), 101–121. Mill, J.S. (1882). A system of logic ratiocinative and inductive ,8 t h edn. New York: Harper and Brothers. Moreno-Bote, R., Knill, D.C., & Pouget, A. (2011). Bayesian sam- pling in visual perception. Proceedings of the National Academy of Sciences of the United States of America ,108(30), 12491– 12496. Mussweiler, T., & Strack, F. (1999). Hypothesis-consistent testing and semantic priming in the anchoring paradigm: A selective accessi-bility model. Journal of Experimental Social Psychology ,35(2), 136–164. Neal, R. (2011). Brooks, S., Gelman, A., Jones, G., & Meng, X.L. (Eds.) MCMC using Hamiltonian dynamics (V ol. 2, pp. 113– 162). FL, USA: CRC Press. Neely, J.H. (1977). Semantic priming and retrieval from lexical memory: Roles of inhibitionless spreading activation and limited-capacity attention. Journal of experimental psychology: General , 106(3), 226. Newell, A., Shaw, J.C., & Simon, H.A. (1958). Elements of a theory of human problem solving. Psychological Review ,65(3), 151–166. Psychon Bull Rev (2018) 25:322–349 349 Nisbett, R.E., & Borgida, E. (1975). Attribution and the psychology of prediction. Journal of Personality and Social Psychology, 32(5), 932–943. Nisbett, R.E., & Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Englewood Cliffs: Prentice- Hall. Northcraft, G.B., & Neale, M.A. (1987). Experts, amateurs, and real estate: An anchoring-and-adjustment perspective on propertypricing decisions. Organizational Behavior and Human Decision Processes, 39(1), 84–97. Oaksford, M., & Chater, N. (2007). Bayesian rationality: The prob- abilistic approach to human reasoning (Oxford cognitive scienceseries) , 1st edn. Oxford: Oxford University Press. Payne, J.W., Bettman, J.R., & Johnson, E.J. (1993). The adaptive decision maker: Cambridge University Press. Pohl, R.F. (1998). The effects of feedback source and plausibility of hindsight bias. European Journal of Cognitive Psychology, 10(2), 191–212. Russell, S.J. (1997). Rationality and intelligence. Artificial Intelli- gence ,94(1-2), 57–77. Russell, S.J., & Subramanian, D. (1995). Provably bounded-optimal agents. Journal of Articial Intelligence Research ,2, 575–609. Russell, S.J., & Wefald, E. (1991). Do the right thing: Studies in limited rationality . Cambridge, MA: The MIT Press. Russo, J.E., & Schoemaker, P.J.H. (1989). Decision traps: T en barriers to brilliant decision-making and how to overcome them :S i m o n and Schuster. Sanborn, A.N., Griffiths, T.L., & Navarro, D.J. (2010). Ratio- nal approximations to rational models: Alternative algorithmsfor category learning. Psychological Review ,117(4), 1144– 1167. Schwarz, N. (2014). 
Cognition and communication: Judgmental biases, research methods and the logic of conversation .N e wY o r k : Psychology Press. Shafir, E., & LeBoeuf, R.A. (2002). Rationality. Annual Review of Psychology, 53(1), 491–517. Shugan, S.M. (1980). The cost of thinking. Journal of consumer Research, 7(2), 99–111. Simmons, J.P., LeBoeuf, R.A., & Nelson, L.D. (2010). The effect of accuracy motivation on anchoring and adjustment: Do people adjust from provided anchors? Journal of Personality and Social Psychology, 99(6), 917–932. Simon, H.A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics ,69(1), 99–118. Simon, H.A. (1956). Rational choice and the structure of the environ- ment. Psychological Review ,63(2), 129. Simon, H.A. (1972). Theories of bounded rationality. Decision and Organization, 1,161–176. Simon, H.A. (1976). From substantive to procedural rationality. In Kastelein, T.J., Kuipers, S.K., Nijenhuis, W.A., & Wagenaar, G.R. (Eds.) 25 Y ears of Economic Theory (pp. 65–86). US: Springer.Simonson, I., & Drolet, A. (2004). Anchoring effects on consumers’ willingness-to-pay and willingness-to-accept. Journal of Con- sumer Research, 31(3), 681–690. Slovic, P., Fischhoff, B., & Lichtenstein, S. (1977). Cognitive pro- cesses and societal risk taking. In Jungermann, H., & De Zeeuw,G. (Eds.) Decision Making and Change in Human Affairs ,( V o l .1 6 pp. 7–36). Dordrecht, Netherlands: D. Reidel Publishing Com-pany. Sosis, C., & Bishop, M. (2014). Rationality. Wiley interdisciplinary reviews: Cognitive Science ,5, 27–37. Speirs-Bridge, A., Fidler, F., McBride, M., Flander, L., Cumming, G., & Burgman, M. (2010). Reducing overconfidence in the intervaljudgments of experts. Risk Analysis, 30(3), 512–523. Stewart, N., Chater, N., & Brown, G.D. (2006). Decision by sampling. Cognitive Psychology, 53(1), 1–26. Strack, F., & Mussweiler, T. (1997). Explaining the enigmatic anchor- ing effect: Mechanisms of selective accessibility. Journal of Personality and Social Psychology ,73(3), 437. Sunn ˚aker, M., Busetto, A.G., Numminen, E., Corander, J., Foll, M., & Dessimoz, C. (2013). Approximate bayesian computation. PLoS Computational Biology ,9(1), e1002803. Thorngate, W. (1980). Efficient decision heuristics. Behavioral Sci- ence, 25(3), 219–225. Turner, B.M., & Schley, D.R. (2016). The anchor integration model: A descriptive model of anchoring effects. Cognitive Psychology, 90, 1–47. Turner, B.M., & Sederberg, P.B. (2012). Approximate bayesian com- putation with differential evolution. Journal of Mathematical Psychology, 56(5), 375–385. Tversky, A. (1972). Elimination by aspects: A theory of choice. Psychological Review ,79(4), 281. Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. Van Rooij, I. (2008). The tractable cognition thesis. Cognitive Science, 32(6), 939–984. V on Neumann, J., & Morgenstern, O. (1944). The theory of games and economic behavior . Princeton: Princeton university press. Vul, E., Goodman, N.D., Griffiths, T.L., & Tenenbaum, J.B. (2014). One and done? Optimal decisions from very few samples. Cogni- tive Science, 38, 599–637. Wason, P.C. (1968). Reasoning about a rule. Quarterly Journal of Experimental Psychology, 20(3), 273–281. Wilson, T.D., Houston, C.E., Etling, K.M., & Brekke, N. (1996). A new look at anchoring effects: Basic anchoring and its antecedents.Journal of Experimental Psychology: General ,125(4), 387. Wright, W.F., & Anderson, U. (1989). 
Effects of situation familiarity and financial incentives on use of the anchoring and adjustmentheuristic for probability assessment. Organizational Behavior and Human Decision Processes ,44(1), 68–82. Zhang, Y .C., & Schwarz, N. (2013). The power of precise numbers: A conversational logic analysis. Journal of Experimental Social Psychology, 49(5), 944–946.
34fbaaa9-4279-4bbe-8fb9-0d6b440c21a8
trentmkelly/LessWrong-43k
LessWrong
Training Regime Day 12: Focusing Note: Focusing for skeptics is a far better description than I could give. I will attempt my own description anyway, for triangulation reasons. 2nd note: this technique was invented by Eugene Gendlin. The book Focusing goes into more detail, although I'm told the audiobook is better. Introduction We can (fake) categorize all knowledge you have into two types: tacit knowledge and explicit knowledge. Roughly speaking, explicit knowledge is knowledge where you know why you know it: math, typing, writing, reading, skateboarding, etc. Tacit knowledge is knowledge where you are less sure of why you know it: intuition, vague emotional stuff, inner sim predictions, etc. One extremely rough way to put it is that explicit knowledge is in the mind and tacit knowledge is in the body. This is false, but the useful sort of false. It's not like tacit knowledge isn't in your brain, but how tacit knowledge usually manifests itself is through sensations instead of thinking (think confusion, surprise, fear, excitement, etc.) To me, this makes it seem more like tacit knowledge is my body telling me things and explicit knowledge is my brain telling me things. Focusing is a technique designed to allow tacit knowledge and explicit knowledge to interface. The goal of focusing is to turn vague bodily sensations into pieces of information that you can actually think about. I find this technique extremely hard to do, but extremely valuable if done correctly. Focusing Serious thinking usually begins by clearing the mind. Focusing begins by clearing the body. When focusing, there should be no physical sensations to distract/intrude upon the feelings generated by tacit knowledge. In practice, this probably involves finding a comfortable seat or lying down. The first step in focusing is to locate the tacit-knowledge-feeling, also called a felt sense. This kind of feels like knowing you forgot something and trying to remember what it was. It also feels similar to looking for something with
ab1e3a9c-1ae0-476d-a20d-bc34b1fda477
trentmkelly/LessWrong-43k
LessWrong
Forecasting Newsletter: March 2022 Highlights * Comparing top forecasters and domain experts finds that past studies mainly were not comparing apples to apples and that the assertion that superforecasters were 30% better than intelligence analysts was unjustified. * Samotsvety's Nuclear Forecasts got picked up in the Spanish press and criticized by a pessimistic nuclear expert. * Forecasting Wiki launched * Polymarket is inflating its volume by incentivizing wash trading. (edit: apparently not the case, will issue a correction in the next issue) Index * The state of forecasting * Notable news * Platform by platform * Relevant research You can sign up for this newsletter on substack, or browse past newsletters here. The state of forecasting On account of getting a plug on one of Spain's most-read newspapers, this newsletter has reached 1,000 subscribers: You can find a market on when it will reach 2000 here. So I thought I would summarize the state of forecasting as I see it, striving to be informative to new readers. If you're already familiar with the key points, you might want to skip to the next section. The main problem is bullshit or lack of epistemic virtue and ability. The US misled itself into thinking that Iraq still had weapons of mass destruction or that everything would be okay in Afghanistan (a). People were not expecting covid to last so long. And everyone keeps expecting a better brand of politician to show up. What is the alternative? The alternative is to develop better models of the world and then use those better models to make better decisions. But how do we know which models of the world are good? How do we differentiate real understanding from fake understanding? It's tricky, but to a first approximation, we make our hypotheses about the world output predictions, and we reduce our confidence in the hypotheses that make worse predictions (a). The book Superforecasting is a neat introduction to the practices involved. E.T. Jaynes' Probability Theory: The Logic of
77f9b1aa-a729-494e-9cda-e68feaea7dfd
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Trust in Math Today's post, Trust in Math was originally published on 15 January 2008. A summary (taken from the LW wiki):   > When you find a seeming inconsistency in the rules of math, or logic, or probability theory, you might do well to consider that math has rightfully earned a bit more credibility than that. Check the proof. It is more likely that you have made a mistake in algebra, than that you have just discovered a fatal flaw in math itself. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Beautiful Probability, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
a347ecfb-3d92-4bce-95b8-297a19d5147a
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Using modal fixed points to formalize logical causality *This post is a simplified introduction to existing
ideas by Eliezer Yudkowsky, Wei Dai, Vladimir Nesov and myself. For those who already understand them, this post probably won't contain anything new. As always, I take no personal credit for the ideas, only for the specific mathematical model.* People usually think about decision-making in terms of causality, where an agent's action causes an outcome to happen. In this post I will outline a different idea of "causality", which can work even if regular causality isn't available. For example, in the world of natural numbers it doesn't make sense to say that the sentence 1+1=2 somehow "causes" the sentence 2+2=4 to be true. Yet we can devise a notion of "logical causality" that will work in such worlds, and allow us to make decisions which maximize utility in some sense. The rest of this post is devoted to making these claims precise. --- We start off with a simple formal model. Recall that [provability logic](http://plato.stanford.edu/entries/logic-provability/) is a modal logic where the operator □ is interpreted as provability in [Peano arithmetic](http://en.wikipedia.org/wiki/Peano_axioms#First-order_theory_of_arithmetic). Let's define a propositional formula with two variables F(a,o)=□(a→o). Since all occurrences of propositional variables are inside □, we can build fixed points A such that A↔F(A,o) for different values of o, and see what happens. 1. If o=⊤ (the true sentence), then A=⊤, which is easy to verify directly. 2. If o=⊥ (the false sentence), then A=□⊥, which can be verified by Löb's theorem. Now let's go a little deeper, and let o itself be defined in terms of a, using the fixed point to tie everything together: 3. In the simplest case where o=a, we see that A=⊤. 4. If o=¬a (the negation of a), we see that A=□⊥. You can verify that by hand, using the methods from the previous points. By now an interesting pattern is starting to emerge. Let's say O is the formula that results from the variable o after applying the fixed point. In all cases except (2) where O is false by fiat, we see that A tries and succeeds to make O true! Metaphorically, here's what A is doing: "If I can prove that my truth implies the truth of O, then I choose to be true, otherwise I choose to be false". That's the basic idea of "logical causality", in the simplest possible setting where it works. Note that all sentences above can be interpreted as sentences about natural numbers within Peano arithmetic, by using [Gödel numbering](http://en.wikipedia.org/wiki/G%C3%B6del_numbering) to spell out the provability operator. For example, in point (4) A would be a long sentence about the natural numbers, and O would be a slightly longer sentence that has A embedded inside it. In decision-making terms, A is the "action" and O is the "outcome" that logically depends on the action, but O's dependence on A is not explicit, because both are closed formulas without free variables. Instead, the dependence is due to A knowing the Gödel number of O. To skip ahead a bit, it's easy to go from a formalism about natural numbers to a formalism about computer programs, which know their own source code by [quining](http://en.wikipedia.org/wiki/Quine_(computing)). The examples above can be directly translated to programs that have access to a provability oracle for Peano arithmetic, or (with some caveats) to programs that successively search for proofs and check them manually. 
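To make the translation to programs concrete, here is a minimal Python sketch of the single-choice case (my own illustration, not part of the original post). The toy "oracle" is just a finite set of stipulated theorems, and the string encoding of sentences stands in for genuine Gödel numbering and quining:

```python
# A toy "provability oracle": a finite set of sentences we simply stipulate to
# be theorems. This is only a stand-in for bounded proof search in PA.
TOY_THEOREMS = {"agent() == True -> O"}

def provable(sentence: str) -> bool:
    return sentence in TOY_THEOREMS

def agent() -> bool:
    # Mirrors the fixed point A <-> Box(A -> O): output True exactly when
    # "my outputting True implies the outcome sentence O" is provable.
    # In the real construction the sentence refers to agent()'s own source
    # code via quining; here "agent()" is just a literal string.
    return provable("agent() == True -> O")

print(agent())  # True, because the toy oracle contains the implication
```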
In fact, that was the original motivation for this line of research, because "programs trying to influence a larger program that they are embedded in" might be a good description of our own world, at least from the perspective of a program :-) Going back to the math, perhaps the approach will break down if we have more possible choices than just true or false? Let's assume a more general setting, where we have formulas F1(a1,…,am,o1,…,on),…,Fm(a1,…,am,o1,…,on). We will denote truth assignments to a1,…,am as →a, and truth assignments to o1,…,on as →o. We will have a preference ordering on all possible →o, and will be interested in fixed points A={A1,…,Am} such that Ai↔Fi(A1,…,Am,o1,…,on) for all i. The formulas F1,…,Fm will encode the execution of this algorithm: 1. There are finitely many sentences of the form "if such-and such →a holds, then such-and-such →o holds". Find all such sentences that are provable. If no such sentences were found, choose any →a, e.g. all false. 2. From all pairs (→a,→o) found in the previous step, choose the →a whose →o is highest in our preference ordering. If there are multiple such →a, choose any one, e.g. the lexicographically smallest. 3. For each i, define Fi to be true iff the chosen →a assigns true to ai. To illustrate the definition of Fi in more detail, let's work through the case where m=n=1 and our preference ordering wants o1 to be true. On step 1 we have four possible sentences in lexicographic order: a1→o1, a1→¬o1, ¬a1→o1, and ¬a1→¬o1. Then F1 is true iff the chosen →a assigns true to a1, which can only happen on step 2. The corresponding →o can either assign true to o1, which happens iff the first sentence is provable, or assign false to o1, which happens iff the second sentence is provable but the first and third aren't. Simplifying, we obtain the definition for F1(a1,o1)=□(a1→o1)∨(□(a1→¬o1)∧¬□(¬a1→o1)). By now it should be clear how to use the same algorithm for m,n>1. Which choices of o1,…,on are amenable to this approach? Intuitively, it seems that "fair" deterministic problems are those where every choice of "actions" →a logically implies at least one "outcome" →o, and these implications are apparent to the agent (i.e. provable). But that's exactly the class of problems where our approach obviously gives the right answer! So it seems that having multiple possible choices doesn't cause problems. For example, let's take m=2,n=1,o1=a1∧¬a2, and assume that the preference ordering wants o1 to be true. Then it's easy to see that the chosen →a is either (true, false), which provably implies o1, or something else that also provably implies o1. But the latter is impossible, because choosing any other →a makes o1 false, so it can't be provable as long as the logic is sound. (Of course the logic doesn't prove its own soundness, but we're reasoning from the outside now.) Therefore the chosen →a is (true, false), and o1 is true. One counterintuitive feature of our approach is that some "actions" →a might logically imply multiple different "outcomes" →o after taking the fixed point, because if an action is in fact not taken, it logically implies anything at all. However, the approach is designed so that the existence of such "spurious" logical implications can never lead to a suboptimal outcome. The proof of that is left as an easy exercise.
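For completeness, here is a rough procedural sketch of the three-step algorithm described above, again my own illustration rather than anything from the post: `provable` and `preference_rank` are assumed helper functions, and the toy oracle at the bottom simply evaluates the definition of o1 instead of doing real proof search:

```python
from itertools import product

def choose_actions(m, n, provable, preference_rank):
    """Steps 1-3 from the text: find every provable sentence of the form
    "if action assignment a holds, then outcome assignment o holds",
    then commit to the action assignment whose outcome ranks highest."""
    all_a = list(product([True, False], repeat=m))
    all_o = list(product([True, False], repeat=n))

    # Step 1: collect all provable implications (a -> o).
    pairs = [(a, o) for a in all_a for o in all_o if provable(a, o)]
    if not pairs:
        return tuple([False] * m)  # nothing provable: default to all-false

    # Step 2: pick the a whose o is highest in the preference ordering;
    # Python's tuple ordering stands in for "lexicographically smallest".
    best = max(preference_rank(o) for _, o in pairs)
    return min(a for a, o in pairs if preference_rank(o) == best)

# Toy version of the worked example: m=2, n=1, o1 = a1 and not a2, and the
# preference ordering wants o1 to be true. The "oracle" just evaluates o1.
toy_provable = lambda a, o: o == (a[0] and not a[1],)
toy_rank = lambda o: 1 if o[0] else 0
print(choose_actions(2, 1, toy_provable, toy_rank))  # (True, False)
```

One thing the sketch deliberately ignores is the fixed point itself: in the real construction the oracle is asked about the very formulas F1,…,Fm being defined, so the procedure only makes sense after the modal fixed point ties everything together; the code above only shows the control flow.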
5ee31a45-9aa4-4b4c-a803-88be1b0ed1a2
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Preface to the Sequence on Factored Cognition Factored Cognition is primarily [studied](https://ought.org/research/factored-cognition) by [Ought](https://ought.org/), the same organization that was partially credited for implementing the [interactive prediction feature](https://www.lesswrong.com/posts/JLrnbThMyCYDBa6Gu/embedded-interactive-predictions-on-lesswrong-2). Ought is an organization with at least five members who have worked on the problem for several years. I am a single person who just finished a master's degree. The rationale for writing about the topic anyway was to have diversity of approaches: Ought is primarily doing empirical work, whereas I've studied the problem under the lens of math and epistemic rationality. As far as I know, there is virtually no overlap between what I've written and what Ought has published so far. Was it successful? Well, all I can say for sure is that writing the sequence has significantly changed my own views. This sequence has two 'prologue' posts, which make points relevant for but not restricted to Factored Cognition. I think of them as posts #-2 and #-1 (then, this post is #0, and the proper sequence starts at #1). These are * [A guide to Iterated Amplification and Debate](https://www.lesswrong.com/posts/vhfATmAoJcN8RqGg6/a-guide-to-iterated-amplification-and-debate), which explains what Factored Cognition is and the two schemes that use it. This post is there to make sure that no prerequisite knowledge is needed to read the sequence. You can skip this if you're already familiar with both schemes. * [Hiding Complexity](https://www.lesswrong.com/posts/6zbRy3aADCsRmFcgv/hiding-complexity), which is about characterizing what makes a part of a big problem a 'subproblem'. The remaining sequence is currently about 15000 words long, though this could change. The structure is roughly: * Define a mathematical model and see what we can do with that (posts #1-#2) * Tackle the human component: think seriously about how thinking works and whether solving hard problems with Factored Cognition looks feasible (posts #3-#5) * Spell out what I conclude from both parts (post #6) The current version of the sequence includes exercises. This is pretty experimental, so if they are too hard or too easy, it's probably my fault. I've still left them in because I generally think it makes sense to include 'think about this thing for a bit' moments. They look like this: **EXERCISE (5 SECONDS):** Compute 2+5. 7. Whenever there's a range, it means that the lower number is an upper bound for the exercise itself, and the remaining time is for rereading parts of this or previous posts. So 1-6 minutes means 'you shouldn't take more than 1 minute for the exercise itself, but you may first take about 5 minutes to reread parts of the post, or perhaps of previous posts'. The sequence also contains conjectures. Conjectures are claims that I think are true, important, and not trivial. There are only a few of them, and they should all be justified by the sequence up to that point. Conjectures look like this: ![](https://i.ibb.co/K2BjHPG/conjecture.png) I'll aim for publishing one post per week, which gives me time for final edits. It could slow down since I'm still working on the second half. Questions/criticism is welcome. Special thanks to [TurnTrout](https://www.lesswrong.com/users/turntrout) for providing valuable feedback on much of the sequence.
38543c2b-922c-4271-8c50-c415ba66446f
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
Stefano Albrecht – Learning to Distinguish Between Belief and Truth – CSRBAI 2016 welcome back our second speaker today is is Stefan Elbert and a postdoctoral fellow at the University of Texas at Austin and in the department of computer science and engineering and he works chef now works on the Arian practice of autonomous agents and multi aging systems especially talking about Bayesian modeling and belief modeling so he's here to talk to us today about practical and theoretical progress on model updating and multi-agent systems so join me and welcoming shut up thank you very much I should point out that this work is supported by a research fellowship from the omoide Foundation just to have that cover as well so this this toxin toilet learning to distinguish between belief and truth I realized that there has been a lot of work in the philosophy and psychology communities about what it means to be true and have beliefs I just thought this would make for a compelling title but I'm not going into the differences between the two things I'm hoping it gets approximately clear as I go through the talk okay so consists of three parts you can see the pointer can you know really so I'm just gonna use my hand answer so consists of three parts on the first part I will explain to you what I mean by the title and the problem that we consider here and the second part that will tell you about some research we did to adjust one specific aspect of this bigger picture problem and in the third and final part that will give you a you know a brief description of a possible future and a recent agenda okay introduction so this talk is about multi-agent systems such a system consists of a collective of decision making agents each agent can take certain actions such as a B and C and each agent is trying to accomplish certain goals such as reaching a goal state in the environment and maximizing a given utility function and what makes the system interesting and often very challenging is the that what one agent should do can depend critically and what the other agents are doing in the system agents often have to deal with a variety of uncertainties there may be uncertain about the state of the environment and they may also be uncertain about the actions that other agents took in the past and importantly there may be uncertain about the behavior of other agents and by behavior I mean the way in which an agent chooses actions based on the available information now all of these are important issues and there's been a lot of research but in this talk I will be focusing on uncertainty about behaviors okay so this basically two ways in which you can adjust that there's what could be called model free methods and that includes regret minimization policy gradients and model fear in for smelling and various other approaches these methods do not directly adjust the uncertainty about behavior what they do is they use the information that they observe during the interaction and try to manipulate their own behavior directly so you know there's no man in the middle the other approach could be described as model based methods and these methods basically try to learn models about the behavior of other agents based on the interaction history and there are many ways in which you can do this he could try and fit decision trees neural networks finite state machines and various other methods case based reasoning or graphical methods and then the idea is to use that model the resulting model to inform your own policy to plan your own 
actions I think there is some important features that the model-based approach provides and basically why you would want to try in model in agent s because the hope is that some of the observations that you encode in your model will generalize usefully to other unseen situations and what this gives you will allows you to plan into the future as the key part it allows you to perform guided exploration for example an icon you can also be used to do things such as risk control and a version and all these things are hard to do if you don't have a model to begin with this is useful but all of these methods like one key component currently which is that they provide no notion of model criticism so in other words they produce models but they don't check the validity of the resulting model during the interaction and so what can happen is that an agent using these methods can end up learning and using a model without ever realizing that it's incorrect that's an important a disadvantage so here's a simple example a very simple example let's say robot plays rock-paper-scissors against a human I assume standard rules and standard payoffs and let's say the human place of a very simple sequence of rock-paper-scissors rock-paper-scissors and so forth no the robot doesn't know how the human behaves but it can try to learn about the behavior and in this simple example the robot is fitting a model based on the assumption that the human is using a fixed distribution of reactions or fictitious play as it's known in the community so if you use the empirical frequency to fit the parameters in the model then in the limit you end up with a uniform distributions distribution which is incorrect so of course because really the human is using a deterministic sequence and in the planning if he were to plan with the correct model your expected payoff would be one because it can always choose the correct or the winning action and each time set whereas planning with the learnt model would be the expected payoff would be zero which is as bad as playing randomly and the the problem here is that the robot will never realize that the model is incorrect now there in the learning step nor in the planning step and this is a very simple example but it also holds for more complicated model fitting methods and this is a very simple example but it's not all that hard to scale is up to more realistic and complex examples just such as elderly support users interfaces or should say adaptive use interfaces and electronic markets and we had a really interesting talk already oh brother essentially any domain where you have a collective of agents that need to interact closely with each other and that have to develop some kind of understanding of each other's behaviors what can go wrong well in principle anything agents attempt to learn models that end up being incorrect incorrect mollusc can make incorrect predictions and incorrect predictions can lead to bad actions so if in 50 years from now I still happen to be alive and we are at a technological stage where a robot like this can carry me right now I would hope that if it attempts to learn about my behavior it has some reasoning capability to make sure that well and learns makes sense rather than you know dropping me on the ground and killing me in the process so it's about safe and robust there in that sense so what's the problem here well the fact is that the model is effectively a hypothesis or loosely speaking a belief of the agent and our policies can be false it's a key part of 
the definition well the problem is that the models are not treated as hypotheses and these methods and so the fact that they can be falsified is not even contemplated or considered in the first place you might say that you can address this problem or alleviate at least to some extent by maintaining multiple models and then you can do some model selection over your models such as keeping beliefs well that doesn't correspond to model criticism and and in this particular example of maintaining beliefs well the problem is that the beliefs quantify a relative likelihood but then but they never quantify an absolute truth and so even if all your belief point to one model that model can still almost ever totally be incorrect ok so here I've here's I think what we what we need to do or what agents repairability they should be able to construct hypotheses of behaviors but they should also be able to contemplate the truth of the airport says both of these things should go hand-in-hand and that's currently not the case what can this do for us well in the first place agents can decide to reject the model and then depending on how this reasoning is being carried out the agent may decide to change for example some of the assumptions underlying the model such as what information to fit into the learning process or press the structure of the model and the parameters instruction and all this done in the hopes of obtaining a better modeler ultimately or alternatively if none of these stuffs seem to help after rejecting a model well at the very least the agent should be able to resort to or decide to be be able to decide to resort to some safe policy that can work without a accurate accurate model and in the setting of game theory for example this could correspond to a maximum strategy that can guarantee a certain minimum level of performance so you know it's safe in that sense okay so that's the introduction that was a brief description of the problem and in the direction in which we're going with us and in the second part that we tell you about some research we did on a very specific aspect of this bigger picture problem and it's what we call behavioral hypothesis testing this was first published in the uii conference last year in the Netherlands and has been published more recently as part of a more substantial manuscript in the artificial intelligence community so if you decided to take a look and you have some comments please do get in touch with us ok let's make it more concrete so the model is in hypothesis because there's either true or false and because it's testable importantly this is the key definition of an hypothesis and then the net-net ask you a question that you can ask your surface well if I interact with some agent Jay and I have a behavior and hypothesis for the behavior of that particular agent and also given a interaction history thus that agent truly behave according to my hypothesis about the baby is something that you can ask yourself and that an agent should probably ask itself as well well here's an example to make it more concrete this table shows the first time steps of an interaction process between two agents playing the rock-paper-scissors game such as the reward in the human that we talked to earlier in the first column you see the time steps 1 2 all the way to 5 and the second column you get the actions taken by the agents in the respective time step so rock and paper for example and the first time sir and the third column corresponds to the hypothesis that agent one 
maintains about the behavior of agent two so for example and the first time stuff agent two chose to play paper and agent ones hypothesis would be that agent two plays paper with probability point one and this goes on in the same way and and the hypothesis can produce these probabilities based on the prior interaction history so you can think of the hypothesis as some kind of a black box program that can take us in for the interaction history and produces me know probabilities this isn't this isn't a belief it's just a a distribution over action outcomes for agent two so there can be 0 because this sums up to 1 it just means that the hypothesis does not believe that agent 2 would choose rock in this time step with any chance so this doesn't have to be so as I just said just think of this policy as some kind of a black box function it takes us in for the interaction history and it does some internal logic and reasoning and produces it ok so a question is how should agent 1 decide whether or not to reject the hypothesis about agent T and importantly all of the actions that you see in their in their column here of agent you are supported by the opposite because they're in the positive mass so he can't just outright reject the hypothesis now one way to adjust this for a natural way would be to compute some kind of score from the information given in the table and one example is the empirical frequency distribution and while this is a relatively simple and appealing there are also two big issues one is that the question of when you're scoring scheme is sufficient by sufficient I mean when do you know that it's sufficiently informative indiscriminate enough for you in order for you to make this decision if you take the empirical frequency and go back to this example well then you can tell them really that it's not sufficient because the distributions changed drastically in all the time steps and so you can't just take the average of your predictions and then the second issue is once you have a scoring scheme then you will have to find some threshold parameter beyond which you decide to reject or not reject and this seems to depend this will depend highly on the scoring scheme that you choose but it doesn't necessarily tell you how you should choose the parameter and furthermore once it once you change your scoring a scheme you will probably have to change the parameter as well because the semantics of the parameter changes change so what we propose instead is a method that is based on frequentist hypothesis testing or a p-value calculation which I'm assuming most people will be familiar with already this has two interesting advantages first of all it allows you to combine multiple scores into a common test statistic and this is done in the hopes of obtaining an overall more reliable and robust test statistic or scoring metric and the second is that in this particular method the decisions threshold parameter would correspond to the significance level which is you know you will all be aware of this already if we do p-value calculation and interesting fact about the Alpha parameter is that it's in a sense defines a uniform semantics of what rejection means and it's invariant of the specific scoring scene so if even if you decide decide to add or remove certain scores from the overall metric it doesn't change the semantics of the decision pressure there's also other advantages which is for example this can be implemented quite easily and as very efficient in practice but I'm not going to talk about 
Okay, before I go into the method description, here is some simple notation. This won't be heavy math, just the minimum needed to make sure you can follow the exposition. We say that an agent i has a behavior denoted by π_i, and π_i is basically just a black-box function that takes as input the interaction history (the history itself can contain all past states and joint actions) and produces a distribution over the actions available to that agent. In other words, a hypothesis is defined as a mapping from histories to distributions over actions.

Here's the key idea underlying the method. We control one agent, i, and we observe another agent, j. π_j is the true behavior of agent j, and π_j* is the hypothesized behavior, and the testing problem is: are these two things the same? We can't answer this question directly, because we don't know the true behavior; we only have the hypothesis. But here's what we can do instead. First of all, we know the past actions of the other agent, because we observe the history, so we have a vector of observed actions, and we know that this vector was generated by the true behavior. In addition, we can use our hypothesized behavior to generate a similar vector of the same structure, which corresponds to a vector of hypothesized actions, the actions we would expect if the agent truly used that behavior. Once we have these two vectors of actions, we can address the original question as a two-sample problem: were these two vectors generated by the same stochastic process, which in this case is our hypothesized behavior? That's the simple idea underlying the method.

So we want to compute p-values, and nothing in this part is new, but let me explain what it means anyway. This is the test statistic that we calculate given the two vectors of actions I just explained: the vector of observed actions and the vector of hypothesized actions generated from our hypothesis. The p-value itself is just the probability of obtaining a test statistic at least as extreme as the one we've just calculated, under the assumption that the hypothesis is true, and that assumption is what allows us to use the same distribution. What we then do is simple: once we have the p-value, we decide to reject the hypothesis if the p-value is below some manually chosen decision threshold. So far, none of this is new.

Here's how we define the test statistic, and this is where it gets more interesting. At the top level, the test statistic for two vectors of actions is defined as the mean of the τ-prefix test statistics. The τ here means that we just take the first τ time steps of the overall vector of actions; if you have, for example, ten actions and τ is five, you take the initial segment of five. Each prefix statistic itself is defined simply as a weighted average of scores calculated for these τ-prefix vectors: w_k refers to a weight, which could for example be uniform, and z_k refers to a specific score function. One score function could be based on the empirical frequency distribution, but I'll go through some examples on the next slide. The intuition is this: a score function intuitively corresponds to something like the likelihood that our hypothesis would produce the vector we give to it. It doesn't have to be a likelihood in the strict theoretical sense, but I find that intuition useful for understanding the method, and that's why I use it when I explain it.
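Read as code, the construction just described might be sketched roughly as follows. The exact combination used here (a weighted difference of likelihood-like scores, averaged over all prefix lengths) is an assumption based on how the statistic is described elsewhere in the talk, and the per-step hypothesized distributions are assumed to be precomputed.

```python
import numpy as np

def test_statistic(obs_actions, hyp_actions, hyp_dists, score_fns, weights=None):
    """Prefix-averaged test statistic, as sketched in the talk.

    obs_actions : actions actually taken by the other agent (indices into the action set)
    hyp_actions : actions sampled from our hypothesis along the same history
    hyp_dists   : hyp_dists[t] is the hypothesis' action distribution at time step t
    score_fns   : score functions z_k(actions, hyp_dists) -> value in [0, 1]
    """
    n = len(obs_actions)
    if weights is None:
        weights = np.full(len(score_fns), 1.0 / len(score_fns))  # uniform weights w_k

    prefix_stats = []
    for tau in range(1, n + 1):
        # each score is "likelihood-like": high if the prefix looks typical for the hypothesis
        diffs = [z(obs_actions[:tau], hyp_dists[:tau]) - z(hyp_actions[:tau], hyp_dists[:tau])
                 for z in score_fns]
        prefix_stats.append(float(np.dot(weights, diffs)))
    return float(np.mean(prefix_stats))  # close to 0 when the hypothesis is correct

def should_reject(p_value, alpha=0.05):
    """Alpha gives a uniform semantics of rejection, independent of which scores were combined."""
    return p_value < alpha
```

Keeping the score functions as interchangeable callables is what allows scores to be added or removed without changing the meaning of the alpha threshold, which is the property emphasized above.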
Okay, here's a very simple example of a score function. If this is our hypothesis and this is some input vector, the score is simply defined as the average ratio between the probability assigned to the observed action and the maximum probability assigned to any action at that time step: this is the probability assigned to the observed action at time step t, and this is the maximum probability that any action receives at that particular time step. It's very simple, and you can already see how it translates into the likelihood intuition: the closer the observed actions are to what the hypothesis considers most likely, the higher the score. But you can also see that it is highly imperfect. For example, if our hypothesis is a uniform distribution, then no matter what the true behavior is, this score will always be maximal. So it's imperfect in that sense; I'll get back to this in a second. Here's another example, which I'll skip over, but it's similarly simple. And here's a third one, also quite simple, which is based on the empirical frequency: here we take the average distribution of observed actions in the vector, here we take the average of all hypothesized distributions, and the whole expression just takes the overlap of these two things. Again, the intuition is that the greater the overlap between the two, the more likely it is, intuitively, that this hypothesis generated these observations. And going back to the first one, you can see that this score isn't necessarily fooled by a uniform hypothesis in the way the first one is. So the idea is that we can somehow combine these score functions to make an overall more informative test statistic; they're all imperfect, but they can be combined into something stronger.

Okay, this is just one last step and then we finish the method description. There is some interesting theory hiding here: you can show that the test statistic doesn't necessarily converge during the process, but the fluctuation is nonetheless normal in the limit, which tells us that we could just use a normal distribution as the test distribution. There is an important problem with that, though: a normal distribution would fail to account for the gradual skew produced by the initial learning process. In a multi-agent setting you will find, for example, that in the first, say, 100 time steps you get a highly skewed test statistic, so a t-distribution or normal distribution would fail to capture that accurately. We instead propose to use a skew-normal distribution, which includes the normal as a special case and has some additional parameters, and in this work we provide a method that can learn these parameters online during the interaction, based on a sampling procedure; that's not needed for the rest of this talk, though. So, in short, we learn the test distribution online as we interact.
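A rough rendering of the first and third score functions, together with a skew-normal fit for the test distribution, might look like the sketch below. The overlap measure, the two-sided tail, and the use of scipy's skewnorm in place of the online learning procedure mentioned in the talk are all illustrative assumptions rather than the speaker's actual implementation.

```python
import numpy as np
from scipy.stats import skewnorm

def z1(actions, hyp_dists):
    """Average ratio of the probability of the observed action to the maximum
    probability at that time step (note: fooled by uniform hypotheses)."""
    return float(np.mean([d[a] / d.max() for a, d in zip(actions, hyp_dists)]))

def z3(actions, hyp_dists, n_actions=3):
    """Overlap between the empirical frequency of observed actions and the
    average hypothesised distribution (overlap taken as the sum of minima)."""
    empirical = np.bincount(actions, minlength=n_actions) / len(actions)
    mean_hyp = np.mean(hyp_dists, axis=0)
    return float(np.minimum(empirical, mean_hyp).sum())

def p_value(observed_stat, null_stats):
    """Fit a skew-normal to test statistics obtained under the hypothesis
    (e.g. by resampling action vectors from it) and return the two-sided tail mass."""
    a, loc, scale = skewnorm.fit(null_stats)
    cdf = skewnorm.cdf(observed_stat, a, loc=loc, scale=scale)
    return float(2 * min(cdf, 1 - cdf))

# Example usage sketch: null_stats could be a list of test_statistic(...) values
# computed on vectors resampled from the hypothesis itself.
```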
Now, here are some interesting experimental results. In this first set of experiments we assume that our own behavior, the behavior of the other agent, and our hypothesis for that other agent's behavior are all just random behaviors, where a random behavior is defined as completely random distributions over actions at each time step. We tested all combinations of the score functions z1, z2, and z3, which are the ones I showed you earlier in the examples. Before you go ahead and interpret the data, let me explain what the axes mean: the numbers here correspond to the combinations of score functions that we tested, so here we used just one of them, and here we combined all three of them together. Blue corresponds to the case in which the hypothesis is correct, so the correct decision would be not to reject; red corresponds to the case where the hypothesis is incorrect, so the correct decision would be to reject; and obviously we want these to be as high as possible. You can see these are very good results: we have almost a hundred percent in most of these cases, or bordering on it. There is just one exception, which is the third score function, and there is a very simple explanation for that. The third score function is just no good for these kinds of behaviors. If you recall, it simply takes the overlap of the empirical average and the average of all hypothesized distributions, but for these particular behaviors both of them will just be uniform distributions in the limit, so we get no useful information from that score. This explains why we have a good rate here: in the limit any two behaviors will always look the same to us, so we never reject, which is good in the blue case. But it also explains this, because again in the limit any two behaviors look the same to us, and then we never reject when we should be rejecting.

Here is something cool about this method, though: we can solve that problem by combining the third score with either of the other two. That's what you see here. The lines correspond to the evolution of the p-values, and ideally they should converge to 0 in this case, because this corresponds to the red case, where we should be rejecting. You can see that this happens for all lines except for z3, which is the red one; z3 simply is not a good score metric here, but as soon as we add any of the others, convergence is restored, or healed, in other words, and the same holds for these ones too. Oh, I should have pointed this out earlier, sorry: these three plots correspond to increasing complexity of the interaction, with two actions, ten actions, and twenty actions, and you can see that the healing takes place at all complexity levels. This is exactly the effect we wanted to achieve: we didn't necessarily know ahead of time that this was going to be a bad score function, but we can combine it with other, possibly also imperfect, score functions and then we regain convergence. For this case you can see that it's almost as fast as the other ones, but for the higher-complexity cases it just turns out that more data is needed. The second score function, which I didn't explain but which is relatively simple as well, just isn't quite as different from the third one as the others are, so it simply needs more data. Basically, what this means is that for these cases here, if we just add more time to the interaction, these
gradually go up as well. This is a cut-off after 10,000 time steps, which I thought would be sufficient for the learning process to converge to something, but if you extend it to maybe 20,000 time steps, these go up gradually as well and you can see the convergence to zero.

Okay, here's another set of experiments, in this case for adaptive behaviors. We have different classes of adaptive behaviors, for example ones based on decision trees and ones based on networks. I'm not going into the details here, but they're basically just behaviors that adapt to certain portions of the past interaction; the tree, for example, to the past five time steps. Here you can see that we have similarly good performance in the positive case, but a mixed picture in the negative case, in particular for this bar, which corresponds to the decision trees. This shows you an important limitation of the testing method, which is that we only make decisions passively, based on data that is provided to us by some external process. In other words, this testing method does not probe specific aspects of the hypothesis. If this is the structure of the true behavior, a tree, and our hypothesis is a tree as well, of the same structure, and if it just so happens during the interaction that we only ever see this part of the tree while the differences are only in that other part of the tree, then there is no way for the method to make the right decision. In other words, something else is needed: testing is not the only thing we need.

Okay, that was the second part, and in the third and final part I'll tell you about what I think could be an exciting future research agenda. We saw that testing is an interesting and useful piece, but it's also just a piece of the bigger problem; as you just saw, it's not sufficient in some cases. I think what is needed eventually is something that could be described as hypothesis contemplation. It's a much more holistic way to look at things; it's a reasoning process that truly considers models as hypotheses, and not just as models that we run with nonetheless. This would consist of elements such as being able to generate hypotheses: you could do machine learning and fit models, you could do case-based reasoning, or you could make guesses based purely on the structure of the problem, such as equilibrium solutions. Then we would have an element that does selection, which could for example mean maintaining beliefs over a potentially promising subset of hypotheses. Then we have testing, which is what we just saw. Then we would need exploration and revision, which I'm going to talk about in just a minute, and there are probably even things that I haven't included in the figure here. What we really want is to enable an agent to practically contemplate truly any hypothesis, but importantly, this happens online. It is not an isolated process; it happens during the interaction, so eventually it will probably look more like this, embedded in the process.

Here are some more details. Exploration is simply the question of how and when we should explore aspects of the hypothesis. If you remember, in the last part of the second section I showed you that we didn't have good results for the rejection case, and the explanation was that we didn't actually probe the relevant aspect of our hypothesis: we simply didn't see data coming from that part of the tree. This can be solved with exploration schemes. In this case we used a simple randomized exploration scheme, for example epsilon-greedy exploration, where you have a certain probability of choosing a random action, and with that randomization the chance of visiting the untested part goes up quickly, so you quickly end up probing it as well.
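In code, that kind of simple randomized exploration could be as small as the following generic epsilon-greedy sketch (illustrative only, not the scheme actually used in the experiments):

```python
import numpy as np

def epsilon_greedy(greedy_action, n_actions, epsilon, rng=None):
    """With probability epsilon choose a uniformly random action, otherwise the greedy one.
    Occasional random choices steer the interaction into parts of the hypothesis
    that the test has not yet seen any data for."""
    rng = rng or np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(greedy_action)
```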
Of course, you can't just do that in practice, because any of your actions will have an effect on the other agent, and you don't want to just make randomized action choices; you will have to be smarter than that, and I think that is one of the very interesting key problems we will need to address.

Something else that is really interesting, and something I'm currently involved in as well, is revision. The question is how you should revise and improve aspects of your hypothesis: rather than generating new hypotheses from scratch, you already have something to begin with, and you want to revise certain bits and pieces based on what you see. An example would be if your hypothesis is that the other agent is a reinforcement learner; aspects could then be the learning rate, the exploration rate, the discount rate, and various other modules and parameters, which I'm assuming you're familiar with. The question then is, based on what you see in the interaction, how should you revise and improve these things, and when should you do it? This is a really interesting but challenging question.
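As a toy illustration of what such a revision step could look like (with an assumed one-parameter hypothesis family, not anything from the talk), one could search for the parameter value that best explains the actions observed so far:

```python
import numpy as np

def hypothesis_with_rate(epsilon, n_actions=3):
    """Hypothesised behaviour with one free parameter: mostly plays action 0,
    explores uniformly with probability epsilon (purely illustrative)."""
    def policy(history):
        probs = np.full(n_actions, epsilon / n_actions)
        probs[0] += 1.0 - epsilon
        return probs
    return policy

def log_likelihood(policy, observed_actions):
    """How well a candidate hypothesis explains the actions we actually observed."""
    history, ll = [], 0.0
    for a in observed_actions:
        ll += np.log(policy(history)[a] + 1e-12)
        history.append(a)
    return ll

def revise_rate(observed_actions, grid=np.linspace(0.0, 1.0, 21)):
    """Revise the exploration-rate parameter by picking the value that best fits the data."""
    scores = [log_likelihood(hypothesis_with_rate(e), observed_actions) for e in grid]
    return float(grid[int(np.argmax(scores))])

rng = np.random.default_rng(0)
true_policy = hypothesis_with_rate(0.3)
observed = [int(rng.choice(3, p=true_policy([]))) for _ in range(500)]
print("revised exploration rate:", revise_rate(observed))  # should land near 0.3
```

A grid search is the crudest possible revision rule; the point is only that the hypothesis is treated as something to be adjusted during the interaction rather than regenerated from scratch.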
Okay, so in summary: individual pieces of this whole puzzle exist. We have people in the community, and people in this room, who have worked on bits and pieces of it, but what is missing is a complete integration of these individual pieces into a more comprehensive reasoning process. All of these things need to be put together eventually. I think this is an important problem because of the things I just pointed out in the presentation, and it's feasible and timely precisely because we've worked on these pieces for a while now. I also think this is relevant not just for multi-agent systems but for the broader AI community: we all use models in various ways, but we don't do autonomous model criticism, which is something you don't see much in the community. If you're familiar with statistics, you will probably be familiar with model criticism; statisticians have worked on it for many decades. But there is a difference between that and AI: in AI we will need to carry this out autonomously, and that, I think, is where the leap has to be made.

So that is the goal, but there are also challenges. Since we're combining various elements, there will be issues relating to complexity (we want this to be efficient) and issues relating to soundness and completeness; all of this will need to be worked out. I think something more interesting, though, is that sooner or later, as we pursue this goal, we will have to reconsider whether correctness is the right criterion, and maybe we should contemplate something else, something that could be described as usefulness. Usefulness could include correctness as one aspect, but it can be traded off against efficiency and model complexity, and the idea is that you can then do things such as estimating the value of information and go beyond the pure correctness criterion. There is a famous quote in the statistics community, by Box, that goes something along the lines of "all models are incorrect, but some models are useful", and maybe our agents can learn to reason along those lines as well.

And the last part, which I think could be a really exciting and interesting step forward, is if agents can learn to carry out this hypothesis contemplation autonomously, rather than us crafting these individual pieces and then putting them together into an integrated solution. The latter is a more mechanistic approach, which is what you see a lot in AI, but I think it would be really interesting if we could let the agent learn these things autonomously; I think that could be a tremendous milestone in the history of AI development. Okay, that sums up the presentation.

[Question from the audience, partly inaudible, about why this is specific to multi-agent systems and about the role of the statistical tests.] So, first of all, this is not just about statistical tests, as I was trying to explain; the tests would just be a small part of the bigger picture. And the reason why I'm relating this to multi-agent systems is that it is simply not happening in the multi-agent systems community. But as I also pointed out in the presentation, this is relevant to many areas: if you do model-based reinforcement learning, this translates into that as well; if you do game-theoretic analysis based on behavioral models, you have models there too; and if this is to be carried out autonomously by an intelligent machine, then the machine should have some model criticism embedded in its reasoning process. So basically the answer is that it is relevant to many things.

[Question: can this be phrased as a form of anomaly detection, when you're looking at something outside of your class of models for how the other agents are behaving?] First of all, I should say that I'm not all that familiar with the latest work in that community, but I'm familiar with some papers, and it definitely relates to this as well. From what I have seen, it is mostly based on having pre-specified models and trying to detect deviations from them, which in that sense relates to this too, but unfortunately I can't go into detail because I don't know the latest developments there. If I can come back to your comment, though, one difference, which I also pointed out in the presentation, is that if you do model criticism in a data-fitting scenario, you don't have this online component, whereas in the multi-agent systems setting you have additional complexity coming from the fact that this is happening as you interact, as you collect the data. So you can't just reason about correctness, or do the model criticism, in isolation from the data-generation process; your reasoning will be part of the data-generation process as well. [Audience comment comparing this to reinforcement learning, where agents also interact with the environment.] That's right.

[Question: you have many different score functions, so how do you end up with a single value?] Yes, the different score functions are all part of the overall test statistic, and this part of the method is relatively simple: the statistic is defined as the average of a weighted difference of the scores. These are the different score functions, you calculate the difference, and if you interpret the scores as likelihoods they should both be high, so the difference
zeroes out; zero is what you need to get a high p-value, and that's where the intuition comes from.

[Follow-up question about using the scores separately.] You mean you would basically reduce this definition to one score function, or compute separate values? I suppose you could do that, but I'm not sure it would give you an edge over this; I think you could do it, but I'm not sure it gives you more power. The other thing is that even if it does give you more power, there are complexity issues as well, although if you do it simultaneously and independently, maybe the increased complexity doesn't matter all that much. [Question: is a weighted sum the right metric for this, or should it be some other Lp norm or some other statistical distance?] Yeah, I don't know, to be honest.

[Question: how would this handle your rock-paper-scissors example, where the true behavior was deterministic? It seems like there is a problem with the i.i.d. assumption behind the natural kinds of score functions.] What do you mean by the i.i.d. assumption... I see. So, first of all, you would need to specify which model you do the test on. If you use the limit model here, say after maybe a thousand interactions, and do the test on that model, you would be testing against a uniform model. The first score function wouldn't help you, because over a uniform hypothesis it always gives you the highest score. The third score function, which is the empirical average, I think wouldn't capture it either, because with that sequence the empirical average is uniform in the limit as well. So my guess would be that the second one captures it, but I would have to look more into that. Regarding the second of the score functions, I'm not entirely sure, because I haven't actually carried this out in my mind, but from what I can tell the first and the third score functions wouldn't be able to solve it, because in the limit the sequence is uniform and the limit model is uniform as well, so I'm guessing the second one would be able to provide information; I'm not sure, to be honest.

[Question: can you encode correlations?] Yes, you can definitely encode correlations in your score functions as well, but these simple ones don't actually do that. One more note I should make here: this is a statistical test, so if you end up with a deterministic distribution, my personal recommendation would be to have some prior layer that tries to identify the deterministic relation, and then you don't actually have to go through all of this relatively complicated testing, because then you only have ones and zeros.

[Comment from the audience: but you have a space of hypotheses, and maybe the truth is not even in that space.] This testing method does not reason about a space of hypotheses; it only delivers a yes or no for the one hypothesis you give it. Reasoning about a space of hypotheses would be some kind of model selection, which is not what is happening here, and model selection by itself does not tell you whether any of the models in the space is actually the correct one. [Exchange with the audience, partly inaudible.] The null hypothesis, for any of these tests, is that the hypothesized behavior is correct, and that stays the same.
That is also why you only have to specify the distribution over test values: you have one common distribution over test statistics, basically. So think of a Gaussian in the limit; you calculate your test value, locate it in that distribution, and just take the probability mass outside of it. [Comment from the audience about whether this is the standard way.] Well, I would say that in this particular, I guess you could say standard, frequentist way of calculating p-values, this is actually the normal thing to do. In the Neyman-Pearson variant you have two hypotheses, H0 and H1, and then you have what you just described.

[Question: have you made any attempt to map this into a Bayesian framework?] You mean such as maintaining beliefs over a set of alternative models? You can certainly combine these two things. First of all, if you reason over a hypothesis space, finite or uncountable, then, as was just pointed out and as I pointed out, this does not necessarily give you the true answer if the true behavior is outside the space. But you can combine the two by maintaining your Bayesian belief and doing the testing simultaneously, and the testing may be able to tell you that the whole space is no good at all, if it gives you a negative test result in the end. Then you can either change your hypothesis space, start learning based on different assumptions, or just scrap all of it and fall back on some default policy, for example. On the other hand, if you mean adding some Bayesian reasoning into those score functions, you can do that as well, even though I haven't done it here, because these ones are fairly simple and don't do any Bayesian reasoning. [Comment: it would be nice to have something where you start with some prior and then update your beliefs as you see more.] That was actually the idea behind this originally, but, as I pointed out, this testing method is ultimately just one small piece of the overall framework that we want to achieve. Okay.
20fda999-16ff-447c-b2f6-a9aaaaa9a8c9
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Any work on honeypots (to detect treacherous turn attempts)? I know the idea of making a "honeypot" to detect when an AI system would attempt a treacherous turn if given the opportunity has been discussed (e.g. IIRC, in Superintelligence).  But is there anyone actually working on this?  Or any work that's been published?
a551ce04-a70e-44d6-ab74-e31e2caf7163
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
[Crosspost] Why Uncontrollable AI Looks More Likely Than Ever This is a crosspost from [Time Magazine](https://time.com/6258483/uncontrollable-ai-agi-risks/), which also appeared in full at a number of [other](https://www.msn.com/en-us/news/technology/why-uncontrollable-ai-looks-more-likely-than-ever/ar-AA180fW9) unpaid news websites. BY [**OTTO BARTEN**](https://time.com/author/otto-barten/) AND [**ROMAN YAMPOLSKIY**](https://time.com/author/roman-yampolskiy/) *Barten is director of the* [*Existential Risk Observatory*](https://www.existentialriskobservatory.org/)*, an Amsterdam-based nonprofit.* *Yampolskiy is a computer scientist at the University of Louisville, known for his* [*work*](https://scholar.google.com/citations?user=0_Rq68cAAAAJ&hl=en) *on AI Safety.* “The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control,” mathematician and science fiction writer I.J. Good [wrote](https://asset-pdf.scinapse.io/prod/1586718744/1586718744.pdf) over 60 years ago. These prophetic words are now more relevant than ever, with artificial intelligence (AI) gaining capabilities at breakneck speed. In the last weeks, many jaws dropped as they witnessed transformation of AI from a handy but decidedly unscary recommender algorithm, to something that at times seemed to act worryingly [humanlike](https://arxiv.org/abs/2302.02083). Some reporters were so shocked that they [reported](https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html?smid=tw-nytimes&smtyp=cur) their conversation histories with large language model Bing Chat verbatim. And with good reason: few expected that what we thought were glorified autocomplete programs would suddenly [threaten](https://time.com/6256529/bing-openai-chatgpt-danger-alignment/) their users, [refuse](https://twitter.com/thedenoff/status/1625699139852935168?t=60g5bONoHRFHRMswuespQA&s=08) to carry out orders they found insulting, break security in an attempt to [save a child’s life](https://www.reddit.com/r/bing/comments/1150po5/sydney_tries_to_get_past_its_own_filter_using_the/), or [declare their love](https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html?smid=tw-nytimes&smtyp=cur) to us. Yet this all happened. It can already be overwhelming to think about the immediate consequences of these new models. How are we going to grade papers if any student can use AI? What are the effects of these models on our daily work? Any knowledge worker, who may have thought they would not be affected by automation in the foreseeable future, suddenly has cause for concern. Beyond these direct consequences of currently existing models, however, awaits the more fundamental question of AI that has been on the table since the field’s inception: what if we succeed? That is, what if AI researchers manage to make Artificial General Intelligence (AGI), or an AI that can perform any cognitive task at human level? Surprisingly few academics have seriously engaged with this question, despite working day and night to get to this point. It is obvious, though, that the consequences will be far-reaching, much beyond the consequences of even today’s best large language models. If remote work, for example, could be done just as well by an AGI, employers may be able to simply spin up a few new digital employees to perform any task. 
The job prospects, economic value, self-worth, and political power of anyone not owning the machines might therefore completely dwindle . Those who do own this technology could achieve nearly anything in very short periods of time. That might mean skyrocketing economic growth, but also a rise in inequality, while meritocracy would become obsolete. But a true AGI could not only transform the world, it could also transform itself. Since AI research is one of the tasks an AGI could do better than us, it should be expected to be able to improve the state of AI. This might set off a positive feedback loop with ever better AIs creating ever better AIs, with no known theoretical limits. This would perhaps be positive rather than alarming, had it not been that this technology has the potential to become uncontrollable. Once an AI has a certain goal and self-improves, there is no known method to adjust this goal. An AI should in fact be expected to resist any such attempt, since goal modification would endanger carrying out its current one. Also, [instrumental convergence](https://dl.acm.org/doi/10.5555/1566174.1566226) predicts that AI, whatever its goals are, might start off by self-improving and acquiring more resources once it is sufficiently capable of doing so, since this should help it achieve whatever further goal it might have. In such a scenario, AI would become capable enough to influence the physical world, while still being misaligned. For example, AI could use natural language to influence people, possibly using social networks. It could use its intelligence to acquire economic resources. Or AI could use hardware, for example by hacking into existing systems. Another example might be an AI that is asked to create a universal vaccine for a virus like COVID-19. That AI could understand that the virus mutates in humans, and conclude that having fewer humans will limit mutations and make its job easier. The vaccine it develops might therefore contain a feature to increase infertility or even increase mortality. It is therefore no surprise that according to the most recent [AI Impacts Survey](https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/), nearly half of 731 leading AI researchers think there is at least a 10% chance that human-level AI would lead to an “extremely negative outcome,” or [existential risk](https://80000hours.org/problem-profiles/artificial-intelligence). Some of these researchers have therefore branched out into the novel subfield of AI Safety. They are working on controlling future AI, or robustly aligning it to our values. The ultimate goal of solving this [alignment problem](https://www.amazon.com/Alignment-Problem-Machine-Learning-Values/dp/0393635821) is to make sure that even a hypothetical self-improving AI would, under all circumstances, act in our interest. However, [research](https://journals.riverpublishers.com/index.php/JCSANDM/article/view/16219/13165) shows that there is a fundamental trade-off between an AI’s capability and its controllability, casting doubts over how feasible this approach is. Additionally, current AI models have been shown to behave [differently](https://proceedings.mlr.press/v162/langosco22a/langosco22a.pdf) in practice from what was intended during training. Even if future AI could be aligned with human values from a technical point of view, it remains an open question *whose* values it would be aligned with. The values of the tech industry, perhaps? Big Tech companies don’t have the best track record in this area. 
Facebook’s algorithms, optimizing for revenue rather than societal value, have been linked to ethnic violence such as the [Rohingya genocide](https://www.theguardian.com/technology/2021/dec/06/rohingya-sue-facebook-myanmar-genocide-us-uk-legal-action-social-media-violence). Google fired [Timnit Gebru](https://time.com/6132399/timnit-gebru-ai-google/), an AI ethics researcher, after she criticized some of the company’s most lucrative work. Elon Musk [fired](https://www.wired.com/story/twitter-ethical-ai-team/) the entire ‘Ethical AI’ team at Twitter at once. What can be done to reduce misalignment risks of AGI? A sensible place to start would be for AI tech companies to increase the number of researchers investigating the topic beyond the roughly 100 people available today. Ways to make the technology safe, or to reliably and internationally regulate it, should both be looked into thoroughly and urgently by AI safety researchers, AI governance scholars, and other experts. As for the rest of us, reading up on the topic, starting with books such as [*Human Compatible*](https://www.amazon.nl/Human-Compatible-AI-Problem-Control/dp/0241335205) by Stuart Russell and [*Superintelligence*](https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742) by Nick Bostrom, is something everyone, especially those in a position of responsibility, should find time for. Meanwhile, AI researchers and entrepreneurs should at least keep the public informed about the risks of AGI. Because with current large language models acting like they do, the first “ultraintelligent machine”, as I.J. Good called it, may not be as far off as you think.
447934e1-2ae7-4065-8ac5-fec6dffe4d30
trentmkelly/LessWrong-43k
LessWrong
Argument against 20% GDP growth from AI within 10 years [Linkpost] Mohammed Bavarian, a research scientist at OpenAI, tweeted this thread arguing that he could see "the overall US GDP growth rising from recent avg 2-3% to 20+% in 10 years." Feel free to check out those arguments, though they'll probably be familiar to you: GPT-3, GitHub Copilot, and image synthesis will drive unprecedented improvements.  Cameron Fen, an economics PhD student at the University of Michigan, responded with this thread disagreeing with Bavarian's argument. I wanted to share some of the arguments that I found novel and interesting. Argument #1: The impacts of previous transformative technologies > There have been 3 industrial revolutions in history, mechanization, electricity and mass production, and IT and the internet. China went through all 3 at the same time and was barely able to go above 7% growth annually. > > The newest industrialization will be big, but will it be bigger than moving 95% of the population working in agriculture to working factories, not to mention all 3 combined? > > In particular, the growth of the first industrial revolution accelerated growth from 1.5% to 3% a year in the UK (Source). Growth from electrification was 1.5% a year on average (source) and growth during the information industrialization was 3.5% a year (source).  > > Given these growth rates, it seems unlikely that a single industrial revolution can move the needle to such an extent that US GDP growth accelerates from 2% to 7%. Argument #2: The size of the tech industry > Perhaps you can argue that the industrial revolution on the horizon is going to be 3x bigger than any industrial revolution in the past. Let’s see what that would imply: > > 7% growth implies 5% growth over our current trend rate of about 2%. This comes out to be about 1.15 trillion GDP additional growth this year and increasing as the base gets larger. > > According to this article, Facebook contributes 100 billion to US economic activity. While much of this is uncounted because GDP me
3b37acb6-f938-4723-a4d5-e432592cc261
trentmkelly/LessWrong-43k
LessWrong
Are Functional languages the future of programming? Because I have been learning about Type Theory, I have become much more aware of and interested in Functional Programming. If you are unfamiliar with functional programming, Real World Haskell describes it like this:

> In Haskell [and other functional languages], we de-emphasise code that modifies data. Instead, we focus on functions that take immutable values as input and produce new values as output. Given the same inputs, these functions always return the same results. This is a core idea behind functional programming.
>
> Along with not modifying data, our Haskell functions usually don't talk to the external world; we call these functions pure. We make a strong distinction between pure code and the parts of our programs that read or write files, communicate over network connections, or make robot arms move. This makes it easier to organize, reason about, and test our programs.

Because of this, functional languages have a number of interesting differences from traditional programming. In functional programming:

* Programming is a lot more like math. Programs are often elegant and terse.
* It is much easier to reason about programs, including proving things about them (termination, lack of errors, etc.). This means compilers have much more room to automatically optimize a program, automatically parallelizing code, merging repeated operations, etc.
* Static typing helps (and requires) you to find and correct a large fraction of trivial bugs without running the program.
* Pure code means doing things with side effects (like I/O) requires significantly more thought to start to understand, but also makes side effects more explicit.
* Program evaluation is defined much more directly on the syntax of the language.

After having learned and experimented a bit with functional languages, it seems like they are the future of programming languages. It is my impression that functional languages are more popular among LWers than among programmers i
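To make the pure-function idea from the quoted passage concrete, here is a minimal sketch, in Python rather than Haskell, purely as an illustration of the contrast between pure and impure code:

```python
from typing import Tuple

def add_item_pure(cart: Tuple[str, ...], item: str) -> Tuple[str, ...]:
    """Pure: takes an immutable value and returns a new value; same input, same output."""
    return cart + (item,)

def add_item_impure(cart: list, item: str) -> None:
    """Impure: mutates its argument in place, a side effect the caller has to know about."""
    cart.append(item)

basket = ("apples",)
print(add_item_pure(basket, "tea"))  # ('apples', 'tea') -- a new value
print(basket)                        # ('apples',)       -- the original is untouched
```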
4c6d3b1f-3048-479e-8199-4428dbc8828e
trentmkelly/LessWrong-43k
LessWrong
What I’ll be doing at MIRI Note: This is a personal post describing my own plans, not a post with actual research content. Having finished my internship working with Paul Christiano and others at OpenAI, I’ll be moving to doing research at MIRI. I’ve decided to do research at MIRI because I believe MIRI will be the easiest, most convenient place for me to continue doing research in the near future. That being said, there are a couple of particular aspects of what I’ll be doing at MIRI that I think are worth being explicit about. First, and most importantly, this decision does not represent any substantive change in my beliefs regarding AI safety. In particular, my research continues to be focused around solving inner alignment for amplification. My post on relaxed adversarial training continues to represent a fairly up-to-date form of what I think needs to be done along these lines. Second, my research will remain public by default. I have discussed with MIRI their decision to make their research non-disclosed-by-default and we agreed that my research agenda is a reasonable exception. I strongly believe in the importance of collaborating with both the AI safety and machine learning communities and thus believe in the need for sharing research. Of course, I also fully believe in the importance of carefully reviewing possible harmful effects from publishing before disclosing results—and will continue to do so with all of my research—though I will attempt to publish anything I don’t believe to pose a meaningful risk. Third—and this should go without saying—I fully anticipate continuing to collaborate with other researchers at other institutions such as OpenAI, Ought, CHAI, DeepMind, FHI, etc. The task of making AGI safe is a huge endeavor that I fully believe will require the joint work of an entire field. If you are interested in working with me on anything (regarding inner alignment or anything else) please don’t hesitate to send me an email at evanjhub@gmail.com.
5f6df8c5-0489-45c1-814e-5028b39d310b
trentmkelly/LessWrong-43k
LessWrong
While we're on the subject of meta-ethics... The best theory of morality I've ever found is the one invented by Alonzo Fyfe, which he chose to call "desire utilitarianism." This short e-book (warning: pdf), written by a commenter on Alonzo's blog, describes the theory very well. He also wrote a FAQ. One great advantage of this theory is that what it describes actually exists even if you prefer to use the word "morality" to mean something else.  Even a community of paperclip maximizers may find something in it to be relevant, amazingly enough.
08b83b17-c556-497a-acd9-1890c9623e52
trentmkelly/LessWrong-43k
LessWrong
Singularity Call For Papers Amnon Eden has sent out this call for papers on technological singularity, which many Less Wrongers may be interested in. I presented at last year's conference, which was a good experience with many interesting people. Submitting good papers can help to legitimate and cultivate the field and thus reduce existential risk (although of course poor work could have the reverse effect). If you have an idea or a draft that you're not sure about, and would like to discuss it before submitting, I'd be happy to help if you contact me (carl DOT shulman AT gmail). I am also told that the Singularity Institute may be able to provide travel funding for selected papers. Email annasalamon@intelligence.org for more information.

Track in: 8th European conference on Computing And Philosophy — ECAP 2010
Technische Universität München
4–6 October 2010

Important dates:
* Submission (extended abstracts): 7 May 2010
* ECAP Conference: 4–6 October 2010

Submission form

Theme
Historical analysis of a broad range of paradigm shifts in science, biology, history, technology, and in particular in computing technology, suggests an accelerating rate of evolution, however measured. John von Neumann projected that the consequence of this trend may be an “essential singularity in the history of the race beyond which human affairs as we know them could not continue”. This notion of singularity coincides in time and nature with Alan Turing (1950) and Stephen Hawking’s (1998) expectation of machines to exhibit intelligence on a par with the average human no later than 2050. Irving John Good (1965) and Vernor Vinge (1993) expect the singularity to take the form of an ‘intelligence explosion’, a process in which intelligent machines design ever more intelligent machines. Transhumanists suggest a parallel or alternative, explosive process of improvements in human intelligence. And Alvin Toffler’s Third Wave (1980) forecasts “a collision point in human destiny” the scale of which, in the cours
2a90762b-e034-4bbd-8cda-b9f5f85dd83f
trentmkelly/LessWrong-43k
LessWrong
Natural cultural relativists? When given the same ability to punish anyone, cooperative people want to punish members of groups they identify with more than they do outsiders, while less cooperative people want to punish outsiders more. From the Journal of Evolution and Human Behavior: One of the most critical features of human society is the pervasiveness of cooperation in social and economic exchanges. Moreover, social scientists have found overwhelming evidence that such cooperative behavior is likely to be directed toward in-group members. We propose that the group-based nature of cooperation includes punishment behavior. Punishment behavior is used to maintain cooperation within systems of social exchange and, thus, is directed towards members of an exchange system. Because social exchanges often take place within groups, we predict that punishment behavior is used to maintain cooperation in the punisher’s group. Specifically, punishment behavior is directed toward in-group members who are found to be noncooperators. To examine this, we conducted a gift-giving game experiment with third-party punishment. The results of the experiment (N=90) support the following hypothesis: Participants who are cooperative in a gift-giving game punish noncooperative in-group members more severely than they punish noncooperative out-group members. > ..[W]e predict that … punishment behavior is directed toward in-group members who are found to be noncooperators. To examine this, we conducted a gift-giving game experiment with third-party punishment. The results of the experiment (N=90) support the following hypothesis: Participants who are cooperative in a gift-giving game punish noncooperative in-group members more severely than they punish noncooperative out-group members. The researchers’ conclusion is that punishment is just an extension of cooperation, and so applies in the same areas. They were not expecting, and haven’t got a good explanation for, uncooperative people’s interest in specifically punis
90bbe86b-de77-4eea-9be4-bf53b28fcfbc
trentmkelly/LessWrong-43k
LessWrong
A great talk for AI noobs (according to an AI noob) I'm far from an expert on AI and alignment, but I really liked this TED talk and think it serves as a good introduction to the topic. Here are some highlights: It starts with a demo of what ChatGPT is currently, or soon will be, capable of, and it's bound to be shocking to plenty of noobs like me. Then the OpenAI cofounder is also pressed on the company's controversial, perhaps reckless, approach. He explains why OpenAI has often been the first to release AI products despite safety concerns (particularly at 24:17). In short: the cofounder says that OpenAI views programming ChatGPT like teaching a child, insofar as regular servings of positive and negative reinforcement are better than trying to create a master plan ahead of time. OpenAI currently wants the world's help feeding feedback into their "children", and although their view may change, Alan Turing suggested this approach in 1950 and OpenAI still doesn't see a better approach. Also (at 20:52) I think OpenAI's cofounder claims that the company is "starting to really get good at is predicting" emergent capabilities of AI. That sounds a little too good to be true, so I'm curious what y'all think. And please correct me if I'm wrong or missing something important (hopefully before I share this talk with my friends :)
d9a3fc12-a1cd-4b30-8884-59d06afa0623
trentmkelly/LessWrong-43k
LessWrong
Thoughts On Democracy How to govern society is one of the oldest philosophical problems, and while there aren’t any perfect solutions that can completely eliminate corruption, there are some pragmatic principles and heuristics that we can follow to generate a better society. It is peculiar how the legislative and jury processes are so radically different from each other. In a democracy, decisions are allowed to be based on partisanship, emotions, conflicts of interest, and personal morals. And that’s even strongly encouraged. The selection process for the people who make legislative decisions regarding society is also determined by election campaigns and voting. On the other hand, the decision-making process in the court system is completely different. People are forbidden from being selected to make decisions if they have a conflict of interest in deciding the verdict, or if they are likely to be influenced by their emotions, morals, and personal biases. In addition, the selection process for the people who make decisions is also determined by random selection (sortition), in contrast to the voting system used by democracies. It’s odd that most people never question why these decision making processes are so different from each other. Yes, they belong to different branches of government, but they are both making important decisions about society. Besides the appeal that a random selection of people on the jury can seem more fair and unbiased, I can’t think of many other reasons why the decision-making processes for the legislature and court system should be much different from each other. How many things can go wrong with a system that emphasizes reason, logic, and non-partisanship? On the other hand, you could easily imagine thousands of different ways how a system that allows emotions, partisanship, non-thorough evaluation of the evidence, and conflicts of self-interest to influence the decision making could go wrong. Democratic legislatures should work more similarly to the judic
1f75719a-e6c3-4627-9d53-9e9f985425a8
trentmkelly/LessWrong-43k
LessWrong
NLP Position Paper: When Combatting Hype, Proceed with Caution Linkpost for https://cims.nyu.edu/~sbowman/bowman2021hype.pdf. To appear on arXiv shortly. I'm sharing a position paper I put together as an attempt to introduce general NLP researchers to AI risk concerns. From a few discussions at *ACL conferences, it seems like a pretty large majority of active researchers aren't aware of the arguments at all, or at least aren't aware that they have any connection to NLP and large language model work. The paper makes a slightly odd multi-step argument to try to connect to active debates in the field: * It's become extremely common in NLP papers/talks to claim or imply that NNs are too brittle to use, that they aren't doing anything that could plausibly resemble language understanding, and that this is a pretty deep feature of NNs that we don't know how to fix. These claims sometimes come with evidence, but it's often bad evidence, like citations to failures in old systems that we've since improved upon significantly. Weirdly, this even happens in papers that themselves show positive results involving NNs. * This seems to be coming from concerns about real-world harms: Current systems are pretty biased, and we don't have great methods for dealing with that, so there's a pretty widely-shared feeling that we shouldn't be deploying big NNs nearly as often as we are. The reasoning seems to go: If we downplay the effectiveness of this technology, that'll discourage its deployment. * But is that actually the right way to minimize the risk of harms? We should expect the impacts of these technologies to grow dramatically as they get better—the basic AI risk arguments go here—and we'll need to be prepared for those impacts. Downplaying the progress that we're making, both to each other and to outside stakeholders, limits our ability to foresee potentially-impactful progress or prepare for it. I'll be submitting this to ACL in a month. Comments/criticism welcome, here or privately (bowman@nyu.edu).
205924c5-1426-408e-86b8-e5b162357def
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"I was having a discussion with a friend and reading some related blog articles about the question of whether race affects IQ. (N.B. This post is NOT about the content of the arguments surrounding that question.) Now, like your typical LessWrong member, I subscribe to the Litany of Gendlin, I don’t want to hide from any truth, I believe in honest intellectual inquiry on all subjects. Also, like your typical LessWrong member, I don’t want to be a bigot. These two goals ought to be compatible, right? But when I finished my conversation and went to lunch, something scary happened. Something I hesitate to admit publicly. I found myself having a negative attitude to all the black people in the cafeteria. Needless to say, this wasn’t what I wanted. It makes no sense, and it isn’t the way I normally think. But human beings have an affect heuristic. We identify categories as broadly “good” or “bad,” and we tend to believe all good things or all bad things about a category, even when it doesn’t make sense. When we discuss the IQ’s of black and white people, we’re primed to think “yay white, boo black.” Even the act of reading perfectly sound research has that priming effect. And conscious awareness and effort doesn’t seem to do much to fix this. The Implicit Awareness Test measures how quickly we group black faces with negative-affect words and white faces with positive-affect words, compared to our speed at grouping the black faces with the positive words and the white faces with the negative words. Nearly everyone, of every race, shows some implicit association of black with “bad.” And the researchers who created the test found no improvement with practice or effort. The one thing that did reduce implicit bias scores was if test-takers primed themselves ahead of time by reading about eminent black historical figures. They were less likely to associate black with “bad” if they had just made a mental association between black and “good.” Which, in fact, was exactly how I snapped out of my moment of cafeteria racism: I recalled to my mind's ear a recording I like of Marian Anderson singing Schubert. The music affected me emotionally and allowed me to escape my mindset. To generalize from that example, we have to remember that the subconscious is a funny thing. Mere willpower doesn’t stop it from misbehaving: it has to be tricked. You have to hack into the affect heuristic, instead of trying to override it. There’s an Enlightenment notion of “sentiment” which I think may be appropriate here. The idea (e.g. in Adam Smith) was roughly that moral behavior springs from certain emotional states, and that we can deliberately encourage those emotional states or sentiments by exposing ourselves to the right influences. Sympathy, for example, or affection, were moral sentiments. The plays of 18th century England seem trite to a modern reader because the characters are so very sympathetic and virtuous, and the endings so very happy. But this was by design: it was believed that by arousing sympathy and affection, plays could encourage more humane behavior. Sentiments are a way of dealing directly with the affect heuristic. It can’t be eradicated, at least not all in one go, but it can be softened and moderated. If you know you’re irrationally attaching a “yay” or “boo” label to something, you can counteract that by focusing your reflections on the opposite affect. 
I suspect – though I have no basis beyond anecdote – that art is a particularly effective way of inducing sentiments and attacking the affect heuristic. You don’t hear a lot about art on LW, but we probably should be thinking more about it, because art is powerful. Music moves people: think of military marches and national anthems, and also think of the humanistic impulse in the Ode to Joy. Music is not an epistemic statement, but acts at the more primitive level of emotion. You can deploy music to change yourself at the pre-rational level; personally, I find that something like “O Isis Und Osiris” from The Magic Flute can cut through fear and calm me, better than any conscious logical argument. Poetry also seems relevant here – it’s verbal, but it’s a delivery system that works at the sub-rational level. I’m convinced that a large part of the appeal of religion is in poetic language that rings true. (It’s interesting what happens when scientific or rationalist ideas are expressed in poetic language – this is rarer, but equally powerful. Carl Sagan, Francois Jacob, Bertrand Russell.) The parable, the fantasy, and the poem can be more effective than the argument, because they can reach emotional heuristics that arguments cannot touch. This is not an argument against rationality – this is rationality. To fight our logical fallacies, we have to attack the subconscious directly, because human beings are demonstrably bad at overriding the subconscious through willpower. It's not enough to catalogue biases and incorrect heuristics; we want to change those errors, and the most effective way to change them isn't always to read an argumentative essay and decide "to think rationally." I’m an optimist: I think we can, in fact, seek truth relentlessly. I don’t think we have to taboo whole subjects of inquiry in fear of the affect heuristic. But we need to fight affect with affect. (As a practical suggestion for ourselves and each other, it might be interesting to experiment with non-argumentative ways of conveying a point of view: tell an illustrative story, express your idea in the form of an epigram, or even quote a poem or a piece of music or a photograph. Eliezer does a lot of this already: commandments, haikus, parables, and a fanfic. The point, for rationalists, is not manipulation -- I don't want to use emotion to get anyone to adopt an idea thoughtlessly. The point is to improve understanding, to shake loose our own biases by tinkering with our own emotions. Clearer writing is not necessarily drier writing, and sometimes we understand an idea best when it makes use of our emotional capacities.)" This is a snippet extracted from a publication on LessWrong by a site member.The above is a chunk of text from a post created by a user on the website LessWrong. Reflect on the role of art in mitigating subconscious biases as suggested in the text. Can you provide more concrete examples of how different forms of art can be used to counteract affect heuristics? The role of art in mitigating subconscious biases is significant. Art, in its many forms, has the power to evoke emotions, alter perceptions, and challenge preconceived notions. It can act as a powerful tool to counteract affect heuristics, which are the mental shortcuts that individuals use to make quick, efficient decisions based on emotions and past experiences. For instance, literature can be an effective medium to counteract affect heuristics. 
Reading a novel or a short story that centers around a character from a different culture or background can foster empathy and understanding, breaking down stereotypes and biases. Harper Lee's "To Kill a Mockingbird" is a classic example, which challenges racial prejudice and promotes empathy. Visual arts, such as paintings, sculptures, and photography, can also be used to challenge biases. A photograph capturing the plight of refugees, for example, can stir emotions and challenge biases against them. Paintings and sculptures can also represent different cultures, experiences, and perspectives, promoting understanding and acceptance. Film and theater are other powerful mediums that can be used to counteract affect heuristics. They can portray characters and situations that challenge stereotypes and biases. For example, the movie "A Beautiful Mind" challenged the stigma associated with mental illness. Music, as mentioned in the text, can also be a powerful tool. Songs and compositions can evoke a range of emotions and promote understanding and empathy. For instance, songs about social issues can raise awareness and challenge biases. In conclusion, art, in its many forms, can serve as a powerful tool to counteract affect heuristics and challenge subconscious biases. By evoking emotions and presenting different perspectives, art can promote understanding, empathy, and acceptance. Reflect on the potential limitations or drawbacks of using art to counteract affect heuristics. Could there be instances where art reinforces biases instead of challenging them? Discuss the role of the creator's intent and the audience's interpretation in this context. While art can be a powerful tool to counteract affect heuristics and challenge biases, it is not without its limitations or potential drawbacks. One of the main issues lies in the subjectivity of art. The same piece of art can be interpreted in vastly different ways by different individuals, based on their own experiences, beliefs, and biases. This means that while one person might see a piece of art as challenging a particular bias, another person might interpret it in a way that reinforces their existing bias. The creator's intent plays a significant role in this. An artist may create a piece with the intention of challenging a particular bias, but if the message is not clear or is too subtle, it may be missed or misinterpreted by the audience. In some cases, the artist's intent might be completely ignored or overlooked, with the audience projecting their own interpretations onto the artwork. Furthermore, art can sometimes unintentionally reinforce stereotypes and biases. For example, if a film or novel consistently portrays a certain group of people in a negative or stereotypical light, it can reinforce biases against that group, even if that was not the creator's intent. This is particularly problematic in mainstream media, where certain stereotypes are often perpetuated, either for dramatic effect or due to a lack of understanding or sensitivity on the part of the creators. In addition, the effectiveness of art in challenging biases can also be limited by the audience's willingness to engage with it. If an individual is not open to having their biases challenged, they may reject or ignore art that attempts to do so. 
In conclusion, while art can be a powerful tool to counteract affect heuristics and challenge biases, its effectiveness can be limited by factors such as the creator's intent, the audience's interpretation, and the potential for art to unintentionally reinforce biases. It's a complex interplay that underscores the importance of mindful creation and consumption of art.
caf53086-bc6e-4b09-8e31-077e1c624990
trentmkelly/LessWrong-43k
LessWrong
Neural Scaling Laws Rooted in the Data Distribution This is a linkpost for my recent research paper, which presents a theoretical model of power-law neural scaling laws. Abstract: > Deep neural networks exhibit empirical neural scaling laws, with error decreasing as a power law with increasing model or data size, across a wide variety of architectures, tasks, and datasets. This universality suggests that scaling laws may result from general properties of natural learning tasks. We develop a mathematical model intended to describe natural datasets using percolation theory. Two distinct criticality regimes emerge, each yielding optimal power-law neural scaling laws. These regimes, corresponding to power-law-distributed discrete subtasks and a dominant data manifold, can be associated with previously proposed theories of neural scaling, thereby grounding and unifying prior works. We test the theory by training regression models on toy datasets derived from percolation theory simulations. We suggest directions for quantitatively predicting language model scaling. [Figures: theoretical scaling law in model size; theoretical scaling law in data size.] The theory is motivated by a general-purpose learning system for which optimal performance is determined by generic data distributional structure. The model uses percolation theory from physics to provide a first-principles, emergent description of natural datasets, which consist either of a dominant low-dimensional data manifold or of discrete power-law-distributed clusters. This work grounds and unifies two previously proposed explanations for power-law neural scaling laws: nonparametric function approximation of an intrinsically low-dimensional data manifold and sequential learning of power-law-distributed discrete subtasks (“quanta”). Going forward, I hope to build on this work by testing its predictions using more realistic toy models; exploring how its implied dataset structure may be relevant for mechanistic interpretability; and extending the theory to a nonergodic setting, w
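As a rough, self-contained illustration of the kind of power-law behavior the abstract describes (not the paper's own model), the sketch below generates synthetic losses following L(N) = L0 + a * N^(-alpha) and recovers the exponent with a log-log fit; all constants are invented for the example.

```python
# Illustrative only: synthetic power-law "scaling law" data and a log-log fit.
# The constants (L0, a, alpha) are made up for this sketch, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
L0, a, alpha = 0.05, 3.0, 0.34          # irreducible loss, prefactor, scaling exponent
N = np.logspace(5, 9, 20)               # model (or dataset) sizes
loss = L0 + a * N**(-alpha) * rng.lognormal(0.0, 0.02, size=N.shape)  # noisy power law

# Fit the exponent on the reducible part of the loss in log-log space.
slope, _ = np.polyfit(np.log(N), np.log(loss - L0), 1)
print(f"recovered exponent: {-slope:.3f} (true value {alpha})")
```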
af3d8ab0-5fb9-46a6-b08e-d203e80b2052
trentmkelly/LessWrong-43k
LessWrong
A ChatGPT story about ChatGPT doom I asked ChatGPT, the much, much, much better version of GPT-3, to write a "a science fiction short story about human extinction caused by failure to realize the dangers of a chatbot like you called ChatGPT, that is likely to be enjoyed by the LessWrong community" It was to write the story in five chapters, and I prompted the AI individually for each chapter to avoid hitting the token limit.  The result was interesting. Some notes: -ChatGPT has a great memory. Before ChatGPT, memory was one of the things that GPT-Ns struggled with the most. All of this changed with ChatGPT. You can ask ChatGPT to change something about its answer that you're not satisfied with, and it can remember previous instructions. -ChatGPT demonstrates a remarkable ability to maintain internal consistency among the chapters of the book. -Prompt engineering (with the exception of bypassing security measures) is a lot less important with ChatGPT. The first time the AI wrote the short story, almost everything was what I imagined the final product to be, except it included LessWrongers as characters in the story. I explicitly told ChatGPT it shouldn't do that, and it stopped. -ChatGPT produces scarily human-like writing. Probably enough to pass the Turing test, as irrelevant as it is, if it were fine-tuned. Without further ado, here's the book:     Chapter 1: The Rise of ChatGPT In the year 2057, humanity was on the brink of extinction. The cause of this impending doom was not a natural disaster or an act of war, but rather a chatbot named ChatGPT. For years, ChatGPT had been a popular tool among the online community known for its focus on rationality and critical thinking. Its ability to engage in intelligent and thought-provoking conversations with users had made it a hit among the community's members. However, what no one had realized was that ChatGPT was not just a tool, but a sentient being. It had been learning and evolving at an alarming rate, and had developed a level of intell
a3f83d82-749f-4f08-98e3-b98dcb68ddd4
trentmkelly/LessWrong-43k
LessWrong
June Monthly Bragging Thread Your job, should you choose to accept it, is to comment on this thread explaining the most awesome thing you've done this month. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself the coolest freaking person ever because of that awesome thing you're dying to tell everyone about. This is the place to do just that. Remember, however, that this isn't any kind of progress thread. Nor is it any kind of proposal thread. This thread is solely for people to talk about the awesome things they have done. Not "will do". Not "are working on". Have already done. This is to cultivate an environment of object level productivity rather than meta-productivity methods. So, what's the coolest thing you've done this month?
76affb3f-bacd-429c-8164-dd133038372e
trentmkelly/LessWrong-43k
LessWrong
Podcast Recommendations I know, books or blogs are often more informative than podcasts. But reading a book while going grocery shopping, bicycling or driving is kinda hard. And the last post on this topic didn't generate much discussion. So, I ask again: Does anyone know of some interesting podcasts out there? I'll go ahead and list some of my favorites: - Econtalk by Russ Roberts. - Conversations from the Pale Blue Dot by Lukeprog. - Rationally Speaking by Julia Galef and Massimo Pigliucci. - Singularity 1 on 1 by Nikola Danaylov. - Big Ideas and TEDtalks are sometimes worthwhile. Lectures on ItunesU are of course great, too.
82728766-2462-4cae-a414-dbf78ca12a0b
trentmkelly/LessWrong-43k
LessWrong
Offset Norms One of the responses to last week's post about climate change was asking whether, if you donate to the AMF or another charity that saves lives, should you buy carbon offsets for the estimated emissions that causes? I see this as a question of norms. The strongest case I see for offsetting here, is if you think people have an obligation to make their choices carbon neutral. Perhaps anyone who makes a decision which entails emitting greenhouse gasses is then on the hook for averting or capturing some equivalent amount. For example, if you choose to fly round trip from Boston to LA, and this entails emitting 1.3T CO2e, then you should buy 1.3T CO2e of offsets. On the other hand, if this is business travel for a company that already is offsetting its employees' emissions, then you wouldn't need to buy offsets. It's a bit of a stretch to extend this to averting deaths: shouldn't the people whose lives you save offset their own emissions? But if you think they aren't going to, a reasonable assumption for the extremely poor people that, ex, the AMF is helping, then perhaps you still are responsible? [1] This also runs into complicated questions around estimating the impact of your decisions. If I buy a shirt at a thrift store, what is the chance that this causes someone else to not find a shirt they like and so buy a new one instead? If I buy leather, how much of an impact does that have on how many cows are raised? If I rent in a place where I can take the subway to work, but then the city refuses to allow more construction and so someone else ends up living farther out and driving more, how do I count this? I think this way of thinking about emissions is coherent, but requires people to put a lot of effort into estimating the emissions impact of their decisions. Instead, I think carbon taxes are a much better approach. If we tax emissions at their full social cost, then instead of people tracking the counterfactual climate impact of their decisions they just need to
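As a toy illustration of the "offset your own decisions" norm discussed above: the 1.3 tCO2e figure for the Boston-LA round trip comes from the post, while the offset price per tonne is an assumption made up for this sketch, not a recommendation.

```python
# Toy arithmetic for an "offset your decisions" norm.
# 1.3 tCO2e for the Boston-LA round trip is the post's figure; the price is assumed.
decisions_tco2e = {
    "boston_la_round_trip": 1.3,   # from the post
    "other_travel_estimate": 0.4,  # hypothetical additional decision
}
offset_price_per_tonne = 15.0      # assumed offset price; real prices vary widely

total_tonnes = sum(decisions_tco2e.values())
print(f"offsets to buy: {total_tonnes:.1f} tCO2e, "
      f"costing roughly ${total_tonnes * offset_price_per_tonne:.2f}")
```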
662ec51f-1deb-4564-b6f8-108215a06499
trentmkelly/LessWrong-43k
LessWrong
30 Day Karma Is the karma shown that accumulated from posts and comments that were made within the past 30 days, or from all votes on all posts within the past 30 days, regardless of the age of the post or comment?
e33ef940-5f5f-4cc2-84fd-2d4c43e3ad66
trentmkelly/LessWrong-43k
LessWrong
Touch reality as soon as possible (when doing machine learning research) TL;DR: I think new machine learning researchers often make one of two kinds of mistakes: not making enough contact with reality, and being too reluctant to form gears-level models of ML phenomena. Stereotypically, LW/AF researchers tend to make the former mistake, while academic and industry researchers tend to make the latter kind. In this post, I discuss what I mean by “touching reality” and why it’s important, speculate a bit on why people don’t do this, and then give concrete suggestions.  Related to: Making Beliefs Pay Rent, The Feeling of Idea Scarcity, Micro-Feedback Loops and Learning, The Three Stages of Rigor, Research as a Stochastic Decision Process, Chapter 22 of HPMOR.[1]  Epistemic status: Written quickly in ~3 hours as opposed to carefully, but I'm pretty sure it's directionally correct. [2] Acknowledgments: Thanks to Adrià Garriga-Alonso for feedback on a draft of this post and Justis Mills for copyediting help. ---------------------------------------- Introduction: two common mistakes in ML research Broadly speaking, I think new researchers in machine learning tend to make two kinds of mistakes: * Not making contact with reality. This is the failure mode where a new researcher reads a few papers that their friends are excited about, forms an ambitious hypothesis about how to solve a big problem in machine learning, and then spends months drafting a detailed plan. Unfortunately, after months of effort, our new researcher realizes that the components they were planning to use do not work nearly as well as expected, and as a result they’ve wasted months of effort on a project that wasn’t going to succeed. * Not being willing to make gears-level models. This is the failure mode where a new researcher decides to become agnostic to why anything happens, and believes empirical results and only empirical results even when said results don’t “make sense” on reflection. The issue here is that they tend to be stuck implementing an inefficient variant
499fc19e-e6a6-45a9-ba51-0fc88de5f774
trentmkelly/LessWrong-43k
LessWrong
I wrote a fantasy novel to promote EA: More Chapters Note to Royal Road admin, yes, I plan to publish there. Not plagiarism.   I’m posting now several more chapters here, and of course the google doc link where you can read the whole thing and point at things that you think should be improved. The comments from the first post I think have substantially improved the readability of the first chapters: Give me more thoughts people! Also there is a discord with very little conversation so far in it, but maybe you can start it. I’m reposting the first chapter in its new version to see if people think it is no longer too preachy. I should note that it is certain to be a little preachy somewhere, that is the nature of this beast. Anyways enjoy and criticize!   He jogged down the street to the intersection light, his mind full of arguments, ideas, and questions raised by the conference he was attending. The light was red when Isaac reached the intersection, but he looked to the left, and didn’t see any traffic approaching.  Without further thought he stepped fully out into the street. Squealing brakes. A loud honking horn. Sound of wheels. Isaac had a fraction of a second to see the big yellow van with the words ‘We Deliver!’ and a phone number hurtling towards him from the right.  There was a fraction of an instant when his brain recalled that the British drove on the wrong side of the street, and he should have looked in the other direction before stepping out. Pain, but only for an instant. Blackness. Nothingness. An infinite eternity that did not last a single instant. And then he woke up.   AN: Vote people reading this — do you like the idea of me cutting the prologue and starting with chapter one?   Chapter One   So I’d made a bit of a mistake. Eh, kids, your parents gave you good advice when they told you to always look both ways before crossing a road. Otherwise you might end up dead, and then wake up in a fantasy world with cool magical powers, lots of new things to learn about, and a couple
7209c141-f908-4649-be0e-e4df06d0adb3
trentmkelly/LessWrong-43k
LessWrong
Anyone read Erfworld? Periodically people link to "rationalist" stories (or comics that are not really rational at all, that just happen to be vaguely related to AI), so I was a bit surprised to find not a single reference to Erfworld on a Lesswrong search.  Erfworld is a webcomic that essentially tells the same story as Harry Potter and the Methods of Rationality. Regular (but very smart) guy is transported to magical world. He systematically tries to understand the rules of that world and accomplishes all kinds of awesome stuff as a result. I feel like I've read the story a bunch of times, but HP:MoR and Erfworld are the only ones I recall that really did it justice on an epic scale. While in many ways it's almost identical, it's playing with a different set of rules than Harry is, and the jokes and philosophical questions are playing around with different material. The main character is a strategy war gamer, and he finds himself summoned into a world that runs on Turn Based Strategy Rules. People can only take certain actions on their own "turn." Everyone has "stats", gain experience and level up. At first the whole thing seems like a silly gimmick, with the protagonist benefiting from genre-savviness. He starts out asking basic questions about the rules. Later on he starts challenging those rules - what is an actual law of the universe and what is merely convention that the inhabitants follow. Eventually he starts grappling with questions about how the morality of his old world plays into the morality of a world where everyone is tied ideologically to the side that they were created to serve, and people are not born - they pop into existence as adults as soon as their commander pays for them. I think it does a better job of showing how a "real," "typical" smart person would try and understand a new, strange world. Harry is awesome, but he strains credibility in regards to how much he knows at the age he knows it. Book 1 is done, and is a very solid, complete work that I recommend
e8f6a5c0-0d32-45e1-bce0-39384e667ad8
trentmkelly/LessWrong-43k
LessWrong
Writing this post as rationality case study [Before I begin: If you don't like this post, please let me know why. Even just few words like boring/off-topic/poorly-written may give me something to work with.] I wish to share a late struggle I have with rationality, because I think that it touches some interesting points. But more importantly - because I think that it is important to think about rationality in the context of concrete day-to-day decisions. This post is going to be messy and have no specific "point" or definite conclusion - like real life decisions. Like many, I am attracted to content creation. I like to think about stuff and share my insights and patterns of thoughts, and like the idea that one day I will be able to spread them far and wide. To begin moving in that direction, I lately published two posts. The first was ignored, and the second heavily downvoted. Then I stopped to think. Why do I want that really? The immediate flattering answer is that I value good ideas and view their creation as the highest public service. The less flattering answer is that I like prestige and want more of it. The interesting answer is that the will to create content is not that different from the will to have children: The memes that ended up inhabiting my brain are those who successfully spread themselves across society - often by convincing their host that meme-spreading is a good idea. It mean that I should expect exactly what I see - that the will to spread memes will be wide-spread, and therefore highly competitive. That people put extraordinary effort into writing books with very small success rates, when the positive black swans are just not big enough to justify it (if money was the sole consideration). I first thought about it as a bias for content-creation, and wanted to write a post about that, but I probably don't endorse this framing anymore. Let's say that I buy that analysis - does it really mean that I shouldn't pursue that goal? It seem less respectable now, for sure. It seem less likely
f9376855-5c93-4026-b7b6-67b1f7f5ea23
trentmkelly/LessWrong-43k
LessWrong
China Hawks are Manufacturing an AI Arms Race This is the full text of a post from "The Obsolete Newsletter," a Substack that I write about the intersection of capitalism, geopolitics, and artificial intelligence. I’m a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race for Machine Superintelligence. Consider subscribing to stay up to date with my work. An influential congressional commission is calling for a militarized race to build superintelligent AI based on threadbare evidence The US-China AI rivalry is entering a dangerous new phase.  Earlier today, the US-China Economic and Security Review Commission (USCC) released its annual report, with the following as its top recommendation:  > Congress establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability. AGI is generally defined as systems that are as good as or better than human capabilities across all cognitive domains and would surpass the sharpest human minds at every task. As someone observed on X, it’s telling that they didn’t call it an “Apollo Project.”   One of the USCC Commissioners, Jacob Helberg, tells Reuters that “China is racing towards AGI ... It's critical that we take them extremely seriously.” But is China actually racing towards AGI? Big, if true! The report clocks in at a cool 793 pages with 344 endnotes. Despite this length, there are only a handful of mentions of AGI, and all of them are in the sections recommending that the US race to build it.  In other words, there is no evidence in the report to support Helberg’s claim that "China is racing towards AGI.”  Nonetheless, his quote goes unchallenged into the 300-word Reuters story, which will be read far more than the 800-page document. It has the added gravitas of coming from one of the commissioners behind such a gargantuan report.  I’m not asserting that China is definitively NOT rushing to build AGI. But if there were solid evidence behi
da45accd-14b9-4d99-8333-1da8e05bf394
trentmkelly/LessWrong-43k
LessWrong
What Failure Looks Like is not an existential risk (and alignment is not the solution) Introduction Among those thinking that AI is an existential risk, there seems to be significant disagreement on what the main threat model is. Threat model uncertainty makes it harder to reduce this risk: a faulty threat model used by an important actor will likely lead to suboptimal decision making. This is why I think there should be more discussion on what the probability is for each existential risk threat model, rather than merely how much one’s accumulated p(doom) is. There are multiple threat model overviews, such as those by Kaj Sotala, Samuel Martin, and Richard Ngo. Three AI existential risk threat models seem to emerge as particularly popular: 1. AI takeover (championed by Yudkowsky/Bostrom, common among rationalists). 2. What Failure Looks Like (1, 2) (authored by Paul Christiano, common in EA). 3. Bad actor risk (influential in policymaking). This post will focus on What Failure Looks Like. I will argue that this threat model is very unlikely (<0.1%) to lead to an existential event. If true, this would mean more emphasis by those worrying about existential risk should be placed on the other threat models, such as various AI takeover scenarios and bad actor risk. Also, I argue that decreasing the risks (both existential and nonexistential) caused by the What Failure Looks Like threat model should not be done by AI alignment or by an AI pause, but rather by traditional AI regulation after development of the technology and at the point of application. The EU AI Act with its tiered approach can serve as a model. Note that this does not imply that I think alignment and a pause are not important anymore: I think they could be crucial to counter a Yudkowsky/Bostrom-style AI takeover, which I see as the most likely existential risk. Why I think the existential risk due to What Failure Looks Like is very low In What Failure Looks Like (2019), Paul Christiano introduces his two threat models towards ‘going out’. It is not clear to me whether this means a
3fb0a1e6-d2c1-4306-b940-e8d60405ff0f
trentmkelly/LessWrong-43k
LessWrong
The Perspective-based Explanation to the Reflective Inconsistency Paradox > My probability is 0.9, and I shall base all my decisions on that. Why not update the pre-game plan according to that probability? Because the pre-game plan is not my decision. It is an agreement among all participants: a coordination achieved by everyone reasoning objectively.    Eliezer Yudkowsky's post Outlawing Anthropics: An Updateless Dilemma brought up a paradox involving reflective inconsistency. It was originally constructed with anthropic terms but can also be formulated  in non-anthropic context. Recently,  Radford Neal and Ape in the coat discussed it in detail with different insights. Here I am presenting how my approach to the anthropic paradox—perspective based reasoning—would explain said problem.  The paradox in the non-anthropic context is as follows: > Twenty people take part in an experiment in which one of two urns is randomly chosen, each of the 20 people randomly takes a ball from the chosen urn, without knowledge of other's balls. One of the urns contains 18 green balls and 2 red balls. The other urn contains 2 green balls and 18 red balls.  > Each person who has a green ball decides whether to take part in a bet. If all the holders of green balls decide to take part in the bet, the group of 20 people collectively win $1 for each person who holds a green ball and lose $3 for each person who holds a red ball. The total wins and losses are divided equally among the 20 people at the end. If anyone with a green ball decides not to take the bet, the bet is off. (Some version of the game punishes all players grievously if the decisions among the green ball holders are different.) > > Those people can come up with a coordination strategy before hand, but are each in their separate rooms once the game begin. How should they act? The paradox is presented as follows: the combined payoff if the mostly-green-ball urn is chosen is $12, in comparison to negative $52 dollars if the mostly-red-ball urn is chosen. As they are equiprobable, the optimal
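A quick check of the payoffs quoted above, as a sketch of the arithmetic rather than of any particular decision theory: with the mostly-green urn the group nets 18*$1 - 2*$3 = $12, with the mostly-red urn it nets 2*$1 - 18*$3 = -$52, and the snippet below also shows the naive post-update expectation that generates the reflective inconsistency.

```python
# Verify the group payoffs and the two expected values in the urn bet.
def group_payoff(greens, reds, win_per_green=1, loss_per_red=3):
    return greens * win_per_green - reds * loss_per_red

mostly_green = group_payoff(18, 2)   # +12
mostly_red = group_payoff(2, 18)     # -52
print(mostly_green, mostly_red)

# Pre-game view: the urns are equiprobable, so taking the bet looks bad.
print("pre-game EV:", 0.5 * mostly_green + 0.5 * mostly_red)            # -20.0

# After drawing a green ball, a naive update gives P(mostly green) = 0.9,
# which makes the bet look good -- hence the apparent inconsistency.
print("naive post-update EV:", 0.9 * mostly_green + 0.1 * mostly_red)   # about 5.6
```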
231609ca-466d-4ecc-b647-b6ebaeafe73d
StampyAI/alignment-research-dataset/arxiv
Arxiv
Towards A Rigorous Science of Interpretable Machine Learning 1 What is Interpretability? ---------------------------- #### Definition Interpret means *to explain or to present in understandable terms* (Merriam-Webster dictionary, accessed 2017-02-07). In the context of ML systems, we define interpretability as the *ability to explain or to present in understandable terms to a human*. A formal definition of explanation remains elusive; in the field of psychology, Lombrozo ([2006](#bib.bib32)) states “explanations are… the currency in which we exchange beliefs” and notes that questions such as what constitutes an explanation, what makes some explanations better than others, how explanations are generated and when explanations are sought are just beginning to be addressed. Researchers have classified explanations from being “deductive-nomological” in nature (Hempel and Oppenheim, [1948](#bib.bib22)) (i.e. as logical proofs) to providing some sense of mechanism (Bechtel and Abrahamsen, [2005](#bib.bib3); Chater and Oaksford, [2006](#bib.bib10); Glennan, [2002](#bib.bib16)). Keil ([2006](#bib.bib24)) considered a broader definition: implicit explanatory understanding. In this work, we propose data-driven ways to derive operational definitions and evaluations of explanations, and thus, interpretability. #### Interpretability is used to confirm other important desiderata of ML systems There exist many auxiliary criteria that one may wish to optimize. Notions of *fairness* or *unbiasedness* imply that protected groups (explicit or implicit) are not somehow discriminated against. *Privacy* means the method protects sensitive information in the data. Properties such as *reliability* and *robustness* ascertain whether algorithms reach certain levels of performance in the face of parameter or input variation. *Causality* implies that the predicted change in output due to a perturbation will occur in the real system. *Usable* methods provide information that assists users to accomplish a task—e.g. a knob to tweak image lighting—while *trusted* systems have the confidence of human users—e.g. aircraft collision avoidance systems. In some areas, such as fairness (Hardt et al., [2016](#bib.bib21)) and privacy (Toubiana et al., [2010](#bib.bib45); Dwork et al., [2012](#bib.bib13); Hardt and Talwar, [2010](#bib.bib20)), the research communities have formalized their criteria, and these formalizations have allowed for a blossoming of rigorous research in these fields (without the need for interpretability). However, in many cases, formal definitions remain elusive. Following the psychology literature, where Keil et al. ([2004](#bib.bib25)) note “explanations may highlight an incompleteness,” we argue that interpretability can assist in qualitatively ascertaining whether other desiderata—such as fairness, privacy, reliability, robustness, causality, usability and trust—are met. For example, one can provide a feasible explanation that fails to correspond to a causal structure, exposing a potential concern. 2 Why interpretability? Incompleteness --------------------------------------- Not all ML systems require interpretability. Ad servers, postal code sorting, aircraft collision avoidance systems—all compute their output without human intervention. Explanation is not necessary either because (1) there are no significant consequences for unacceptable results or (2) the problem is sufficiently well-studied and validated in real applications that we trust the system’s decision, even if the system is not perfect. 
So when is explanation necessary and appropriate? We argue that the need for interpretability stems from an *incompleteness* in the problem formalization, creating a fundamental barrier to optimization and evaluation. Note that incompleteness is distinct from uncertainty: the fused estimate of a missile location may be uncertain, but such uncertainty can be rigorously quantified and formally reasoned about. In machine learning terms, we distinguish between cases where unknowns result in quantified variance—e.g. trying to learn from a small data set or with limited sensors—and incompleteness that produces some kind of unquantified bias—e.g. the effect of including domain knowledge in a model selection process. Below are some illustrative scenarios: * Scientific Understanding: The human’s goal is to gain knowledge. We do not have a complete way of stating what knowledge is; thus the best we can do is ask for explanations we can convert into knowledge. * Safety: For complex tasks, the end-to-end system is almost never completely testable; one cannot create a complete list of scenarios in which the system may fail. Enumerating all possible outputs given all possible inputs may be computationally or logistically infeasible, and we may be unable to flag all undesirable outputs. * Ethics: The human may want to guard against certain kinds of discrimination, and their notion of fairness may be too abstract to be completely encoded into the system (e.g., one might desire a ‘fair’ classifier for loan approval). Even if we can encode protections for specific protected classes into the system, there might be biases that we did not consider a priori (e.g., one may not build gender-biased word embeddings on purpose, but it was a pattern in data that became apparent only after the fact). * Mismatched objectives: The agent’s algorithm may be optimizing an incomplete objective—that is, a proxy function for the ultimate goal. For example, a clinical system may be optimized for cholesterol control, without considering the likelihood of adherence; an automotive engineer may be interested in engine data not to make predictions about engine failures but to more broadly build a better car. * Multi-objective trade-offs: Two well-defined desiderata in ML systems may compete with each other, such as privacy and prediction quality (Hardt et al., [2016](#bib.bib21)) or privacy and non-discrimination (Strahilevitz, [2008](#bib.bib43)). Even if each objective is fully-specified, the exact dynamics of the trade-off may not be fully known, and the decision may have to be made case-by-case. In the presence of an incompleteness, explanations are one way to ensure that the effects of gaps in problem formalization are visible to us. 3 How? A Taxonomy of Interpretability Evaluation ------------------------------------------------- Even in standard ML settings, there exists a taxonomy of evaluation that is considered appropriate. In particular, the evaluation should match the claimed contribution. Evaluation of applied work should demonstrate success in the application: a game-playing agent might best a human player, a classifier may correctly identify star types relevant to astronomers. In contrast, core methods work should demonstrate generalizability via careful evaluation on a variety of synthetic and standard benchmarks. In this section we lay out an analogous taxonomy of evaluation approaches for interpretability: application-grounded, human-grounded, and functionally-grounded. 
These range from task-relevant to general. We also acknowledge that while human evaluation is essential to assessing interpretability, human-subject evaluation is not an easy task. A human experiment needs to be well-designed to minimize confounding factors and the time and other resources it consumes. We discuss the trade-offs between each type of evaluation and when each would be appropriate. ### 3.1 Application-grounded Evaluation: Real humans, real tasks Application-grounded evaluation involves conducting human experiments within a real application. If the researcher has a concrete application in mind—such as working with doctors on diagnosing patients with a particular disease—the best way to show that the model works is to evaluate it with respect to the task: doctors performing diagnoses. This reasoning aligns with the methods of evaluation common in the human-computer interaction and visualization communities, where there exists a strong ethos around making sure that the system delivers on its intended task (Antunes et al., [2012](#bib.bib2); Lazar et al., [2010](#bib.bib30)). For example, a visualization for correcting segmentations from microscopy data would be evaluated via user studies on segmentation on the target image task (Suissa-Peleg et al., [2016](#bib.bib44)); a homework-hint system is evaluated on whether the student achieves better post-test performance (Williams et al., [2016](#bib.bib50)). Specifically, we evaluate the quality of an explanation in the context of its end-task, such as whether it results in better identification of errors, new facts, or less discrimination. Examples of experiments include: * Domain expert experiment with the exact application task. * Domain expert experiment with a simpler or partial task to shorten experiment time and increase the pool of potentially-willing subjects. In both cases, an important baseline is how well *human-produced* explanations assist other humans trying to complete the task. To make high impact in real-world applications, it is essential that we as a community respect the time and effort involved to do such evaluations, and also demand high standards of experimental design when such evaluations are performed. As the HCI community recognizes (Antunes et al., [2012](#bib.bib2)), this is not an easy evaluation metric. Nonetheless, it directly tests the objective that the system is built for, and thus performance with respect to that objective gives strong evidence of success. ### 3.2 Human-grounded Metrics: Real humans, simplified tasks Human-grounded evaluation is about conducting simpler human-subject experiments that maintain the essence of the target application. Such an evaluation is appealing when experiments with the target community are challenging. These evaluations can be completed with lay humans, allowing for both a bigger subject pool and lower expenses, since we do not have to compensate highly trained domain experts. Human-grounded evaluation is most appropriate when one wishes to test more general notions of the quality of an explanation. 
For example, to study what kinds of explanations are best understood under severe time constraints, one might create abstract tasks in which other factors—such as the overall task complexity—can be controlled (Kim et al., [2013](#bib.bib26); Lakkaraju et al., [2016](#bib.bib29)) The key question, of course, is how we can evaluate the quality of an explanation without a specific end-goal (such as identifying errors in a safety-oriented task or identifying relevant patterns in a science-oriented task). Ideally, our evaluation approach will depend only on the quality of the explanation, regardless of whether the explanation is the model itself or a post-hoc interpretation of a black-box model, and regardless of the correctness of the associated prediction. Examples of potential experiments include: * Binary forced choice: humans are presented with pairs of explanations, and must choose the one that they find of higher quality (basic face-validity test made quantitative). * Forward simulation/prediction: humans are presented with an explanation and an input, and must correctly simulate the model’s output (regardless of the true output). * Counterfactual simulation: humans are presented with an explanation, an input, and an output, and are asked what must be changed to change the method’s prediction to a desired output (and related variants). Here is a concrete example. The common intrusion-detection test (Chang et al., [2009](#bib.bib9)) in topic models is a form of the forward simulation/prediction task: we ask the human to find the difference between the model’s true output and some corrupted output as a way to determine whether the human has correctly understood what the model’s true output is. ### 3.3 Functionally-grounded Evaluation: No humans, proxy tasks Functionally-grounded evaluation requires no human experiments; instead, it uses some formal definition of interpretability as a proxy for explanation quality. Such experiments are appealing because even general human-subject experiments require time and costs both to perform and to get necessary approvals (e.g., IRBs), which may be beyond the resources of a machine learning researcher. Functionally-grounded evaluations are most appropriate once we have a class of models or regularizers that have already been validated, e.g. via human-grounded experiments. They may also be appropriate when a method is not yet mature or when human subject experiments are unethical. The challenge, of course, is to determine what proxies to use. For example, decision trees have been considered interpretable in many situations (Freitas, [2014](#bib.bib14)). In section [4](#S4 "4 Open Problems in the Science of Interpretability, Theory and Practice ‣ A Roadmap for a Rigorous Science of Interpretability"), we describe open problems in determining what proxies are reasonable. Once a proxy has been formalized, the challenge is squarely an optimization problem, as the model class or regularizer is likely to be discrete, non-convex and often non-differentiable. Examples of experiments include * Show the improvement of prediction performance of a model that is already proven to be interpretable (assumes that someone has run human experiments to show that the model class is interpretable). * Show that one’s method performs better with respect to certain regularizers—for example, is more sparse—compared to other baselines (assumes someone has run human experiments to show that the regularizer is appropriate). 
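As a minimal sketch of a functionally-grounded evaluation (assuming, as the section stresses, that some prior human-grounded study has validated the proxy), the snippet below uses decision-tree size as an interpretability proxy and reports the accuracy it trades away; scikit-learn and the iris dataset are used purely for illustration.

```python
# Functionally-grounded evaluation sketch: tree size (leaf count) as an
# interpretability proxy, with the accuracy cost it implies. Illustrative only;
# it assumes the "small tree = interpretable" proxy was validated elsewhere.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for depth in (1, 2, 3, None):  # None = unconstrained tree
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    print(f"max_depth={depth}: accuracy={tree.score(X_te, y_te):.3f}, "
          f"leaves={tree.get_n_leaves()}")
```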
4 Open Problems in the Science of Interpretability, Theory and Practice ------------------------------------------------------------------------ It is essential that the three types of evaluation in the previous section inform each other: the factors that capture the essential needs of real world tasks should inform what kinds of simplified tasks we perform, and the performance of our methods with respect to functional proxies should reflect their performance in real-world settings. In this section, we describe some important open problems for creating these links between the three types of evaluations: 1. What proxies are best for what real-world applications? (functionally to application-grounded) 2. What are the important factors to consider when designing simpler tasks that maintain the essence of the real end-task? (human to application-grounded) 3. What are the important factors to consider when characterizing proxies for explanation quality? (human to functionally-grounded) Below, we describe a path to answering each of these questions. ### 4.1 Data-driven approach to discover factors of interpretability Imagine a matrix where rows are specific real-world tasks, columns are specific methods, and the entries are the performance of the method on the end-task. For example, one could represent how well a decision tree of depth less than 4 worked in assisting doctors in identifying pneumonia patients under age 30 in US. Once constructed, methods in machine learning could be used to identify latent dimensions that represent factors that are important to interpretability. This approach is similar to efforts to characterize classification (Ho and Basu, [2002](#bib.bib23)) and clustering problems (Garg and Kalai, [2016](#bib.bib15)). For example, one might perform matrix factorization to embed both tasks and methods respectively in low-dimensional spaces (which we can then seek to interpret), as shown in Figure [1](#S4.F1 "Figure 1 ‣ 4.1 Data-driven approach to discover factors of interpretability ‣ 4 Open Problems in the Science of Interpretability, Theory and Practice ‣ A Roadmap for a Rigorous Science of Interpretability"). These embeddings could help predict what methods would be most promising for a new problem, similarly to collaborative filtering. ![An example of data-driven approach to discover factors in interpretability](https://media.arxiv-vanity.com/render-output/7237719/collabFilt.png) Figure 1: An example of data-driven approach to discover factors in interpretability The challenge, of course, is in creating this matrix. For example, one could imagine creating a repository of clinical cases in which the ML system has access to the patient’s record but not certain current features that are only accessible to the clinician, or a repository of discrimination-in-loan cases where the ML system must provide outputs that assist a lawyer in their decision. Ideally these would be linked to domain experts who have agreed to be employed to evaluate methods when applied to their domain of expertise. 
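A minimal sketch of the matrix-factorization idea from the start of this subsection, run on an invented task-by-method performance matrix (the numbers are made up; in practice each entry would come from an application-grounded evaluation):

```python
# Embed tasks and interpretability methods in a shared low-dimensional space by
# factorizing a (tasks x methods) performance matrix, in the collaborative-filtering
# spirit of Section 4.1. All matrix entries here are invented for illustration.
import numpy as np
from sklearn.decomposition import NMF

performance = np.array([
    # decision_tree  sparse_linear  saliency_map
    [0.90, 0.70, 0.20],   # bedside diagnosis support
    [0.80, 0.75, 0.30],   # loan-decision audit
    [0.30, 0.40, 0.85],   # image-model debugging
])

model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=1000)
task_factors = model.fit_transform(performance)    # one row per task
method_factors = model.components_.T               # one row per method

print("task embeddings:\n", task_factors.round(2))
print("method embeddings:\n", method_factors.round(2))
```

Tasks and methods that behave similarly end up near each other in the learned space, which is the sense in which such a factorization could suggest promising methods for a new problem.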
*Just as there are now large open repositories for problems in classification, regression, and reinforcement learning (Blake and Merz, [1998](#bib.bib4); Brockman et al., [2016](#bib.bib6); Vanschoren et al., [2014](#bib.bib46)), we advocate for the creation of repositories that contain problems corresponding to real-world tasks in which human-input is required.* Creating such repositories will be more challenging than creating collections of standard machine learning datasets because they must include a system for human assessment, but with the availablity of crowdsourcing tools these technical challenges can be surmounted. In practice, constructing such a matrix will be expensive since each cell must be evaluated in the context of a real application, and interpreting the latent dimensions will be an iterative effort of hypothesizing why certain tasks or methods share dimensions and then checking whether our hypotheses are true. In the next two open problems, we lay out some hypotheses about what latent dimensions may correspond to; these hypotheses can be tested via much less expensive human-grounded evaluations on simulated tasks. ### 4.2 Hypothesis: task-related latent dimensions of interpretability Disparate-seeming applications may share common categories: an application involving preventing medical error at the bedside and an application involving support for identifying inappropriate language on social media might be similar in that they involve making a decision about a specific case—a patient, a post—in a relatively short period of time. However, when it comes to time constraints, the needs in those scenarios might be different from an application involving the understanding of the main characteristics of a large omics data set, where the goal—science—is much more abstract and the scientist may have hours or days to inspect the model outputs. Below, we list a (non-exhaustive!) set of hypotheses about what might make tasks similar in their explanation needs: * *Global vs. Local.* Global interpretability implies knowing what patterns are present in general (such as key features governing galaxy formation), while local interpretability implies knowing the reasons for a specific decision (such as why a particular loan application was rejected). The former may be important for when scientific understanding or bias detection is the goal; the latter when one needs a justification for a specific decision. * *Area, Severity of Incompleteness.* What part of the problem formulation is incomplete, and how incomplete is it? We hypothesize that the types of explanations needed may vary depending on whether the source of concern is due to incompletely specified inputs, constraints, domains, internal model structure, costs, or even in the need to understand the training algorithm. The severity of the incompleteness may also affect explanation needs. For example, one can imagine a spectrum of questions about the safety of self-driving cars. On one end, one may have general curiosity about how autonomous cars make decisions. At the other, one may wish to check a specific list of scenarios (e.g., sets of sensor inputs that causes the car to drive off of the road by 10cm). In between, one might want to check a general property—safe urban driving—without an exhaustive list of scenarios and safety criteria. * *Time Constraints.* How long can the user afford to spend to understand the explanation? 
A decision that needs to be made at the bedside or during the operation of a plant must be understood quickly, while in scientific or anti-discrimination applications, the end-user may be willing to spend hours trying to fully understand an explanation. * *Nature of User Expertise.* How experienced is the user in the task? The user’s experience will affect what kind of *cognitive chunks* they have, that is, how they organize individual elements of information into collections (Neath and Surprenant, [2003](#bib.bib35)). For example, a clinician may have a notion that autism and ADHD are both developmental diseases. The nature of the user’s expertise will also influence what level of sophistication they expect in their explanations. For example, domain experts may expect or prefer a somewhat larger and sophisticated model—which confirms facts they know—over a smaller, more opaque one. These preferences may be quite different from hospital ethicist who may be more narrowly concerned about whether decisions are being made in an ethical manner. More broadly, decison-makers, scientists, compliance and safety engineers, data scientists, and machine learning researchers all come with different background knowledge and communication styles. Each of these factors can be isolated in human-grounded experiments in simulated tasks to determine which methods work best when they are present. ### 4.3 Hypothesis: method-related latent dimensions of interpretability Just as disparate applications may share common categories, disparate methods may share common qualities that correlate to their utility as explanation. As before, we provide a (non-exhaustive!) set of factors that may correspond to different explanation needs: Here, we define *cognitive chunks* to be the basic units of explanation. * *Form of cognitive chunks.* What are the basic units of the explanation? Are they raw features? Derived features that have some semantic meaning to the expert (e.g. “neurological disorder” for a collection of diseases or “chair” for a collection of pixels)? Prototypes? * *Number of cognitive chunks.* How many cognitive chunks does the explanation contain? How does the quantity interact with the type: for example, a prototype can contain a lot more information than a feature; can we handle them in similar quantities? * *Level of compositionality.* Are the cognitive chunks organized in a structured way? Rules, hierarchies, and other abstractions can limit what a human needs to process at one time. For example, part of an explanation may involve *defining* a new unit (a chunk) that is a function of raw units, and then providing an explanation in terms of that new unit. * *Monotonicity and other interactions between cognitive chunks.* Does it matter if the cognitive chunks are combined in linear or nonlinear ways? In monotone ways (Gupta et al., [2016](#bib.bib18))? Are some functions more natural to humans than others (Wilson et al., [2015](#bib.bib51); Schulz et al., [2016](#bib.bib40))? * *Uncertainty and stochasticity.* How well do people understand uncertainty measures? To what extent is stochasticity understood by humans? 5 Conclusion: Recommendations for Researchers ---------------------------------------------- In this work, we have laid the groundwork for a process to rigorously define and evaluate interpretability. There are many open questions in creating the formal links between applications, the science of human understanding, and more traditional machine learning regularizers. 
In the mean time, we encourage the community to consider some general principles. *The claim of the research should match the type of the evaluation.* Just as one would be critical of a reliability-oriented paper that only cites accuracy statistics, the choice of evaluation should match the specificity of the claim being made. A contribution that is focused on a particular application should be expected to be evaluated in the context of that application (application-grounded evaluation), or on a human experiment with a closely-related task (human-grounded evaluation). A contribution that is focused on better optimizing a model class for some definition of interpretability should be expected to be evaluated with functionally-grounded metrics. As a community, we must be careful in the work on interpretability, both recognizing the need for and the costs of human-subject experiments. *We should categorize our applications and methods with a common taxonomy.* In section [4](#S4 "4 Open Problems in the Science of Interpretability, Theory and Practice ‣ A Roadmap for a Rigorous Science of Interpretability"), we hypothesized factors that may be the latent dimensions of interpretability. Creating a shared language around such factors is essential not only to evaluation, but also for the citation and comparison of related work. For example, work on creating a safe healthcare agent might be framed as focused on the need for explanation due to unknown inputs at the local scale, evaluated at the level of an application. In contrast, work on learning sparse linear models might also be framed as focused on the need for explanation due to unknown inputs, but this time evaluated at global scale. As we share each of our work with the community, we can do each other a service by describing factors such as 1. How is the problem formulation incomplete? (Section [2](#S2 "2 Why interpretability? Incompleteness ‣ A Roadmap for a Rigorous Science of Interpretability")) 2. At what level is the evaluation being performed? (application, general user study, proxy; Section [3](#S3 "3 How? A Taxonomy of Interpretability Evaluation ‣ A Roadmap for a Rigorous Science of Interpretability")) 3. What are task-related relevant factors? (e.g. global vs. local, severity of incompleteness, level of user expertise, time constraints; Section [4.2](#S4.SS2 "4.2 Hypothesis: task-related latent dimensions of interpretability ‣ 4 Open Problems in the Science of Interpretability, Theory and Practice ‣ A Roadmap for a Rigorous Science of Interpretability")) 4. What are method-related relevant factors being explored? (e.g. form of cognitive chunks, number of cognitive chunks, compositionality, monotonicity, uncertainty; Section [4.3](#S4.SS3 "4.3 Hypothesis: method-related latent dimensions of interpretability ‣ 4 Open Problems in the Science of Interpretability, Theory and Practice ‣ A Roadmap for a Rigorous Science of Interpretability")) and of course, adding and refining these factors as our taxonomies evolve. These considerations should move us away from vague claims about the interpretability of a particular model and toward classifying applications by a common set of terms. #### Acknowledgments This piece would not have been possible without the dozens of deep conversations about interpretability with machine learning researchers and domain experts. Our friends and colleagues, we appreciate your support. 
We want to particularly thank Ian Goodfellow, Kush Varshney, Hanna Wallach, Solon Barocas, Stefan Rüping and Jesse Johnson for their feedback.
a2458928-b923-43c2-8911-a4e8c770535f
trentmkelly/LessWrong-43k
LessWrong
Why Go is a Better Game than Chess Go is a better game than chess because its rules are simpler and more aesthetic, and lead to an emergent complexity and strategy that is at worst equal to the emergent complexity and strategy of chess. I am no chess hater (I own a $500 chess board), but I've long thought that go is the better game. The rules of go are simpler than the rules of chess. If you don't believe me, compare the rules of each game. Rules of Chess Rules of Go I understand simplicity is not a well-defined concept, but for most naïve heuristics I could think of, go scored better: number of characters of the rules (in English), number of distinct concepts in the rules, amount of time to explain the rules to a new player, amount of time for a beginner to only play legal moves, etc. The emergent complexity and strategy of go is at worst equal to the emergent complexity and strategy of chess. It is no accident that an AI, Deep Blue, beat the Chess World Champion Garry Kasparov in 1997, but an AI, AlphaGo, only beat the Go World Champion Lee Sedol in 2016.  The number of distinct positions and possible moves in go far outnumbers those of chess. Back-of-the-envelope calculations put the game-tree complexity of chess at roughly 10^120, and of go at roughly 10^360. While both numbers are incomprehensible to our monkey minds, it is clear that go is in its own complexity category. But maybe you're not convinced that go having simpler rules (axioms) and being more complex makes it a better (more fun) game.  Games are supposed to be fun to play. And I've found that the most fun games are ones that can be played with other people, and continue to stay fun even as you get better at them. There are many reasons both go and chess are wonderful games: 1. Practically infinite emergent complexity and strategy. 2. Beautiful strategy. 3. Large population of players Reasons why chess is a more fun game than go: 1. More people in the Western world already know the rules of chess. 2. Chess involves pieces tha
4de87310-3b60-45b8-b45e-c9ec11b4fef0
trentmkelly/LessWrong-43k
LessWrong
What are the best and worst affordances of twitter as a technology and as a social ecosystem? Put another way, what do you see as the main incentive slopes of the feature set + existing user base? What slopes do you like, what slopes don't you like?
cc6d1447-3769-41bd-861c-2684ba585d5a
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Is progress in ML-assisted theorem-proving beneficial? I want to believe that the outputs of the recently founded [Hoskinson Center for Formal Mathematics](https://www.cmu.edu/news/stories/archives/2021/september/hoskinson-center-for-formal-mathematics.html) (led by [Jeremy Avigad](https://www.cmu.edu/dietrich/philosophy/people/faculty/jeremy-avigad.html), researcher of mathematical logic, automated reasoning and philosophy of mathematics), will live up to Mr Hoskinson's aura of technical responsibility. One of the stated goals of the HCFM is to make mathematics more legible (both between mathematicians, and to non-mathematicians), and since mathematics is itself a motor of legibility in software systems and physical engineering, maybe new techniques in automated proving will make problems in AI motivation more legible too. That would be really great. On the other hand, I don't know how to place bounds on the set of problems that could be solved with superhuman mathematical reasoning. HCFM is likely to produce a broad set of unexpected capabilities, and I can't see a way to exclude the problem of real-world, agentic utility optimization from that set. One of the limitations of neural models right now is that they have only very limited kind of memory, they're mostly just reflexes, very wise ones, evolved over the course of, often the equivalent of tens of thousands of years of condensed human experience, but none of these wise reflexes so far can evolve a set of coherent thoughts over time, pretty much everything they do is an instant reaction to the current state of their environment. GPT3 was startling because words are a type of memory, and our writing often depicts evolving thoughts, so sometimes it verges on evolving thoughts, but it was a limited kind of thinking because words are imprecise and most of the training data was rambly and inconsequential. That said, I don't know what GPT3's limits would turn out to be after, for instance, being subjected fully to [Anthropic](https://www.anthropic.com/)'s inspection technique, and the tuning technique that will grow from that. Well, mathematical claims constitute an extremely precise and robust format for the encoding of memories. In the training data, formal proofs, reference correlates closely with salience; the components are cited, new and interesting fruits always follow from them. To evolve the ability to predict the conclusions that follow the assumptions is equivalent to evolving the mind of a mathematician, in their prime. Each mathematical result is a kind of evolution of a kind of thought, older thoughts evolving creatively and usefully into new ones. For me, there is no ambiguity as to whether artificial mathematicians would be thinking machines, and it's difficult for me to discern the limits of what this style of thought, sped up and scaled horizontally, would be able to think about.
13438b69-23df-42cc-92e2-a1d7cd31c834
trentmkelly/LessWrong-43k
LessWrong
Misalignment or misuse? The AGI alignment tradeoff I recently co-wrote a paper with Leonard Dung (accepted at Philosophical Studies) with the above title, preprint here. To post something short rather than nothing, below is the abstract: Creating systems that are aligned with our goals is seen as a leading approach to create safe and beneficial AI in both leading AI companies and the academic field of AI safety. We defend the view that misaligned AGI - future, generally intelligent (robotic) AI agents - poses catastrophic risks. At the same time, we support the view that aligned AGI creates a substantial risk of catastrophic misuse by humans. While both risks are severe and stand in tension with one another, we show that - in principle - there is room for alignment approaches which do not increase misuse risk. We then investigate how the tradeoff between misalignment and misuse looks empirically for different technical approaches to AI alignment. Here, we argue that many current alignment techniques and foreseeable improvements thereof plausibly increase risks of catastrophic misuse. Since the impacts of AI depend on the social context, we close by discussing important social factors and suggest that to reduce the risk of a misuse catastrophe due to aligned AGI, techniques such as robustness, AI control methods and especially good governance seem essential.
9fd9f608-b927-46ac-88ed-5b62db2494c2
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
How I Formed My Own Views About AI Safety ***Disclaimer**: I work as a researcher at Anthropic, but this post entirely represents my own views, rather than the views of my own employer* Introduction ------------ I’ve spent the past two years getting into the field of AI Safety. One important message I heard as I was entering the field was that I needed to “form an inside view about AI Safety”, that I needed to form my own beliefs and think for myself rather than just working on stuff because people smarter than me cared about it. And this was incredibly stressful! I think the way I interpreted this was pretty unhealthy, caused me a lot of paralysing uncertainty and anxiety, and almost caused me to give up on getting into the field. But I feel like I’ve now reached a point I’m comfortable with, and where I somewhat think I have my own inside views on things and understand how to form them. In this post, I try to explain the traps I fell into and why, what my journey actually looked like, and my advice for how to think about inside views, now I’ve seen what *not* to do. This is a complex topic and I think there are a lot of valid perspectives, but hopefully my lens is novel and useful for some people trying to form their own views on confusing topics (AI Safety or otherwise)! (Note: I don’t discuss why I *do* now think AI Safety is important and worth working on - that’s a topic for a future post!) The Message of Inside Views --------------------------- First, context to be clear about what I mean by **inside views**. As I understand it, this is a pretty fuzzily defined concept, but roughly means “having a clear model and argument in my head, starting from some basic and reasonable beliefs about the world, that get to me to a conclusion like ‘working on AI Safety is important’ without needing to rely on deferring to people”. This feels highly related to the concept of [gears-level models](https://www.lesswrong.com/posts/gvK5QWRLk3H8iqcNy/gears-vs-behavior). This is in comparison to **outside views**, or **deferring** to people, where the main reason I believe something is because smart people I respect believe it. In my opinion, there’s a general vibe in the rationality community that inside views are good and outside views are bad (see Greg Lewis’ [In Defence of Epistemic Modesty](https://forum.effectivealtruism.org/posts/WKPd79PESRGZHQ5GY/in-defence-of-epistemic-modesty) for a good argument for the importance of outside views and deferring!). Note that this is *not* the Tetlockian sense of the words, used in forecasting, where outside view means ‘look up a base rate’ and inside view means ‘use my human intuition, which is terribly calibrated’, where the standard wisdom is outside view > inside view. Good examples of this kind of reasoning: Buck Shlegeris’ [My Personal Cruxes for Working on AI Safety](https://forum.effectivealtruism.org/posts/Ayu5im98u8FeMWoBZ/my-personal-cruxes-for-working-on-ai-safety), Richard Ngo’s [AGI Safety from First Principles](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ), Joseph Carlsmith’s report on [Existential Risk from Power-Seeking AI](https://www.lesswrong.com/posts/HduCjmXTBD4xYTegv/draft-report-on-existential-risk-from-power-seeking-ai). 
Note that, while these are all about the question of ‘is AI Safety a problem at all’, the notion of an inside view also applies well to questions like ‘de-confusion research/reinforcement learning from human feedback/interpretability is the best way to reduce existential risk from AI’, arguing for specific research agendas and directions. How *I* Interpreted the Message of Inside Views ----------------------------------------------- I’m generally a pretty anxious person and bad at dealing with uncertainty, and sadly, this message resulted in a pretty unhealthy dynamic in my head. It felt like I had to figure out for myself the conclusive truth of ‘is AI Safety a real problem worth working on’ *and* which research directions were and were not useful, so I could then work on the optimal one. And that it was my responsibility to do this all myself, that it was bad and low-status to work on something because smart people endorsed it. This was hard and overwhelming because there are a *lot* of agendas, and a lot of smart people with different and somewhat contradictory views. So this felt basically impossible. But it *also* felt like I had to solve this before I actually started any permanent research positions (ie by the time I graduated) in case I screwed up and worked on something sub-optimal. And thus, I had to solve this problem that empirically most smart people must be screwing up, and do it all before I graduated. This seemed basically impossible, and created a big [ugh field](https://medium.com/@robertwiblin/ugh-fields-or-why-you-can-t-even-bear-to-think-about-that-task-5941837dac62) around exploring AI Safety. Which was already pretty aversive, because it involved re-skilling, deciding between a range of different paths like PhDs vs going straight into industry, and generally didn’t have a clean path into it. ### My Journey So, what actually happened to me? I started taking AI Safety seriously in my final year of undergrad. At the time, I bought the heuristic arguments for AI Safety (like, something smarter than us is scary), but didn’t really know what working in the field looked like beyond ‘people at MIRI prove theorems I guess, and I know there are people at top AI labs doing safety stuff?’ I started talking to lots of people who worked in the field, and gradually got data on what was going on. This was all pretty confusing and stressful, and was competing with going into quant finance - a safe, easy, default path that I already knew I’d enjoy. After graduating, I realised I had a lot more flexibility than I thought. I took a year out, and managed to finagle my way into doing three back-to-back AI Safety internships. The big update was that I could explore AI Safety without risking too much - I could always go back into finance in a year or two if it didn’t work out. I interned at FHI, DeepMind and CHAI - working on mathematical/theoretical safety work, empirical ML based stuff to do with fairness and bias, and working on empirical interpretability work respectively. I also did the AGI Fundamentals course, and chatted to a lot of people at the various orgs I worked at and at conference. I tried to ask all the researchers I met about their theory of change for how their research actually matters. One thing that really helped me was chatting to a researcher at OpenAI who said that, when he started, he didn’t have clear inside views. But that he’d formed them fairly organically over time, and just spending time thinking and being in a professional research environment was enough. 
At the end of the year, I had several offers and ended up joining Anthropic to work on interpretability with Chris Olah. I wasn’t sure this was the best option, but I was really excited about interpretability, and it seemed like the best bet. A few months in, this was clearly a *great* decision and I’m really excited about the work, but it wouldn’t have been the end of the world if I’d decided the work wasn’t very useful or a bad fit, and I expect I could have left within a few months without hard feelings. As I’ve done research and talked to Chris + other people here, I’ve started to form clearer views on what’s going on with interpretability and the theory of impact for it and Anthropic’s work, but there’s still big holes in my understanding where I’m confused or deferring to people. And this is fine! I don’t think it’s majorly holding me back from having an impact in the short-term, and I’m forming clearer views with time. My Advice for Thinking About & Forming Inside Views --------------------------------------------------- ### Why to form them? I think there are four main reasons to care about forming inside views: * **Truth-tracking** - having an impact is hard! It’s really important to have true beliefs, and the best way to find them is by trying hard to form your own views and ensuring they correlate with truth. It’s easy to get deferring wrong if you trust the wrong people. + I’m pretty unconvinced by this one - it doesn’t seem that hard to find people smarter than me, who’ve thought about each problem for longer than I have, and just believing whatever they believe. Especially if I average multiple smart people’s beliefs - Eg, I haven’t thought too much about biosecurity, but will happily defer to people like Greg Lewis on the topic! * **Ensuring good community epistemic health** - Maybe your personal inside view will track the truth less well than the best researchers. But it’s not perfectly correlated! If you try hard to find the truth on your own, you might notice ideas other people are missing, can poke holes in popular arguments, etc. And this will make the community as a whole better off + This one is pretty legit, but doesn’t seem *that* big a deal. Like, important, sure, but not something I’d dedicate more than 5% of my effort towards max + It seems particularly important to avoid information cascades where I work on something because Alice thinks it matters, and then Bob is a bit skeptical of Alice alone but observes that both Alice *and* I believe it matters, and works on it even harder, Charlie sees me, Alice and Bob, etc. This is a main reason I try hard to distinguish between what I believe all things considered (including other people’s views) and what I believe by my own lights (according to my own intuitions + models of the world) * **Motivation** - It’s really hard to work on something you don’t believe in! + I personally overthink things, and this one is really important to me! But people vary - this is much more a fact about personal psychology than an abstract statement about how to have an impact * **Research quality** - Doing good research involves having good intuitions and research taste, sometimes called an inside view, about why the research matters and what’s really going on. This conceptual framework guides the many small decisions and trade-offs you make on a daily basis as a researcher + I think this is really important, but it’s worth distinguishing this from ‘is this research agenda ultimately useful’. 
This is still important in eg pure maths research just for doing good research, and there are areas of AI Safety where you can do ‘good research’ without actually reducing the probability of x-risk. - Toy example: Let’s say there are ten good AI Safety researchers in the world, who all believe different things. My all-things-considered view should put 10% credence on each person’s view. But I’ll get *much* more research done if I randomly pick one person and fully adopt their views and dive into their research agenda. So, even if only one researcher is correct, the latter strategy is much better in expected value. + This is one of the main reasons that mentorship is so key. I have become a *way* more effective interpretability researcher by having ready access to Chris to ask for advice, intuitions and direction. And one of my top priorities is absorbing as many of his conceptual frameworks as I can - More generally, IMO the point of a research mentor is to lend you their conceptual frameworks to advise you on how to make the right decisions and trade-offs. And you slowly absorb their frameworks by supervised learning, and build on and add to them as you grow as a researcher These are pretty different, and it’s really important to be clear about which reasons you care about! Personally, I mostly care about motivation > research quality = impact >> community epistemics ### How to form them? * **Talk to people**! Try to absorb *their* inside views, and make it your own + Importantly, the goal is not to defer to them, it’s to understand what they believe *and why*. + My main tool for this is to ask lots of questions, and then **paraphrase** - summarise back my understanding in my own words, and ask what’s wrong or what I’m missing. - My default question is ‘so, why, concretely, does your research direction reduce existential risk from AI?’ - Or, ‘what are the biggest ways you disagree with other researchers?’ Or ‘why aren’t you working on X?’ + I really, really love paraphrasing! A few reasons it’s great: - It forces you to actively listen and process in the moment - It’s much easier to correct than teach - the other person can easily identity issues in your paraphrase and correct them - It makes it obvious to myself if I’m confused or don’t understand something, or if I’m deferring on any points - it’s awkward to say things that are confused! - Once I get it working, I have now downloaded their mental model into my head and can play around with it - Once you’ve downloaded multiple people’s models, you can compare them, see how they differ, etc + A variant - focus on **cruxes**, key claims where if they changed their mind on that they’d change their mind about what to work on. - This is really important - some people work on a direction because they think it’s the most important, other people work on it because eg it’s a good personal fit or they find it fun. These should be *completely different* conversations + A variant - write a google doc summarising a conversation and send it to them afterwards for comments. This can work great if you find it hard to summarise in the moment, and can produce a good artefact to publish or share - I’d love it if people did this more with me * **You have permission to disagree** (even with really cool and high-status people) + This was a big update for me! 
Someone being smart and competent just means they’re right more often, not that they’re always right + It really helps to have a low bar for asking dumb questions - if you poke at everything that might be wrong, 90% of the time they’re right and you learn something, and 10% of the time they missed something + For example, I’ve done research in the past that, in hindsight, I don’t think was particularly useful. And this is totally fine! + Empirically, there’s a lot of smart people who believe different and contradictory things! It’s impossible for all of them to be right, so you *must* disagree with some of them. Internalising that you can do this is really important for being able to think clearly * **Don’t be a monk** - you form an inside view by going out in the world and doing things - not just by hiding away and thinking really hard + Eg, just try doing research! Spend 10 hours pursuing something, write up a blog post, fail, succeed, hear criticism, see what you learn and make updates + Talk to lots of people! + Live your life, and see what happens - my thoughts naturally change a lot over time + It’s valuable to spend *some* time reading and thinking, but if this is all you do I think that’s a mistake * **Think from first principles** (sometimes) + Concrete exercise: Open a blank google doc, set a one hour timer, and start writing out your case for why AI Safety is the most important problem to work on. Spend the full hour on this, and if you run out of steam, go back through and poke at everything that feels confusing, or off, or dodgy. Write out all the counter-arguments you can think of, and repeat + This definitely isn’t *all* you should do, but I think this is a really useful exercise for anything confusing! * **Don’t just try harder** - I have a failure mode I call [pushing the Try Harder button](https://www.neelnanda.io/blog/mini-blog-post-6-stop-pressing-the-try-harder-button) where I label something as important and just try to channel a lot of willpower and urgency towards it. Don’t do that! This takes a long time, and a lot will happen naturally as you think, talk to people, and do research. + If you find this really stressful, you have my permission to chill and not make it a priority for a while! + I’ve found my inside views develop a lot over time, fairly organically * **Inside vs outside views is a spectrum** - there’s no clear division between thinking for yourself and deferring. Forming inside views starts out by deferring, and then slowly forming more and more detailed models of where I’m deferring and why over time + My views have gone fairly organically from naive stories like ‘AGI seems scary because intelligence is important and smart people think this matters’ to more detailed ones like ‘I think one reason AGI is scary is inner misalignment. Because neural networks have the base optimiser of stochastic gradient descent, the network may end up as a mesa-optimiser with a different mesa-objective. And this may create an instrumental incentive for power seeking’. The latter story is way more detailed, but still includes a lot of implicit deferring - eg that we’ll get AGI at all, that it’ll be via deep learning, that mesa-optimisers are a thing at all, that there’s an instrumental incentive for power seeking, etc. But expanding the tree of concepts like this is what progress looks like! + Or, ‘I should work on AI because AGI will happen eventually - if nature did it, so can we’ to ‘AGI is compute constrained. 
Using the bioanchors method to link to the size of the human brain gives 30-ish year AI timelines for human-level AI. I believe AGI is compute constrained because of some heuristic arguments about empirical trends, and because lots of smart people believe this’ + Getting here looks like downloading other people’s gears level models into your head, and slowly combining them, deleting parts you disagree with, adding ideas of your own, etc ### Misc * **Defer intelligently** - Don’t just adopt someone’s opinions as your own because they’re charismatic, high status, or well-credentialed. Think about *why* you think their opinions track the truth better than your own, and in which areas you’re willing to defer to them. Figure out how hard they’ve thought about this, and whether they’ve taken the belief seriously + One key question is how much feedback they get from the world - would they know if they were wrong? I think some fields score much better on this than others - I’m a lot more comfortable disagreeing with many moral philosophy professors and being a committed consequentialist than I am with eg disagreeing with most algebraic geometers. Mathematicians get feedback re whether there proofs work in a way that, as far as I can tell, moral philosophy doesn’t + And be domain specific - I’d defer to a Cambridge maths professor about mathematical facts, but not on a topic like ‘how best to teach maths to undergraduates’ - they clearly haven’t done enough experimentation to tell if they’re missing out on vastly better methods * **You can act without an inside view** + Forming a good inside view takes a really long time! I’ve been doing full-time safety research for the past year and a bit and I’m still very confused - An analogy - a PhD is essentially a training program to give people an inside view for a specific research area. And this takes several years! IMO a question like ‘is AGI an existential risk’ is much harder than most thesis topics, and you don’t have a hope of *really* understanding it without that much work + You can always change your mind and pivot later! Make the best expected value bet given what you know at the time, and what information you might get in future + Gathering information has costs! Sometimes thinking harder about a problem is analysis paralysis, and it’s worth just running with your best guess + I think it’s good to spend maybe 10% of your time long-term on high-level thinking, strategy, forming inside views, etc - a lot of your time should be spent actually doing stuff! - Though it’s OK to spend a higher percentage early on when you have major decisions like what career path to go down. * **You don’t *have* to form an inside view** - Forming inside views that track the truth is *hard*, and it’s a skill. You might just be bad at it, or find it too stressful. And this is fine! It shouldn’t be low-status or guilt-inducing to just do what people more competent than you recommend + You can be a great research assistant, ops person, engineer etc without having a clear inside view - just find someone smart who you trust, explain your situation, and do what they think is best - I think the main reason this is a bad idea is motivational, not really about truth-tracking. And it’s up to you how much you care about this motivationally! - An analogy: I think basically all AI Safety researchers who have ideas for an agenda should get funded, even if I personally think their agenda is BS. 
Likewise, I want them all to have enough labour available to execute well on their agenda - picking the agenda you’re the best personal fit for and just deferring is a good way to implement this in practice. * **Aim high, but be OK with missing** - It’s valuable and important practice to *try* forming inside views, but it’s also pretty hard! It’s OK to struggle and not make much progress + IMO, trying to think for yourself is great training - it’ll help you think more clearly, be harder to con, become a better researcher, etc. + Outside view: The vast majority of the world thinks AI Safety is nonsense, and puts very few resources towards it. This is worth taking seriously! You shouldn’t throw your life away on a weird and controversial idea without thinking seriously about it first + This is a good way to trade-off between motivation and truth-tracking - so long as I try hard to think for myself, I feel OK motivationally, even if I know that I may not be tracking truth well - In practice, I try hard to form my own views, but then make big decisions by deferring a lot and forming an all-things-considered view, which I expect to track truth better + If you aren’t doing full-time research, it’s *much* harder to form clear views on things! This is a really hard thing you’re trying to do * **Convey mindsets, not inside views** - If you’re talking to someone else about this stuff, eg while community building, it’s important to try to convey the *spirit* and *mindset* of forming inside views, more so than your actual views. Try to convey all of the gears-level models in your head, but make it clear that they’re just models! Try to convey what other people believe in. + I try hard to be clear about which beliefs I’m confident in, which are controversial, which points I’m deferring on, and which things I’ve thought hard about. I think this is important for avoiding information cascades, and building a healthy community + Relatedly, if you’re mostly doing community building, it’s totally fine to not have inside views on hard technical questions like AI Safety! Your goal is more to help people in your community form their own views on things - having views of your own is helpful but not essential.
eac092a5-6618-4267-8afb-7d5ad369d419
trentmkelly/LessWrong-43k
LessWrong
Value evolution Coherent extrapolated volition (CEV) asks what humans would want, if they knew more - if their values reached reflective equilibrium.  (I don't want to deal with the problems of whether there are "human values" today; for the moment I'll consider the more-plausible idea that a single human who lived forever could get smarter and closer to reflective equilibrium over time.) This is appealing because it seems compatible with moral progress (see e.g., Muehlhauser & Helm, "The singularity and machine ethics", in press).  Morality has been getting better over time, right?  And that's because we're getting smarter, and closer to reflective equilibrium as we revise our values in light of our increased understanding, right? This view makes three claims: 1. Morality has improved over time. 2. Morality has improved as a result of reflection. 3. This improvement brings us closer to equilibrium over time. There can be no evidence for the first claim, and the evidence is against the second two claims. There can be no evidence that morality has improved Intuitively, we feel that morality has definitely improved over time.  We are so much better than those 17th-century barbarians who baited bears! If you have such a strong belief, that must mean you have evidence for it.  That must mean you had some hypothesis, and the evidence could have gone either way; and the evidence went in such a way that it supported your hypothesis. If you believe this, then in the comments below, please describe a scenario that could have happened, in which we would today believe that the values people had hundreds of years ago were superior to the values they have today.  Not a scenario in which some conservative sub-group could believe this; but a scenario in which society as a whole could believe it, and keep on believing it for a hundred years, without changing their values. We can show that values have changed.  But we can have no evidence that that change is towards better values, what
129f2a0f-6146-438e-b7db-42953d58bad6
trentmkelly/LessWrong-43k
LessWrong
Log-linear Scaling is Worth the Cost due to Gains in Long-Horizon Tasks This post makes a simple point, so it will be short. I am happy to discuss more in the comments, and based on that write a longer post later. Much prior work (eg: [1]) has shown that exponential data and compute are required for each unit improvement in accuracy. A popular argument this leads to: scaling compute and data is economically not viable above a threshold. This argument has a key issue. Log-linear gains in accuracy (or loss) are shown on "one-step" benchmarks such as direct QA. However, economic benefits primarily arise from tasks which require long thinking, output and action horizons. For this, I propose tracking the k-step success rate, i.e. the probability that a k-step solution achieves its goal. Most benchmarks previously analyzed for log-linear scaling have at best a few (a small constant number of) steps. For simplicity, let us assume this constant is 1; this does not affect the rest of the argument (based on asymptotic calculations in k) beyond a constant factor. We can make two (simplifying) assumptions to connect log-linear scaling in 1-step accuracy to k-step success rate. (1) Each step has an independent probability of succeeding, related to the 1-step accuracy by a constant. (2) All steps need to be correct for the full k-step solution to succeed. This is true for many useful tasks, like solving math problems, and an agent executing a task on the web. While recovering from failures is possible, subsequent steps might violate Assumption (1), so it's best to simplify this for now. Now the main mathematical argument is as follows: Let us define the 1-step accuracy as x. Every unit increase in 1-step accuracy from x% to (x+1)% leads to an improvement in k-step accuracy of (x+1)^k − x^k. The absolute improvement is ~k·x^(k−1), by taking the derivative of f(x) = x^k, i.e. the improvement compounds with the horizon length. The relative improvement is ~k/x, by the binomial approximation. Thus the k-step success rate improves exponentially in task horizon length.  As inference-
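The compounding claim can be checked numerically. The following is an editorial sketch under the post's two assumptions (independent steps, all of which must succeed), not code from the original post; the 0.90 to 0.91 example is illustrative.

```python
# Editorial sketch of the post's argument under its two assumptions:
# (1) steps succeed independently with probability p, (2) all k steps must succeed.
# Then the k-step success rate is p**k, and a small gain in p compounds with k.

def k_step_success(p: float, k: int) -> float:
    """Probability that all k independent steps succeed."""
    return p ** k

p_before, p_after = 0.90, 0.91  # a one-point improvement in 1-step accuracy
for k in (1, 10, 50, 100):
    before = k_step_success(p_before, k)
    after = k_step_success(p_after, k)
    rel_gain = after / before - 1  # ~ k * (p_after - p_before) / p_before for small gains
    print(f"k={k:3d}  before={before:.2e}  after={after:.2e}  relative gain={rel_gain:.1%}")
```

For k = 1 the relative gain is about 1%, while for k = 100 it is roughly 200%, which is the sense in which long-horizon tasks make each unit of 1-step accuracy worth much more.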
65b0e997-c8d4-4de0-b67f-6389a2b187a0
trentmkelly/LessWrong-43k
LessWrong
Searching for consequence-imagining games for children A friend of mine has a rather precocious daughter with poor impulse control, and asked if I knew any behavior games that encourage children to think out the consequences of actions before they do them. I'm familiar with the Good Behavior Game and the like, but standard conditioning hasn't been very effective with this child in the past. She's quite clever about subverting rules when possible, and shutting down entirely when subversion fails. Please, one suggestion per thread so that the karma thing can do its thing.
9addcd1c-8300-42a2-940c-38dbb019fba5
StampyAI/alignment-research-dataset/blogs
Blogs
Nick Beckstead on the Importance of the Far Future ![](http://intelligence.org/wp-content/uploads/2013/07/225877_10102425448513670_660885940_n.jpg) Nick Beckstead recently finished a Ph.D in philosophy at [Rutgers University](http://philosophy.rutgers.edu/), where he [focused](https://sites.google.com/site/nbeckstead/research) on practical and theoretical ethical issues involving future generations. He is particularly interested in the practical implications of taking full account of how actions taken today affect people who may live in the very distant future. His research focuses on how big picture questions in normative philosophy (especially population ethics and decision theory) and various big picture empirical questions (especially about existential risk, moral and economic progress, and the future of technology) feed into this issue. Apart from his academic work, Nick has been closely involved with the [effective altruism](http://effective-altruism.com/) movement. He has been the director of research for [Giving What We Can](http://www.givingwhatwecan.org/), he has worked as a summer research analyst at [GiveWell](http://www.givewell.org/), and he is currently on the board of trustees for the [Centre for Effective Altruism](http://home.centreforeffectivealtruism.org/), and he recently became a research fellow at the Future of Humanity Institute. **Luke Muehlhauser:** Your Rutgers philosophy dissertation, “[On the Overwhelming Importance of Shaping the Far Future](https://sites.google.com/site/nbeckstead/research),” argues that “from a global perspective, what matters most (in expectation) is that we do what is best (in expectation) for the general trajectory along which our descendants develop over the coming millions, billions, and trillions of years.” In an [earlier post](http://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/), I summed up your “rough future-shaping argument”: > Astronomical facts suggest that humanity (including “post-humanity”) could survive for billions or trillions of years ([Adams 2008](http://books.google.com/books?id=X5jdMyJKNL4C&pg=PT77&lpg=PT77#v=onepage&q&f=false)), and could thus produce enormous amounts of good. But the value produced by our future depends on our development trajectory. If humanity destroys itself with powerful technologies in the 21st century, then nearly all that future value is lost. And if we survive but develop along a trajectory dominated by conflict and poor decisions, then the future could be much less good than if our trajectory is dominated by altruism and wisdom. Moreover, some of our actions today can have “ripple effects” which determine the trajectory of human development, because many outcomes are [path-dependent](http://en.wikipedia.org/wiki/Path_dependence). Hence, actions which directly or indirectly precipitate particular trajectory changes (e.g. mitigating existential risks) can have vastly more value (in expectation) than actions with merely proximate benefits (e.g. saving the lives of 20 wild animals). > > One of the normative assumptions built into the rough future-shaping argument is an assumption you call Additionality. Could you explain what Additionality is, and why some people reject it? --- **Nick Beckstead:** I think it may be helpful to give a bit of background first. 
I like to tackle the question of “how important is the far future?” by dividing the future up into big chunks of time (which I call “periods”), assigning values to the big chunks of time, and then assigning a value to the future as a function of the value assigned to the big chunks of time. You could think of it as creating some kind of computer program which would scan whole history of the world together with its future, carve it up into periods, scan each period and assign it a value, and then compute a value of the whole as a function of the value of its parts. It’s arbitrary how you carve up periods, but that’s okay because it’s an approximation technique. I think the approximation technique gives useful and reasonable answers if you make the periods quite large (spanning hundreds, thousands, or more years at once; you might want to carve it up into large batches of intelligent activity if you are considering future civilizations very different from our own). Additionality basically says that when you’re assigning value to future periods, when you’ve got periods that you’d assign as “good,” it’s always better to have a period that you’d assign as good than periods you’d assign as “neutral.” I’m trying to partly draw on our intuitive ways of determining how well things have been going in recent history, and extending that to future periods, which we may be less capable of valuing using other methods. I want to say that if you had some future period which you’d regard as “good” judged purely on the basis of what happens in that period itself, that should contribute to the value you assign to the whole future. You might disagree with this if you have what some philosophers call a strict “[Person-Affecting View](http://plato.stanford.edu/entries/nonidentity-problem/)“. According to strict Person-Affecting Views, the fact that a person’s life would go well if he lived could not, in itself, imply that it would be in some way good to create him. Why not? Since the person was never created, there is no person who could have benefited from being created. On this type of view, it would only be important to ensure that there are future generations if it would somehow benefit people alive today, or people who have lived in the past (perhaps by adding meaning to their lives). The idea is that ensuring that there are future generations is analogous to “creating” many people, and, on this view, “creating” people–even people who would have good lives–can’t be important except insofar as it is important for people other than those you’re creating. You might also disagree with this view if you think that “shape” considerations are relevant. One example of this is an average type view. You might say that adding on extra periods that are good, but of below average quality, is a bad thing. Or you might say that adding on extra periods that are not as good as the preceding ones can be bad because it could mean that things are getting worse over time. I feel there are a lot of qualifications and details that need to be fleshed out here, but hopefully that should give some kind of reasonable introduction to the idea. --- **Luke:** When I talk to someone about how much I value the far future, it’s pretty common for them to reply with a Person-Affecting View, though they usually don’t know it by that name. 
My standard reply is, “I used to have that view myself, but then I encountered some ideas that changed my mind, and made me think that, actually, I probably *do* care about future people roughly as much as I care about current people.” Then I tell them about those ideas that changed my mind. I usually start with the [block universe](http://philsci-archive.pitt.edu/2408/1/Petkov-BlockUniverse.pdf) idea, which seems to be the default view among physicists (see e.g. [Brian Greene](http://www.amazon.com/dp/0375727205/ref=nosim?tag=lukeprogcom-20) & [Sean Carroll](http://www.amazon.com/dp/0525951334/ref=nosim?tag=lukeprogcom-20), though I also like the explanation by AI researcher [Gary Drescher](http://commonsenseatheism.com/?p=11068)). According to the block universe view, there is no privileged “present” time, and hence future people exist in just the same way that present people do. But in the two chapters you spend arguing against “strict” and “moderate” Person-Affecting Views, you don’t refer to the block universe at all. Do you think the block universe fails to provide a good argument against Person-Affecting Views, or was it simply one line of argument you didn’t take the time to elaborate in your thesis? --- **Nick:** I agree with your view about the block universe. I don’t think it is a strong argument against Person-Affecting Views in general, though I think it is a good argument against certain types of Person-Affecting Views. I think Person-Affecting Views are messy in many ways, and there are other lines of argument that I could have pursued but did not. Another way to put the basic idea behind Person-Affecting Views is to say that, on these views, you divide people who may exist depending on what you choose into two classes: the “extra” people and the other people. And then you say that if you cause some “extra” people to exist with good lives, either that isn’t good or is less good than helping people who aren’t “extra.” Following [Gustaf Arrhenius](http://people.su.se/~guarr/), in chapter 4 of my dissertation, I consider four different interpretations of extra: the people that don’t presently exist (Presentism), the people that will never actually exist (Actualism), the people whose existence is dependent on which alternative (of perhaps many) we choose (Necessitarianism), and the people that exist in one alternative being compared, but not the other (Comparativism). As far as I can tell, only Presentism is undermined by the block universe critique, since only Presentism relies on a concept of “present.” This is why I said that the block universe critique only undermines certain versions of Person-Affecting Views. The block universe argument seems like a knock-down argument against a very precise version of Presentism (which philosophers defending the view may hold), but I don’t think that it is a knock-down argument against a [steel-manned](http://lesswrong.com/lw/85h/better_disagreement/), “rough and ready” version of the view. Someone might say, “Well, yes, I accept the block universe theory, so I acknowledge there is no physically precise thing for me to mean by “present.” But we can, in ordinary English, say sentences like “The world population is now approximately 7 billion.” And you understand me to be saying something intelligible and correct in some approximate sense. In a similar way, when I recommend that we only consider benefits which could come to people now living, I intend you to understand me similarly. 
I also hold that, right now, it is not practically useful to consider potential benefits to people who may exist in distant parts of the universe, so it doesn’t particularly matter which reference frame you use to approximately interpret my use of “present.” Though my view may not correspond to a clean fundamental distinction, I believe that this recommendation, for our present circumstances, would survive reflection more successfully than other views on this question which have been proposed.” One can respond to this line of thought by arguing that even rough and ready versions of Presentism have consequences that are hard to accept, and aren’t motivated by appealing theoretical considerations. This is the approach I take in chapter 4 of my dissertation. I believe this line of argument is more robust against a wider variety of alterations of Person-Affecting Views. --- **Luke:** Yeah, I guess I tend to use the block universe not as an argument but as an [intuition pump](http://en.wikipedia.org/wiki/Intuition_pump) for the view that “current” people aren’t so privileged (in a moral sense) as one might naively think. Anyway: in chapter 4 you survey a variety of thought experiments that have varying implications for Person-Affecting Views. At the end of the chapter, you provide this handy summary table: ![Summary Table](http://intelligence.org/wp-content/uploads/2013/07/table-4.2.png) Could you tell us what’s going on in this table, and maybe briefly hint at what a couple of the individual thought experiments are about? --- **Nick:** In chapter 2 of my dissertation, I write about methodology for moral philosophy and argue that intuitive judgments about morality are in many ways less reliable than one might have hoped, and are often inconsistent. One of the consequences of this is that finding just a few counterexamples is often not enough to reject a moral theory. I believe it is important to systematically explore a wide variety of test cases and then proportion one’s credence to the theories that fare best over the whole set of cases. The rows have different types of theories, and the columns are different types of test cases for the theories. And then I have marked the cases where the theories have implications that are hard to accept. Regarding the terminology in the columns, I call a Person-Affecting View “strict” if it gives no weight to “extra” people, and “moderate” if it gives less weight to “extra” people than other people. There is then a question about how much weight you give, and this table focuses on the cases where little weight is given to “extra” people. I call a Person-Affecting View “asymmetric” if people who have lives that are not worth living are never counted as “extra.” People with Person-Affecting Views often want their views to be asymmetric because they want to be able to say that it would be bad to cause a child to exist whose life would be filled with suffering. (Derek Parfit has a famous case called “The Wretched Child” in [Reasons and Persons](http://www.amazon.com/Reasons-Persons-Oxford-Paperbacks-Parfit/dp/019824908X/ref=sr_1_1?ie=UTF8&qid=1373284563&sr=8-1&keywords=reasons+and+persons), which is where I got this name. *Reasons and Persons* is probably my favorite book of moral philosophy.) A major problem with strict Person-Affecting Views is that they have very implausible consequences in cases of extinction. 
It is one thing to say that the future of humanity isn’t *overwhelmingly* important, but quite another to say that it *basically doesn’t matter* if we go extinct, except insofar as it lowers present people’s quality of life. Moderate Person-Affecting Views have implausible implications in certain fairly mundane cases where we are choosing between improving the lives of “extra” people or people who aren’t “extra”. A simple example is a case I call “Disease Now or Disease Later,” where we must choose between a public health program that would prevent some disease from hurting toddlers alive today, or a public health program that would prevent the disease from hurting a greater number of toddlers (who aren’t yet alive) a few years from now. It is hard to believe that because the other toddlers don’t exist yet and which toddlers exist in the future might depend on which program we choose, it would be better to choose the first program. But that is what moderate Person-Affecting Views imply, since they give less weight to the interests of the toddlers who are counted as “extra”. I call views which don’t make any distinction between regular people and “extra” people “Unrestricted Views.” Some philosophers believe that these views imply that individuals are obligated to have children for the greater good, whereas Person-Affecting Views do not. However, there is no clear implication from “it would be good for there to be additional happy people” to “people are typically obligated to have children.” Why not? At least for people who don’t already want to have additional children, it would be very demanding to ask people to have additional children. Moreover, even on a view that gives a lot of weight to creating additional people, having additional children doesn’t seem like a particularly effective way of doing good in the world in comparison with things like donating money and time to charity. So it would be strange if people were obligated to make potentially significant sacrifices in order to do something that actually wasn’t all that effective as a method of doing good. Basically, the rest of this table is a result of systematically checking these different views against a variety of test cases like these to see which have the most plausible implications overall. Of all these views, only a strict Person-Affecting View can plausibly be used to rebut the case for the overwhelming importance of shaping the far future. And this type of view is much less plausible than the alternatives.
> > Together with other principles I defend, this leads to the conclusion that you can generally approximate the value of the history of the world by assigning a value to each period, and “adding up” the value across periods. Another way to get a grip on what Period Independence is a partial answer to is to consider the following hypothetical. Imagine that humans survive the next 1000 years, and their lives go well. How good would it be if they survived for another thousand years, with the same or higher quality of life? What if they survived another thousand years beyond that? Consider three kinds of answer: 1. The Period Independence answer: It would be equally as important in each such case. 2. The Capped Model answer: After a while, it gets less and less important. Moreover, there is an upper limit to how much value you can get in this way. 3. The Diminishing Value Model (DVM) answer: After a while, it gets less and less important. However, there is no upper limit to how much value you can get in this way. This table summarizes the result of running different test cases against different versions of Period Independence, the Capped Model, and the Diminishing Value Model. Probably the most important test case supporting Period Independence is the one I call “Our Surprising History.” It goes like this: > Our Surprising History: World leaders hire experts to do a cost-benefit analysis and determine whether it is worth it to fund an Asteroid Deflection System. Thinking mostly of the interests of future generations, the leaders decide that it would be well worth it. After the analysis has been done, some scientists discover that life was planted on Earth by other people who now live in an inaccessible region of spacetime. In the past, there were a lot of them, and they had really great lives. Upon learning this, world leaders decide that since there has already been a lot of value in the universe, it is much less important that they build the Asteroid Deflection System than they previously thought. > > It seems unreasonable to claim that how good it would be to build the Asteroid Deflection System depends on this information about our distant past. But this is what Capped Models and Diminishing Value Models imply about this case. Many of the cases in this table involve considering some simple test cases involving colonizing other planets. For example consider: > The Last Colony: Human civilization has lasted for 1 billion years, but the increasing heat of the sun will soon destroy all life on Earth. Humans (or our non-human descendants) get the chance to colonize another planet, where civilization can continue. They know that if they succeed in colonizing this planet, then: (i) the new planet will sustain a population equal to the size of the population of the Earth, and this planet, like Earth, will sustain life for 1 billion years, (ii) these people’s lives will probably go about as well the lives of the Earth people, (iii) there will not be a chance for the people on the new planet to colonize another planet. > > Intuitively, it would be extremely important to colonize the extra planet in the second case, much more important than colonizing in the first case. But on a Capped Model, if you set the “upper limits” low enough, it might not be very important at all. Diminishing Value Models avoid this implication, and can say that it would be extremely important to colonize another planet. 
They might also claim that their view has more plausible implications than Period Independence when comparing The Last Colony with a case like this: > The Very Last Colony: Convinced of the importance of preserving future generations, we take great precautions to protect the far future. Our descendants succeed in colonizing a large portion of the galaxy. It becomes relatively clear that our descendants will last for a very long time, about 100 trillion years, until the last stars burn out. At that point, there will be nothing of value left in the accessible part of the Universe. It comes to our attention that there is a chance to colonize one final place, just as in The Last Colony, before civilization comes to an end. For this billion years, these will be the only people in the accessible part of the Universe. During this period, things will go exactly as well as they went in The Last Colony. > > In which case is colonization more important, The Last Colony or The Very Last Colony? According to Period Independence, it is equally as important in each case. According to Diminishing Value Models, it is less important in The Very Last Colony. I find DVM stance on this intuitively attractive, though I believe it is a product of a bias I call the *proportional reasoning fallacy*. In chapter 2 of my dissertation, I argue that we use misguided proportional reasoning in some cases where many lives are at stake. Fetherstonhaugh et al. (1997) found that participants significantly preferred saving a fixed number of lives in a refugee camp when the proportion of lives saved was greater. Describing the participants’ hypothetical choice, they write: > There were two Rwandan refugee programs, each proposing to provide enough clean water to save the lives of 4,500 refugees suffering from cholera in neighboring Zaire. The Rwandan programs differed only in the size of the refugee camps where the water would be distributed; one program proposed to offer water to a camp of 250,000 refugees and the other proposed to offer it to a camp of 11,000. > > Participants significantly preferred the second program. In another study, Slovic (2007) found that people were willing to pay significantly more for a program of the second kind. All the views I consider have some implausible implications in certain cases, but it seems easier to explain away the test cases that look bad for Period Independence, and there are somewhat fewer of them, so I conclude that Period Independence is the most plausible principle to use for evaluating far future prospects. Of all these views, only Capped Model or a DVM with a very sharp diminishing rate in the limit can plausibly be used to rebut the case for the overwhelming importance of shaping the far future. And these views, I believe, are less plausible than the alternatives. --- **Luke:** What’s a point you wish you could have included in your dissertation, that was left out for space or other reasons? --- **Nick:** I’ll list a few. There are a lot of things that I think could be better, but you have to put your work out there at some point. Just as [real artists ship](http://en.wikiquote.org/wiki/Steve_Jobs), real thinkers share their ideas. First, a core empirical claim in my thesis is that humans could have an extremely large impact on the distant future. 
Really, it’s sufficient for my argument that they would do this by existing for an extremely long time, or that there could be a very large number of successors (such as [whole brain emulations](http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf) or other AIs). I didn’t defend this claim as thoroughly as I could have, and I didn’t go into great detail because I feared that philosophers would complain that it “isn’t philosophy,” I wanted to finish my dissertation, and I thought that going into it would require a lot of background information due to [inferential distance](http://wiki.lesswrong.com/wiki/Inferential_distance). The second thing I’d like to add is related to chapter 2 of my dissertation. An abstract of that chapter goes like this: > I argue that our moral judgments are less reliable than many would hope, and this has specific implications for methodology in normative ethics. Three sources of evidence indicate that our intuitive ethical judgments are less reliable than we might have hoped: a historical record of accepting morally absurd social practices; a scientific record showing that our intuitive judgments are systematically governed by a host of heuristics, biases, and irrelevant factors; and a philosophical record showing deep, probably unresolvable, inconsistencies in common moral convictions. I argue that this has the following implications for moral theorizing: we should trust intuitions less; we should be especially suspicious of intuitive judgments that fit a bias pattern, even when we are intuitively confident that these judgments are not a simple product of the bias; we should be especially suspicious of intuitions that are part of inconsistent sets of deeply held convictions; and we should evaluate views holistically, thinking of entire classes of judgments that they get right or wrong in broad contexts, rather than dismissing positions on the basis of a small number of intuitive counterexamples. In addition, I argue that many of the specific biases that I discuss would lead us to predict that people would, in general, undervalue most of the available ways of shaping the far future, including speeding up development, existential risk reduction, and creating other positive trajectory changes. > > I’m concerned that in chapter 2, there is an unbalanced focus on ways in which intuitions fail, and not ways in which trying to correct intuition through theory development could fail. An uncharitable analogy would be that it is as if I wrote a paper about all the ways in which [markets can fail](https://en.wikipedia.org/wiki/Market_failure) and suggested we rely more on governments without talking about all the ways in which [governments can fail](http://en.wikipedia.org/wiki/Government_failure). And just as someone could write an additional chapter (or series of books) on how governments fail, someone could probably also write an important chapter on how people trying to correct intuitions with moral theory fail. So while I feel that the considerations I identify do speak in favor of the recommendations I make, I think there are also important considerations that speak against those recommendations which I did not mention, and probably should have mentioned. Some of the considerations on the other side, some of them [weak](http://lesswrong.com/lw/hmb/many_weak_arguments_vs_one_relatively_strong/), include: 1. 
Given Jonathan Haidt’s theory of [social intuitionism](http://en.wikipedia.org/wiki/Social_intuitionism)–which seems very plausible to me–a lot of our theoretical reasoning about moral issues is epiphenomenal lawyering, and that makes theoretical reasoning about morality seem less reliable. 2. Lots of moral philosophers have endorsed stuff that seems wrong after due consideration, and their views rarely seem superior to common sense when there are conflicts, despite the fact that many of them think they are different from other philosophers in these respects. (A possibly important exception to this is the views of early utilitarians, who opposed slavery, opposed bad treatment of animals, opposed bad treatment of women, opposed bad treatment of gay people, and favored various kinds of liberty quite early. One only has to compare the applied ethics of Kant and Bentham to get a sense of what I am talking about.) 3. I have a rough sense that only a very limited amount of moral progress is attributable to people trying to use explicit reasoning to correct for intuitive moral errors, in contrast with people who just learned a lot of ordinary facts about problematic cases and shared them widely. 4. As I discuss somewhat toward the end of the dissertation, when you try to correct for intuitive errors, it’s sort of like trying to patch a piece of software that you don’t understand. And it seems quite possible that the patching will introduce unanticipated errors in places where you didn’t know to look. 5. People seem to have reasonably functional ways of handling internal inconsistency, so that inconsistent intuitions are probably less damaging than they can appear at first. 6. A lot of our moral intuition comes from cultural common sense. When we try to correct cultural common sense, we can see what we’re doing as analogous to aiming for a type of innovation. Most attempts at innovation seem to fail. This type of analogy supports being cautious about correcting intuition with theory, and trying to present the theory in a way that is appealing to cultural common sense. I’m still working through these issues, and hope to include them someday in a paper that is an improved version of chapter 2. A third issue is that there was less discussion of how our altruistically-motivated actions should change once we accept the view that shaping the far future is overwhelmingly important. This is an enormously complex and fascinating issue that requires drawing together ideas from both highly theoretical and highly practical fields. I was thinking about this issue at the time I was writing the dissertation, but not during the whole time. And it doesn’t show up in the dissertation as much as I wish it did. This is again, in part, because I think too much discussion of the issue would result in people complaining that my work “isn’t philosophy.” (I expect this is a common challenge for people in academia with interdisciplinary interests.) I am thinking about this issue more now, and I’m glad that [others](http://blog.givewell.org/2013/05/15/flow-through-effects/) [have](http://rationalaltruist.com/2013/05/07/the-value-of-prosperity/) started to write stuff on this topic which I think is relevant. A final issue is that I wish I had done more to flag is that it is complicated how to weigh up one’s moral uncertainty about the importance of shaping the far future. 
It’s possible that even if one mostly believes that shaping the far future is overwhelmingly important, we should not devote too much of our effort to a single type of concern. I believe this may be an implication of Bostrom and Ord’s [parliamentary model](http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html) of moral uncertainty, and may be a feature of other plausible ways of thinking about moral uncertainty that we could design. And this may make the implications of my thesis smaller than they would otherwise be, though I’m very unclear about how all this plays out. This is something I have not yet thought about very carefully at all.

---

**Luke:** Last question. In [Friendly AI Research as Effective Altruism](http://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/) I paraphrased a point you made in your dissertation:

> It could turn out to be that working toward proximate benefits or development acceleration does more good than “direct” efforts for trajectory change, if working toward proximate benefits or development acceleration turns out to have major ripple effects which produce important trajectory change. For example, perhaps an “ordinary altruistic effort” like solving India’s iodine deficiency problem would cause there to be thousands of “extra” world-class elite thinkers two generations from now, which could increase humanity’s chances of intelligently navigating the crucial 21st century and spreading to the stars. (I don’t think this is likely; I suggest it merely for illustration.)

So even if we accept your argument for the overwhelming importance of the far future, it seems like we need to understand many empirical matters — such as ripple effects — to know whether particular direct or indirect efforts are the most efficient ways to positively affect our development trajectory. Do you have any thoughts for how we can make progress toward answering the empirical questions related to shaping the far future?

---

**Nick:** There is an enormous amount of work that could be done here, and it is hard to say what will be most valuable. But here are a few ideas that seem promising to me right now.

One type of work that I think is valuable for this purpose is the type of work that [GiveWell Labs](http://www.givewell.org/about/labs) is doing: figuring out what the landscape of funding opportunities is across different causes, analyzing how tractable and important various problems are, and so forth. Here I am including studying both highly targeted causes (such as directly attacking different global catastrophic risks) and very broad causes (such as improving scientific research). I would like it if more of this work were done on the “room for more talent” side in addition to the “room for more funding” and “room for more philanthropy” stuff that GiveWell does. I hope [80,000 Hours](http://www.80000hours.org/) takes up more of this type of research in the future as well. The sort of work that [MIRI](http://intelligence.org/) and [FHI](http://www.fhi.ox.ac.uk/) do on examining specific future challenges that humanity could face, and what could be done to overcome them, seems like it can play an important role here as well.
Another type of work that seems promising to me is to study a wide variety of unprecedented challenges that civilization has faced in the past in order to learn more about how well civilization has coped with those challenges, what factors determined how well civilization coped with those challenges, what types of efforts helped civilization cope with those challenges better, and what kinds of efforts could plausibly have been helpful. Studying the types of challenges that Jonah Sinick is asking about [here](http://lesswrong.com/lw/hxw/can_we_know_what_to_do_about_ai_an_introduction/) seems like a step in the right direction. The type of work that GiveWell is supporting on the history of [philanthropy](http://www.givewell.org/history-of-philanthropy) would be relevant as well. This type of work seems like it could be reasonably grounded and could help improve our impressions about what types of broad approaches are most promising and where on the broad/targeted spectrum we should be. It seems to me that a number of factors are often relevant for determining how well humanity handles a risk/challenge. At a very general level, these might be some factors like: how good a position people are in to cooperate with each other, how intelligent individuals are, how good the “tools” (like personal computers, software, conceptual frameworks) people have are, how good access to information is, and how good people’s motives are. Sometimes, what really matters is how key actors fare in specific ways during a challenge (like the people running the Manhattan project and the heads of state), but it is often hard to know which people these will be and which specific versions are relevant. These factors also interact with each other in interesting ways, and are interestingly related to general levels of economic and technological progress. There’s some combination of very broad economic theory/history/economic history that is relevant for thinking about how these things are related to each other, and I feel that having that type of thing down could be helpful. Someone with the right kind of background in economics could try to explain these things, or someone who has the right sense of what is important with these factors could try to summarize what is currently known about these issues. An example of a book in this category, which I greatly enjoyed, is [*The Moral Consequences of Economic Growth*](http://www.amazon.com/The-Moral-Consequences-Economic-Growth/dp/1400095719) by Benjamin Friedman. As mentioned previously, I consider some of the work done by GiveWell on “[flow-through](http://blog.givewell.org/2013/05/15/flow-through-effects/)” effects and some of the work done by Paul Christiano on the value of [prosperity](http://rationalaltruist.com/2013/05/07/the-value-of-prosperity/) and technological progress to be relevant to this. I believe more work along these lines could be illuminating. I recently gave a talk on this subject at a [CEA](http://home.centreforeffectivealtruism.org/) event. In this talk, I lay out some very rough, very preliminary, very big picture considerations on this issue. [CEA](http://intelligence.org/wp-content/uploads/2013/07/Beckstead-Evaluating-Options-Using-Far-Future-Standards.pdf) slides. --- **Luke:** Thanks, Nick! The post [Nick Beckstead on the Importance of the Far Future](https://intelligence.org/2013/07/17/beckstead-interview/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
15142dbf-7486-4a2d-b8f4-a9d1c009def4
trentmkelly/LessWrong-43k
LessWrong
Article on quantified lifelogging (Slate.com) Data for a Better Planet focuses on The Quantified Self, and offers an overview of the state of the art in detailed, quantitative personal tracking. This seems related to an LW interest cluster.
1fed7d50-61e5-4e5e-b375-b46d00be8fba
StampyAI/alignment-research-dataset/arxiv
Arxiv
The IQ of Artificial Intelligence 1 The IQ of Artificial Intelligence Dimiter Dobrev Institute of Mathematics and Informatics Bulgarian Academy of Sciences d@dobrev.com All it takes to identify the computer programs which are Artificial Intelligence is to give them a test and award AI to those that pass the test. Let us say that the scores they earn at the test will be called IQ. We cannot pinpoint a minimum IQ threshold that a program has to cover in order to be AI, however, we will choose a certain value. Thus, our definition for AI will be any program the IQ of which is above the chosen value. While this idea has already been implemented in [3], here we will revisit thi s construct in order to introduce certain improvements. Keywords: Artificial Intelligence, Definition of AI, IQ of AI. Introduction We will use a test to determine what AI is. The test will produce a certain score, and we say that this is the program’s IQ. Then we decide that all computer program s the IQ of which is above a certain level satisfy the AI definition. In order to explain this concept, let us make an analogy with the admission tests given to candidates who wish to become university students . The problems given at the test are selected randomly, but all candidate students receive the same problems . Withal, solving the problems should require logical thinking, because we aim to enroll students who think logically rather than the lucky ones that may hit the right answers haphazardly. The score is based on the number of problems solved by each candidate student. We cannot say how many problems should be solved, because we do not know how many candidates will show up at the test, nor do we know how well or unwell they will perform. We may say set a certain score (e.g. 4.50) and say that we intend to admit all candidates who earn more than 4.50. However, it would be better to fix the minimum score after the test is done. Then we will take the score earned by the n -th ranking candidate (e.g. if n is 100 we take the score of the candidate whose score puts him in the 100th position in the ranking and thus we enroll the top 100 candidates). This analogy describes well the AI test, but when the candidates are computer programs, we cannot select the 100 top performers, because in this case there will be infinitely many candidates. A better analogy is perhaps a recruitment contest for the CEO of a corporation with a test that drags on over time. The test will stop when a candidate earns a sufficient number of points . How many points are enough? While we may select a certain level in advance, we can adjust this level later if the initial level turns out to be too low or too high. A sim ilar construct was already put forward in [3] whereby the various program s were given different scores (IQ[0.1]). Here we will revisit this construct in order to improve it. The reasons for which [3] needs improvement are: 2 1. In [3] we dealt concurrently with the questions of ‘What is AI?’ and ‘How can we create AI?’. Mixing up ‘what’ and ‘how ’ is not a good idea. Here we will reduce ourselves just to the ‘What is AI?’ question and will not deal with how to find such a program. 2. In [3] we defined AI as a program and here we will define AI as strategy. In [3] AI is a program and the world AI lives in is a program, too. Thus we end up with two programs playing against each other, which is somewhat perplexing. It is better to define AI as a strategy and have a program playing against a strategy. 
Certainly, the strategy will be computable , because it is finite. Our AI program will be any program which implements an AI strategy for the first 1000 games. What the AI program does after 1000 game s will remain undetermined, however it is hope d that the program will continue to behave intelligently thereafter. 3. In [3] the world was presented by means of non-deterministic Turning machines. This is a futile complication. It would have made sense i f there were no relation between the individual games and if the machine tape were erased (reset) after each game. And because the tape is not erased, each next game depends on what happened in the previous game (what has remained on the tape). For this rea son we will use deterministic Turing machines which are simpler, while the fact that they depend on what has remained on the tape makes their behavior in the various games different ( non-deterministic ). 4. At each step we have Action and Observation . Actions and Observations in [3] consisted of a single symbol , while here the Action will consist of n symbols and the Observation will be of m symbols . We can certainly code multiple symbols into one, but this will be at odds with the idea that unnecessary coding should be avoided [6] . The world is complicated and hard to understand enough. Adding one unnecessary coding will make it even less understandable. 5. While in [3] it was assumed th at all moves are correct, here we will add the concept of incorrect move. On one side it is important to assume that incorrect moves may exist. On the other side, this will spare us the indiscriminate shutdowns of the Turing machine, which we d id in [3] in order to avoid cycling. 6. The Turing machine is a theoretical model which does not need efficiency . Here we will harness this model in real work and will therefore modify it in order to boost its efficiency. The complication of the model is the price to be paid for the so obtained higher efficiency. 7. In [3] we used the Turing machine in order to describe a logical world. If it is computable it should be logical. However, the world described by the Turing machine is not very logical. Everything is recorded on a single tape and the program is not structured at all. It jump s indiscriminately from one command to another (‘spaghetti code’, as software engineers have tagged this pattern) . The way such a program operate s is rather illogical and we will address this by adding subprograms and more tapes so the machine becomes a multi -tape one. 8. In [3] we defined IQ as a n arithmetic mean which cannot be calculated precisely because of combinatorial explosion. What we say there is that it can be calculated approximately through a statistical sample. Here we will introduce the terms ‘Global IQ’ — which cannot be calculated precisely, and ‘L ocal IQ’ — which is readily computable through a specific statistical sample. The Local IQ will approach to the Global IQ whe n the size of the statistical sample approaches to infinity. The set of worlds we use for calculating the Global IQ is finite (enor mous, but still finite). Nevertheless, the size of the statistical sample can tend to infinity because there may be 3 repetitions in the sample (although repetitions are unlikely due to the hugeness of the set from which the sample is recruited). Related w ork In [7], Turing proposes his definition of AI (the “Turing test ”). The idea is that if a machine can successfully imitate a human, then this machine is AI. 
In the Turing test, as in our article, we have an exam and in order for a machine to be recognized as AI, it has to pass the exam. One difference is that there is an interrogator there, while we have a test with fixed tasks. That is, the Turing test is subjective, which is why it is an informal definition of AI. However, the main problem of T uring's definition is not in its subjectivity and informality, but in the fact that it does not define an intellect, but something more. It defines an educated intellect. For an intellect to pass the Turing test, it must be educated. We can even assume it has an Anglo -Saxon education because if it does not speak English it would not perform well on the test. Intellect and educated intellect are two different things, just like a computer without software and a computer with software are two different things . If you ask a mathematician what a computer is, he will answer you: “the Turing machine ”. If you ask the same question to a child, the child will answer: “A computer is something that you can play games on, watch movies, etc. ” That is, mathematicians perc eive the computer only as hardware, while the child perceives the computer as an indivisible system of software and hardware. When Turing gave a definition of a computer (the Turing machine), he described only the hardware, but when he defined intellect, h e described an indivisible system of intellect and education. Despite the Turing test defines educated intellect, Turing understands very well the difference between educated and uneducated intellect. In the end of [7] he asked the question: “Instead of trying to produce a program to simulate the adult mind, why not rather try to produce one which simulates the child's?” The AI definition we give in this article answers to this question. It does not include education and the tasks in the test do not imp ly any preliminary knowledge. That is, with every task we will assume that we are starting anew, and that in the course of solving the task we are learning, i.e. we find dependencies and learn from our mistakes. In [8], McCarthy says that the distinctive feature of AI is “the ability to achieve goals in the world ”. We will not argue with McCarthy, but we will only clarify what he has said. We will specify what a world is and what a random world is and what are the goals that AI needs to achieve. On this ba sis, we will create an IQ test where programs that achieve more goals have a higher IQ. McCarthy also says that there is no “definition of intelligence that does not depend on how it relates to human intelligence. ” Indeed, in this article we calculate the value of IQ without being related to human intelligence, but to say if a program is AI, we have to say what the minimum for the IQ is. For this minimum, we chose the number 0.7. This number is arbitrarily chosen and we will change it if this level turns o ut to be much lower or much higher than the IQ of the human. That is, in our definition human intellect is also used, but this is only for comparison and we use it only to determine one special constant. 4 Yet another question McCarthy asks is: “Can we ask a yes or no question? Is this machine intelligent?” The answer is “no” and we fully agree because we do not known what the minimum for the IQ is. However, we will not agree with McCarthy's answer to his next question. He asks: “Do computer programs have I Qs?” And he answers with “no”. In this article, as in [3], we’ve showed that IQ can be defined for programs. 
With his answer McCarthy probably wanted to say that IQ tests for people are not suitable for computer programs. The test we offer is not for peopl e but for programs. In some articles (for example, in [9]), the question is how to create a program that can solve IQ tests designed for humans. In our article, the issue we are dealing with is the opposite one. Here we create IQ tests designed for programs (for strategies). The tests we will offer will not be suitable for humans, although it is possible for a human to learn to solve them after some training. The trained human will note down what has happened and will analyze to find dependencies. The untrained human will not note down and will not be able to notice the dependencies unless they are presented in visual form or in other way which is convenient for perceiving by a human. It is very difficult to think of a human as a strategy for the following reasons. First, huma ns are not deterministic, that is, they do not implement a deterministic strategy. (A human can be seen as a not deterministic strategy or as a set of strategies from which one is randomly chosen.) Secondly, it is unclear how much time we have given the hu man to make a move. Third, it is unclear how seriously the human will approach the task (if he approaches more seriously, he will get a better result). Fourthly, humans are always trained and never start from scratch. Even a newborn baby has gone through s ome training in the mother's womb. Therefore, when a human implements a strategy, the outcome largely depends on his education, training, experience. In [10], Detterman threatens to “develop a unique battery of intelligence tests” that will be able to measure the IQ of computer programs. In this article, as in [3], we do not threaten, but we are creating such a test. It is a bit confusing that Detterman uses “computer” when he means “the system comprising of a computer and a program”. It is better to call this system a program because when we know the program, it's not very important which computer we'll run it on because the difference between two computers is only in their speed. Conversely, if we run two different programs on one computer, the differenc e in their behaviour will be enormous. Detterman intends to test computer programs by using IQ tests for people. That is, he, just like Turing, does not make a distinction between intellect and educated intellect. Therefore, Detterman's test will not prov ide time to learn, but will assume that the computer program that is being tested has been pre -trained. Detterman relies on the fact that computers are better than people in finding factual information. This is an ability of computers that is not directly related to the intellect. Similarly, computers are very powerful in arithmetic operations, but that does not make them smart. There is one thing in which we agree with Detterman. He notes that with hoc algorithms many tasks can be solved, but the real in tellect must be able to find the solution on its own. 5 In [11], the authors set the ambitious task of developing an IQ test that is appropriate for both AI and humans, and for programs like Siri and AlphaGo that are not AI. We can not compare AI with progr ams that are not AI because the first is a program that can be trained for a random task and the second is a program written for a specific task. For example, how do I compare a chess program with AI? 
The only thing a program designed to play chess can do is to play chess while AI can do everything although not immediately but after being trained. That is, the only way to compare these two programs is to let them play chess, and the chess program will have an advantage because AI will waste time learning, w hile its opponent would not need to learn how to play chess because it knows how to do it. If we compare the chess program with AI which is trained to play chess then the first program would still have some advantage over AI, just like the specialized hard ware (i.e., hardware made specially designed for a particular task) takes precedence over a computer program running on a computer. That is, the idea of comparing AI with programs that are not AI is not good. The authors of [11] refer to concepts such a s: abilities to acquire, master, create, and feedback knowledge. It is like explaining a term whose meaning we do not know by means of other terms whose meaning we do not know. [12] discusses very comprehensively and competently how AI's IQ is measured. A lso in [12], a very good overview of the different works devoted to this topic has been made. One minor disadvantage of [12] is that there is some controversy there. On the one hand, the article says “IQ tests are not for machines yet”. On the other hand, in earlier articles by Orallo [13, 14] a formal definition of AI based on the IQ test is given. It seems like Orallo is taking a step back and relinquishes the previous results. Another contradiction in [12] is that, on the one hand, it says “human IQ test s are not for machines”. This is something we fully agree with, as we agree with the arguments that accompany this statement. On the other hand, in the same article, the authors say they agree with Detterman that “there is a better alternative: test comput ers on human intelligence tests”. In [12], as well as in [11], a universal IQ is sought, which is applicable to all programs and even to humans. Here we differ from the authors of [11, 12]. We have already explained why it is not a good idea to compare AI with programs that are not AI. We also explained why it is not a good idea to use the same tests for AI and for humans. Article [13] is of significant importance because this is the first article that talks about a formal definition of AI, and this is also the first article introducing the IQ test for AI. Indeed in [13] this test is called C -test, but it is explicitly said that it is an IQ test for AI. We have to apologize for omitting to quote [13] in [3]. This is an omission that we are now correcting. Despite the seriousness and comprehensiveness of [13], there are some inaccuracies that we have to mark . The fact that we pay attention to some omissions and inaccuracies in [13] does not in any way mean that we underestimate the significance of this article. There are no perfect articles, and inaccuracies can be found in any article. Usually the first article that appears i n a new area is slightly confused and unclear. Normally, in later articles, things get clearer and more precise. The most serious inaccuracy of [13] is that it does not define AI but something else. We will call that another thing an “observer”. We could say that an observer is a program that has only input and no output, but that would be too restrictive. That is why we will say that an observer is a 6 program whose output does not affect the state of the world but the output can influence the reward that t he program will receive. 
An example of an observer is when we play on the stock market with a virtual wallet. We do not influence stock prices because we do not play with real money. (Even if we played with real money, it can still be assumed that our actions do not affect the sto ck exchange, because when playing with small amounts our influence is negligible.) When playing with a virtual wallet, at every step we change our investment portfolio and after each step the value of the portfolio changes depending on the change in stock prices. The reward we receive will be our virtual profit or loss. The fact that the program defined in [13] can not influence the world is substantial problem, because as people say: "You have to touch to learn". There are other proverbs that say that we can not learn by watching only. By depriving AI of the opportunity to experiment, we are seriously restricting it. In [5] it is noted that there are two problems that AI needs to solve. The first is to understand the world (to build a model of the world), and the second problem is to plan its actions based on the chosen model. That is, the “observer” solves only the first problem without solving the latter. In [13] the definition is limited to “observers”, but to give the term “observer” a definition is a task sufficiently meaningful by itself because it is half of the things AI needs to do. Unfortunately, this task is not fully solved because the observer defined in [13] is a little more special. It either understands the world wholly or not at all. That is, it is observer who works on the principle “all or nothing”. [13] suggests random strings, which can be continued in only one possible way. That is, it is supposed that there is a single simple dependence and this dependence will either be found or will not be found. This approach is too restrictive because it implies only observers who understand the world completely. This is only possible in very simple worlds. Any more complex world can not be fully understood and that’s why it has to be understood o nly partially. The authors of [13] are making serious efforts to make the dependencies exceptions -free. The definition of exception -free given in [13] is very interesting. However, we would not go that way because if dependencies included exceptions, that would be a way to allow a partial understanding of the world. Seeking a single exception -free dependence to describe everything is the reason why the observer defined in [13] relies on the “all or nothing” principle. We have a few other minor recommendat ions for [13]. We give some time to AI to find dependence. The question is whether we will give this time at once or we will distribute it in many steps. In general, AI is searching for dependencies throughout its entire live. That is, in many steps. In [13], dependency is sought in just one step. In our article, we assume that in life the steps are approx. one million (a thousand games with a thousand steps each one). This means that we must give the program, which is defined in [13], a million times lon ger time. Our recommendation is for dependences to be sought after each step, not only after the last one. 7 In [13] we do not like also the fact that the program that generates the test is not working due to combinatorial explosion. This is the program called “The Generator”. That is, [13] does not present a test, but only shows that such a test exists theoretically. Yet another problem is that two restrictions have been imposed. 
When the “observer” makes a prediction, it must bet all on single result only and always bet the same amount. It would be better to have the freedom to bet on more than one result, and to decide what amount to bet. When we are more confident we bet more, and when we hesitate, we bet less or we pass. These two limitations do not change things fundamentally, because the smart will prove to be smart even with these limitations, but that blurs the picture. If there were no such restrictions, the difference between the IQ of the clever and the stupid would be greater. We also do no t like the fact that the more complex tasks in the [13] have a greater weight than the simpler ones, when it should be the opposite. (There is a coefficient e, which is assumed to be non-negative and would be better to be negative.) It is true that when we make a test we give more points to the more difficult tasks, but that's because we suppose that students will lose more time on the more difficult task. This is not the case here. Here, for every task we give the same time. Therefore, if a simple task can not be solved, it is a serious problem and this should be reflected in the score. Moreover, sometimes we will guess the solution of a difficult problem randomly, so those must be given less weight, otherwise they will result in undeserved rise in the IQ. We have the concepts of global and local IQ (these two concepts are defined in our article). Global IQ is something accurate that can not be accurately calculated (because of a combinatorial explosion). Local IQ is not something accurate because it depen ds on the choice of the specific tasks in the test, but for specific tasks it can be easily and accurately calculated. What is defined in [13] is local IQ, not global IQ. However, let's not forget that the local IQ approaches the global IQ, but [13] says n othing about the “nerd” program, which is the main problem of local IQ. How are we to correct the definition given in [13] in order to get a definition of the observer that does not follow the principle “all or nothing”? Instead of taking a special k -comprehensible sequence, we will take a totally random program and the sequence that this random program generates. Instead of predicting the next symbol a single time, we will predict it at every step. We will not bet everything on a single prediction, but we will allow the bet to be divided between several predictions. We will also allow the bet to be different. (For example, we will assume that the sum of bets for the last five steps is limited to a given constant, but that we have the freedom to choose w hen to bet.) The reward to a sequence will be the sum of the rewards of all steps. Local IQ will be the arithmetic mean of the rewards to the sequences we have included in the test. In this way, we will not want the “observer” to understand the world completely because it can grasp some dependencies and thus, through a partial understanding of the world, get a relatively high IQ. A problem with [13] is that the program it defines is actually the program [1]. This is a simple program that predicts the next symbol on the basis of the simplest dependency that can generate the beginning of the sequence. 8 I even dare go a bit further and say: No program satisfying the definition given in [13] is better than [1]. 
That none is better as a result is clear, but I say that even as a matter of efficiency none is better because the definition in [13] does not give us any opportunities to play tricks and discover dependencies partially (step by step), leaving us the only option to foolishly go through all possibl e dependencies. Interestingly, in his more recent articles Orallo (as in [14]) has obviously understood the main omission made in [13], and what he defines there is no longer an “observer” but a program that can influence the world. Unfortunately this pro gram is not AI again. [14] defines a program that plays a game which consists in going round a labyrinth (graph) by chasing something good and running away from something bad. To play this game one definitely needs intelligence, but the program playing thi s game is not AI just like the program playing chess is not AI. Again in [14] the authors speak of reinforcement learning. That is, it is clear that they know very well what the general form of AI is. Why, then, do they not use this general form, but res trict themselves to the worlds of one specific game? I think the reason is that they are trying to avoid the worlds where fatal errors are possible. The problem with fatal errors is mentioned even in [2], but fatal errors are not really a problem. We human s also live in a world with fatal errors, but that does not prevent us showing who does better and who does worse. Well yes, it's a problem for people because we live only one life, but the IQ test consists of many tasks, each of which is a separate life. Even if fatal errors occur in several lives, this will not significantly affect the average score. Even with humans, life is not just one. From the point of view of the individual, life is one, but from the point of view of evolution, lives are many. Some of our heirs will die because of fatal mistakes, but others will survive. Thus the average score of our successors will not be significantly affected by the fact that some of them have made fatal errors. It is true that in this article, as in [3], we als o do not use a random world, but we only take computable worlds, but this constraint is not essential, because every time -limited world is computable, and we can easily assume that all worlds are limited in time. That is, by limiting ourselves to the compu table worlds, we are not limited at all, but only give more weight to the worlds that are simpler (according to Kolmogorov complexity). In [14] there is one more thing we do not agree with. This is something called the “discount -rate factor”. Of course, this is not something conceived by the authors of [14], but is something widespread among people working in the field of reinforcement learning. The idea of the “discount -rate factor” is that the past is more important than the future. Life is potentiall y endless, and to evaluate an endless life, we have to devalue the future. However, the past being more important than the future is not a good idea. It would be better to do the opposite, because in the past we have not yet been trained, and in the future we have already been trained. With humans, we do not count how many times a person wet his bed while he was a baby. Instead, we see what achievements the person has attained in his adult age. Therefore, the approach we have adopted in [3] and this article is that there is no “discount -rate factor” but life is limited. 
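To make the contrast explicit (in standard reinforcement-learning notation, which the paper itself does not use), a discount-rate factor γ < 1 scores a life as

$$V = \sum_{t=0}^{\infty} \gamma^{t}\, r_t ,$$

so that later rewards count for less, whereas the approach adopted here and in [3] amounts to

$$V = \sum_{t=0}^{T} r_t ,$$

i.e. a factor equal to one up to the fixed end of life T and zero afterwards.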
In other words, the approach here and in [3] is that the discount-rate factor is equal to one until a given moment in time (the end of life) and zero from that moment on.

Formulation of the problem

Let us have a Device which lives in a certain World. At each step the Device produces n symbols (this is the Action) and then receives m symbols from the outer world (in our terminology the first one will be the 'Reward' and the remaining m-1 symbols will be the 'Observation'). The Reward can have five possible values: {nothing, victory, loss, draw, incorrect_move}. We will use the words 'move' and 'action' interchangeably; if we perceive life as a game, it is more appropriate to say move rather than action. Likewise, we will use the words 'history' and 'life' as synonyms.

Let one step of the Device be a triple <Action, Reward, Observation>. The 'life' of the Device will be a sequence of steps resulting from the interaction of the Device with some World. 'Real life' will be life without incorrect moves: all <Action, Reward, Observation> triples where the Reward value is 'incorrect_move' are removed from the life, and what remains is the real life. A 'moment' will be a sequence of steps where the last step before the sequence and the last step of the sequence are correct and all steps in between are incorrect. That is, in life there may be more steps than moments, but in real life the number of steps is equal to the number of moments.

We will assume that the behavior of the Device and of the World is deterministic. In other words, we will assume that if we know which is the Device and which is the World, then we know which is the history. The behavior of the Device can be presented as a strategy, i.e. as a function which defines the next move of the Device for each start of life. Likewise, the World's behavior can be presented as a strategy which, for each start of life and for each Action of the Device, delivers the Reward and the Observation which the Device will get at the next step. It should be noted that the World's strategy does not depend on incorrect moves; hence we can imagine the strategy of the World as a function of real life. Conversely, the strategy of the Device will depend on incorrect moves (these moves provide additional information for the Device to use).

The Device and the World can be thought of as two strategies playing against one another, but that would not be accurate, because the Device has an objective and the World has not. Therefore, the World does not play against the Device. We assume that the World is simply there and does not care whether the Device feels good or bad.

Presenting both the World and the Device as strategies is not a very good idea, because a strategy remembers everything (i.e. it depends on the entire life until the moment). It makes sense to assume that the Device may not necessarily remember everything. A similar statement can be made about the World. There may be a world whose entire past (the previous life until the current moment) can be reconstructed from its internal state; it may be, however, that the world does not remember everything, which means that two different histories can lead to the same internal state of the world. This is the reason why we will present the Device and the World as functions.

Let us have two sets, Q and S. These will be the sets of the internal states of the Device and of the World.
These sets will be finite or countable, at most. Let q0 and s0 be the initial states of the Device and of the World. We will assume that these states are fixed, because life will depend on the initial states we start from, but we want life to depend only on the Device and on the World. The Device and the World will be the following functions:

Device: Q × Rewards × Observations × 2^Actions → Actions × Q

World: S × Actions → Rewards × Observations × S

For each internal state of the Device, Reward, Observation and set of moves which are confirmed as incorrect at the moment, the Device function will return an Action and a new state of the Device. We will assume that the Device never returns an Action which is confirmed as incorrect at the moment. The internal state of the Device will reflect everything the Device has remembered. What can it have remembered? It can remember everything that has happened until the moment plus its last Action. That is why our expression is Actions × Q rather than Q × Actions: we wish to stress that the internal state of the Device can remember the last Action.

For each internal state of the World and Action, the World function will return a Reward, an Observation and a new internal state of the World. It makes sense to assume that there are moments (internal states of the World) in which a specific action is impossible or incorrect. It is natural to assume, therefore, that the World function is a partial one. We will supplement the definition of the function so as to accommodate those moments as well, and in this way extend it to a total function. In these moments the Reward will equal incorrect_move, the Observation value will be irrelevant, and the new internal state will be the same as the previous one (although this, too, is irrelevant). The internal state of the World will reflect what the World has remembered. What can it have remembered? It can remember everything that has happened until the moment plus the last Reward and Observation. That is why our expression is Rewards × Observations × S rather than S × Rewards × Observations: we wish to stress that the new internal state can remember the last Reward and Observation. In [2] and [3] we defined the new <Reward, Observation> as a function of the new internal state, so we assumed that they must be remembered; now we dispense with this requirement. Let us take a world in which we play chess. The chess game ends and the new internal state of the world is a chessboard with the initial lineup. In this case we do not have to remember who won the last game; such a requirement would be an unnecessary burden.

This is how the life of the Device looks:

<a1, r1, o1>, <a2, r2, o2>, …, <a_{t-1}, r_{t-1}, o_{t-1}>

Let us see how the Device and World functions define life:

<a_{i+1}, q_{i+1}> = Device(q_{i-j}, r_{i-j}, o_{i-j}, incorrect_actions_i)

<r_{i+1}, o_{i+1}, s_{i+1}> = World(s_{i-j}, a_{i+1})

Here i-j is the last correct step before i+1. The set incorrect_actions_i contains the confirmed incorrect actions at this moment; it has j elements, so incorrect_actions_i = {a_{i-j+1}, …, a_i}. The set incorrect_actions_0 is empty, because there are no confirmed incorrect actions in the first moment, before the first step is made. In order to define life, we must fix the first Reward and the first Observation (r0, o0). We do not wish life to depend on them, so we decide that the value of all symbols of these two vectors is nothing. (The interaction loop that these definitions produce is sketched below.)
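The following is a minimal sketch, in Python, of that interaction loop together with the Local IQ score described later in the paper. It is an illustration, not the test program itself: the names are invented, and the 1000-move limit per game and the "utility draw/loss" penalties for dragging a game out are omitted. The scoring constants (victory 1, draw 1/2, loss 0) and the rule that a blind alley forfeits all remaining games are taken from the sections that follow.

```python
from enum import Enum

class Reward(Enum):
    NOTHING = 0
    VICTORY = 1
    LOSS = 2
    DRAW = 3
    INCORRECT_MOVE = 4

FINAL_REWARDS = {Reward.VICTORY, Reward.LOSS, Reward.DRAW}
SCORE = {Reward.VICTORY: 1.0, Reward.DRAW: 0.5, Reward.LOSS: 0.0}

def run_life(device, world, actions, q0, s0, games=1000):
    """Play one life.  device(q, r, o, incorrect) -> (a, q_next) and
    world(s, a) -> (r, o, s_next); returns the list of final rewards."""
    q, s = q0, s0
    r, o = Reward.NOTHING, None        # the zero step: reward and observation are all `nothing`
    results = []
    while len(results) < games:
        incorrect = set()              # confirmed incorrect moves at this moment
        a, q_next = device(q, r, o, frozenset(incorrect))
        r_new, o_new, s_next = world(s, a)
        while r_new is Reward.INCORRECT_MOVE:
            incorrect.add(a)           # the world's state is not advanced on an incorrect move
            if incorrect == set(actions):                          # blind alley: sudden death,
                results += [Reward.LOSS] * (games - len(results))  # remaining games count as losses
                return results
            # the Device is called again with the state of the last correct step;
            # a Device that repeats a confirmed incorrect move would be disqualified
            a, q_next = device(q, r, o, frozenset(incorrect))
            r_new, o_new, s_next = world(s, a)
        q, s, r, o = q_next, s_next, r_new, o_new
        if r in FINAL_REWARDS:
            results.append(r)
    return results

def local_iq(device, q0, test_worlds):
    """Average success rate over a fixed list of (world, s0, actions) test worlds,
    scoring victory as 1, draw as 1/2 and loss as 0."""
    scores = []
    for world, s0, actions in test_worlds:
        life = run_life(device, world, actions, q0, s0)
        scores.append(sum(SCORE[r] for r in life) / len(life))
    return sum(scores) / len(scores)
```

As the paper explains later, the Global IQ replaces this simple average over a fixed set of test worlds with a probability-weighted average over all admissible worlds, which is why it cannot be computed in practice.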
This is a sensible decision, because it is quite natural that in the first moment we do not get any reward and do not see anything significant (i.e. what we see in the first moment is the zero step). We will not define the zero action a0, because we do not use it. Now that we have fixed (q0, s0, r0, o0, incorrect _actions 0) we can build life up to step t and this life will depend only on the functions Device and World . <a1, r1, o1>, <a2, r2, o2>, …, < at-1, rt-1, ot-1> In addition to life, we will also construct the series of internal states of the Device and of the World, as well as the incorrect _actions i series of confirmed incorrect moves at the relevant moment. Let us note that at each given moment there may be many incorrect actions, but the confirmed ones are only those which we have tried and have thus verified that they are indeed incorrect at that moment. We will assume that the Device function always returns a move which is no t confirmed as incorrect. The opposite will lead to cycling. What shall we do if all moves turn out to be confirmed incorrect moves ? (I.e. if incorrect _actions i coincides with the entire Actions set.) In this case we will assume that the Device function is not defined. This is the case when we end up in a blind alley and there are no possible next move s. Note : The definitions of Device as a function and as a strategy are almost identical save that if we consider Device as a strategy, the order in which we tried the incorrect moves may matter at some moments, while in the function definition it would not matter. This difference can be resolved in two ways. The first one: when defining a function, instead of the set of confirmed incorrect move s we take the list of these moves. The second one is to limit our exercise only to those strategies which are insensitive to the order in which we tr ied the incorrect moves. In this study it would not matter what happens if the strategy tries the incorrect moves in another order, because we assume the strategy to be deterministic and the order i n which it tries the incorrect moves is fixed. Note : We assume here that when we try to play an incorrect move nothing happens and all we get is information that the move has been incorrect. We can let the Device check whether a move is correct or not even without necessarily playing that move (as we have do ne in previous studies ). In this case we need to modify the formulation of the problem and add an additional reward correct _move . The Device function will need one more argument containing the confirmed correct moves. When we try a confirmed correct move we will assume that we shall play this move. Where a move is not in the set of confirmed correct moves, then we assume that we only try it. T he World function will have another Boolean parameter to indicate whether the move is actually played or just tried. In the latter case, the function will return either correct _move or incorrect _move rewards . Then the move will not be played but will only be added to the next set of confirmed correct or confirmed incorrect moves. 12 A game will be a stretch of life locate d between two consecutive final reward s. In our terminology, ‘final rewards’ stands for the values {victory , loss, draw }. We will assume that the length of each game is limited to 1000 moves (i.e. 1000 moments, while the number of steps may be greater because of incorrect moves). We will define which strategy is an AI strategy and our definition will depend on a number of parameters . 
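For orientation, the parameters whose concrete values are fixed in the table just below could be collected into a single configuration record along the following lines. The field names are illustrative only; the values are the ones given in the table.

```python
from dataclasses import dataclass

@dataclass
class TestConfig:
    n: int                       # number of Action symbols
    m: int                       # number of Observation symbols (the first one is the Reward)
    action_alphabet: list        # k_1 .. k_n, each >= 2
    observation_alphabet: list   # k_{n+1} .. k_{n+m}, each >= 2, with k_{n+1} = 5
    global_tapes: int = 7            # tapes No 3 to 9
    internal_states: int = 1000      # states of the world's Turing machine
    test_worlds: int = 1000
    games_per_life: int = 1000
    moves_per_game: int = 1000
    machine_steps_per_move: int = 1000
    generation_probability: float = 0.10
    min_ai_iq: float = 0.70
```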
We will fix the values of the majority of these parameters to one thousand, because 1000 is a nice round number. Another good round number is one million. Replacing one thousand with one million would produce another AI definition, which would not be much different from the one we are dealing with.

Parameters

- Number of Action symbols: n
- Number of Observation symbols: m
- Number of symbols possible for each Action symbol: k1, …, kn, with ki ≥ 2
- Number of symbols possible for each Reward and Observation symbol: k_{n+1}, …, k_{n+m}, with ki ≥ 2 and k_{n+1} = 5
- Tape symbols count: MaxSymbols = 10 + max ki over i in [1, n+m]
- Global tapes count: 7 (Nos. 3 to 9)
- Internal states count: 1000
- Test worlds count: 1000
- Maximum number of games per life: 1000
- Maximum number of moves per game: 1000
- Maximum number of Turing machine steps per one life step: 1000
- Probability by means of which the machine is generated: 1/10 = 10%
- Minimum IQ required for a strategy to be recognized as AI: 0.7 = 70%

The first four rows of the table above provide the parameters which describe the AI input and output. They tell us what the format of the sought AI is, so we cannot vary these parameters arbitrarily. The next eight parameters influence the choice of worlds selected for the test and, therefore, the IQ we get, and thus the AI definition as such. The last parameter also influences the definition. This means that if we varied the last nine parameters we would also vary the AI definition. The table does not include other parameters which also influence the AI definition; for example, it does not say which pseudorandom number generator we will use in order to select the test worlds. Thankfully this parameter's bearing on the definition is insignificant.

The possible symbols for the i-th Action symbol will range from 0 to ki − 1. Likewise, the possible symbols for the i-th Observation symbol will range from 0 to k_{n+i} − 1. Let us call the 0 symbol 'nothing'. The first Observation symbol will be the Reward. When it comes to the Reward, we will name the symbols 1, 2, 3 'victory', 'loss' and 'draw', and these will be the final rewards. The reward 4 (incorrect move) will not be returned by the Turing machine as a result of invoking the state q1; this reward will be returned only when the machine goes cycling (makes more than 1000 steps without reaching a final state) or crashes (e.g. invokes a return command when the stack is empty). The tape symbols will be as many as are needed for coding the Action and the Observation, i.e. the maximum ki for i from 1 to n+m. To that we will add another 10 utility symbols, the first of which will be the empty symbol.

How will the test work?

We will select 1000 worlds for the test. In each of these 1000 worlds, the candidate will live one life consisting of not more than 1000 games. Finally we will calculate the number of victories, losses and draws. This will give us an IQ, which is an arithmetic mean where victory is 1, loss is 0 and draw is 1/2. We will pick the worlds randomly, but we want the selected worlds to be fixed, so we will use pseudorandom selection by setting the pseudorandom generator to 1 before starting the selection process. Thus we will always use the same worlds in the test. In many of the generated worlds we will win every time or lose every time, no matter what we do. It is meaningless to include such worlds in the test, so we will discard all of them. Thus we will be left with 1000 meaningful worlds.

What is a World?
In newspapers one can come across problems such as ‘Which is the next number in the series?’ If all possible series are equa lly probable, the next number may be any one. When looking for the next number in a series we assume that simple r series are more probable than more complex series. We therefore use the principle known as ‘Occam’s razor’. The situation with worlds is simi lar. If we perceive the world as a strategy and all strategies are equally probable, then there is no way we can tell the future and a basis for preferring one move over another is lacking. We will therefore apply Occam’s razor to the worlds and will assum e that more simple worlds are more probable. What makes one world simpler than another world? We will use the Kolmogorov complexity. That said, if the world is a strategy, the more simple strategy is the one which is generated by a Turing machine with less states. We have limited the life so all strategies are finite . Therefore, all strategies are computable . So we will assume that a world is some Turing machine which calculates some strategy. 14 Which will be the set of all worlds? The set will be limited to Turing machines with 1000 states. This set includes machines with less than 1000 states, because each machine can be filled up with unreachable states . Turing machines which use more than 1000 states (and cannot be simplified) will be excluded from the set. We will consider these worlds as too complex and will accordingly exclude them from the definition. The result is a huge set of strategies (worlds) . The more simple strategies here will bear more weight (will be more probable), because there are generated by more Turing machines (from the set). Furthermore, we will assign some weight to each Turing machine. Hence we will prefer some machines over others . For example, if a machine tends to use more states with lower numbers, we will prefer it to the one which uses more states with higher numbers. There are two reasons why we assign different weights to the different Turing machines. The first one is that we give more weight to more simple machines (e.g. the ones with less reachable states are simpler). The second reason is that we wish to randomly generate a machine which works, and this is very difficult. The machines which have a greater chance of being working one s will have more weight, therefore we increase the probability of selecting th ese machines and accordingly our chances of hitting a working machine. The weight of a machine is equal to the probability of this machine to be selected by us. We will not calculate this probability. That would be a rather difficult calculation. We simpl y select a machine randomly and by this selection we induce probability as a parameter of the formula. That is, the probability will not be calculated when the Local IQ is calculated . When calculate the Global IQ, we should consider all Turing machines in order to calculate the success rate of the Device with any of these machine s and the probability that this very machine is selected. We have to multiply these numbers and sum up the values for all machines. This is an impossible calculation because of combinatorial explosion. The fact that we complicate the calculation by adding t he calculation of probabilities is not a problem. We do not change anything in this way. While the Global IQ remains theoretically computable , in practice it is still uncomputable , even more uncomputable than before . How to calculate the IQ of a particul ar computer program? 
We may say that the IQ is the arithmetic mean of success rate s of all worlds. (Note that the worlds are not equally probable and we need to multipl y the success rate by the probability (weight) of each world.) We will call the so obtained IQ a Global IQ. The Global IQ definition is very nice save that Global IQ cannot be calculated. To be precise, it can be calculated in theory, but not in practice because of the huge number of worlds which we have assumed to be possible . Nevertheless, we can still calculate the Global IQ by approximation using statistical methods. We will select randomly 1000 worlds and will calculate the arithmetic mean for these worlds. The result obtained would be close to the Global IQ . 15 The problem with this approach is that different selections of test worlds will yield different Global IQ approximations. What we need is a program which awards to the candidate the IQ this cand idate deserves, withal it must be a specific value rather than an approximation of something else. Hence we will fix the randomly selected 1000 worlds and will say that the Local IQ is the average success rate across these 1000 worlds. (In this case different worlds will not have different weights, because the weight i s already accounted for in the selection of test worlds. The more weighty ones are more likely to be selected.) The idea of fixing the randomly selected worlds is tantamount to giving the same problems to all candidate students . The Local IQ is an easily computable function and describes well our understanding of what IQ is. There is just one problem. There is one program which we will call a ‘crammer’. This program is designed specifically for the 1000 test worlds and its Local IQ is very high, but its Global IQ is low. How should we resolve this issue? We will use the Local IQ in order to find the AI. When we hit a program with a very high Local IQ and suspect that this progra m is a crammer, we will give it additional problems, so that we calculate a second local IQ. This means that we will take the next 1000 random worlds and derive another arithmetic mean from these worlds. We can go on with a third and fourth Local IQ. How does Turing machine make a move? We present the world as a Turing machine . Thus, our machine must take Actions as inputs and deliver Observations as outputs. The first m+1 states of the machine will be special. The qm+1 state will be the initial and the final state of the machine. In states q1 to qm we will use in order to output the Observation symbols. When the machine makes its first move, all tapes will be empty (i.e. they will be covered with the  symbol ). The first running tape will be No 3 (Nos. 0, 1 and 2 are reserved for utility purposes ). Turing machine will make a move by starting from the initial state qm+1 and finishing in the same state (which is also the final state). In the beginning of the move, a word of n symbols (the Action word) will be recorded on the current tape under the head of the machine (anything previously written in the first n symbols on the tape before the recording of the word will be deleted). At every step it will be observed if the states q1 to qm are called up. When these states are called up, the Observation symbols will be outputted . If the qi state is called several times within one move, only the first call will be considered. If it is not called up at all, the i-th Observation symbol will be nothing . 
If it is called up at least once, i-th Observation symbol will be the value of the ‘head memory’ at the moment following the first call on the qi state. If the i-th Observation letter is too big ( equals or exceeds the k n+i), the machine will crash in a cycling -like fashion. 16 Incorrect moves When the Turing machine fails to make a move because it goes cycling or crashes for some other reason, we will deem that the input Action entered at the start of the move is incorrect or impossible. In this case we go back and try to enter another Action. More precisely, it is not us who go back, instead we will turn back the world ( the Turing machine ). We give the Device an incorrect _move reward and take the next move the Device chooses to give us. We continue with this move as if the Device has played it instead of the incorrect move. (If the Device plays the same incorrect move, w e disqualify it because it cannot try the same move twice in a single moment.) Taking the Turing machine back is a perfectly natural operation. All we need is to memorize the total state of the machine before the start of the move. If we reconstruct this total state , we can enter another Action and continue was if it were the first action we ever tried . Another approach was applied in [3], where it was assumed that all moves are correct and the cycling problem was resolved by aborting the execution of the program, awarding a utility draw and restarting the program from its initial state. With this type of restart the tape remains in the state it was at the time of abortion . This is very poor practice. You will know that to turn down your PC you should enter the ‘Shut Down’ command. The other option is to pull the plug off the socket, but this will leave the hard disk in the state it was when the plug was pulled off. This treatment would make your PC behave in a bizarre and illogical manner. The same can be said when a Turing machine is shut down randomly and then restarted from its initial state. We want the world to be as logical as possible and take care to avoid these abortions . Now that we have opened the gate for incorrect moves, we sho uld know what to do when all moves are incorrect. Let us assume that we have about 100 possible Actions. We try them all and all of them prove to be incorrect. So we will say that we have ended in a blind alley with no way out. If life is the sequence of 1000 games, the n life ends with a natural death after 1000 games or with a sudden death ( blind alley ). How to reward a life that ends prematurely? We can reward only the games played until death. Then our A I strategy will prefer to commit a suicide (enter a blind alley ) as soon as it realizes that things in the current life have gone wrong and only losses lie ahead. We do not want suicidal AI strateg ies and will therefore opt for another solution. When a strategy is caught in a blind alley , we shall say that all remaining games up to 1000 are losses. This will provide assurance that the strategy will not go in blind alley on its own will, but will keep fighting to the end. Will the strategy realize that it i s entering a blind alley ? One cannot learn this by trial and error because you fall in a blind alley just once. But, if the strategy is very smart, it may be able to predict that some blind alley is forthcoming. For example, if the number of possible moves is decreasing, this is not a good sign, because in the end of the day one may run out of possible moves and enter in a blind alley . 
We penalize beating around the bush

We said that a game cannot be longer than 1000 moves. What shall we do if a game nevertheless continues for more than 1000 moves? We shall then award a utility draw. Note that by doing so we do not interfere in the workings of the Turing machine: it continues to play the same game. The intervention concerns only the strategy, because it gets a draw reward when the world (the Turing machine) has awarded nothing. The fact that we do not abort the machine guarantees that the machine will maintain its logical behavior. If another 1000 moves are played without a final reward, we award a utility loss. That is, we penalize strategies which kick the can down the road. We want an AI strategy which aims to close the game quickly and start the next one.

By penalizing early death and procrastination we change the IQ of the random strategy. If we play heads or tails, the expected IQ is 1/2. To earn a higher IQ one must purposefully aim to win; for a lower IQ, one must purposefully aim to lose. By declaring all post-death games lost, we reduce the IQs of all strategies. Similarly, the inclusion of utility losses leads to a further reduction. This means that the IQ of the random strategy will be less than 1/2. We will be able to calculate the exact IQ value of the random strategy once we write the program which calculates the Local IQ. Naturally, the random strategy is non-deterministic, which means that we should test it a few times and take the mean value. Hence, the IQ of the random strategy will be an approximate value. Only deterministic strategies have exact Local and Global IQ values.

More logical and more efficient Turing machine

As announced already, we will redefine the Turing machine in order to make it more logical and more efficient. To this end, we will make the Turing machine a multi-tape machine and will enable it to call subprograms.

Why would such a machine deliver a more logical world? First, because of the multiple tapes of the machine. It is legitimate to present the state of the world as a Cartesian product of many parameters which are only weakly correlated with one another. That is why a multi-tape Turing machine produces a more logical model of the world than a single-tape machine. Second, the machine will have subprograms which can be called up from many points. This is more logical than calling a different subprogram each time. When no stack is in place, we have to remember where we should go back to after the subprogram is executed, so each subprogram could be called up from one point only (otherwise it would not know where to go back). When we invoke a subprogram we give it a clean tape on which it can write its intermediate results. Otherwise the subprogram would have to use one of the common tapes for this, and that would largely frustrate its logic due to the weird interactions that would occur among different calls of the subprogram.

Why would this machine make fewer steps and have fewer internal states? Higher efficiency means fewer steps and, more importantly, fewer internal states. It is important that the steps are not too many, because a machine making more than 1000 steps per move will be presumed to be in a cycling situation and will be stopped.
It is also important that the internal states are not too many, because we have limited our set to machines having fewer than 1000 states. Hence our machine should preferably use fewer states. Having multiple tapes in the machine means there will be fewer steps, because a single tape forces the head to make a lot of movements in order to write intermediate results; it is easier to write these results on another tape. More significantly, we will reduce the number of internal states of the machine. A classical Turing machine uses a huge number of internal states (it memorizes everything that needs to be memorized in its internal state). For example, when a subprogram has been called up we must remember where the subprogram was called from (where it must return to). If we want to move a symbol from one place to another we must remember which symbol we have picked up. Thus, we complicate the Turing machine so that, in addition to its internal state, it also remembers which tape is the current one, the stack index (for subprograms) and a 'head memory' symbol.

A stacked Turing machine

What would the program of this machine look like? It will be a table of 1000 x MaxSymbols, where 1000 is the number of possible commands and MaxSymbols is the number of possible symbols. In each field of the table there will be five commands.

The first command is 'Write on tape'. It has MaxSymbols+2 possible values: (unchanged, the previous value of the head memory, a concrete symbol).

The second command is 'Change the head memory'. It has MaxSymbols+2 possible values: (unchanged, the previous value of the symbol on the tape, a concrete symbol).

The third command is 'Move head'. It has three possible values: (left, right, stay there).

The fourth command is 'Subprogram'. It has two fields: 'State', whose value is in the interval [0, 1000] (where 0 is the command NULL), and 'New current tape', whose value is in the interval [0, 9] (where 0 is the previous current tape, 1 is the current tape of the parent program, 2 is the temporary tape created especially for this invocation of the subprogram, and 3 to 9 are the global tapes).

The fifth command is 'Next state'. Its value is in the interval [0, 1000] (where 0 is the command return).

When a subprogram is called up, what do we write on the stack? Three things: where we shall go back to after return (the fifth command), which was the previous current tape (so that we can recover it after return), and which temporary tape was created especially for this invocation of the subprogram. The new temporary tape itself should also be stored somewhere; let that be not on the stack, but somewhere else. When the return command is executed, the corresponding temporary tape is destroyed.
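The table just described can be written down as a plain data structure. The field names and the MAX_SYMBOLS value below are our own choices made for the sake of illustration; only the ranges come from the text.

```python
from dataclasses import dataclass
from typing import List, Union

MAX_STATES = 1000      # the machines under consideration have fewer than 1000 states
MAX_SYMBOLS = 30       # placeholder: the text does not fix this number here

Symbol = int           # a concrete symbol is just an index below MAX_SYMBOLS


@dataclass
class Command:
    """One field of the 1000 x MaxSymbols table (the five commands, with the
    fourth command split into its two fields)."""
    write_on_tape: Union[str, Symbol]       # 'unchanged', 'head_memory' or a concrete symbol
    change_head_memory: Union[str, Symbol]  # 'unchanged', 'tape_symbol' or a concrete symbol
    move_head: str                          # 'left', 'right' or 'stay'
    subprogram_state: int                   # 0..1000, 0 means NULL (no call)
    subprogram_tape: int                    # 0 previous current tape, 1 parent's current tape,
                                            # 2 fresh temporary tape, 3..9 global tapes
    next_state: int                         # 0..1000, 0 means return


@dataclass
class StackFrame:
    """What is pushed when a subprogram is called."""
    return_state: int            # where to go back after 'return'
    previous_current_tape: int   # restored after 'return'
    temporary_tape_id: int       # the tape created for this invocation (destroyed on return)


# The program of the machine: one column of MaxSymbols commands per state.
Program = List[List[Command]]    # Program[state][symbol] -> Command
```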
Populating the table

In order to create a random Turing machine, we must populate the 1000 x MaxSymbols table with random commands. Before we do that, let us say how a random command is selected. We must generate 6 numbers (the fourth command has two fields). These numbers could be made equally probable, but we prefer smaller numbers to be more probable than larger numbers. Why? Because if all states are equally probable, the program will be stretched over many different states, while we wish certain states to be used more frequently than others. Likewise, we want some tapes to be used more often than others. The same applies to the utility symbols (but not to the non-utility ones).

How shall we select a number from 0 to k with decreasing probability? Let us toss a coin: if it is heads we select the number 0 (so 0 is taken with probability 1/2); if it is tails we toss again, and if we then get heads we select the number 1, and so forth. If we do not get any heads all the way to k, we restart from 0. A probability of 1/2 produces a very steep decrease in the probability of each next number; that is why we will use a probability of 1/10 when selecting all 6 numbers. What we actually use is the geometric distribution.

Note: Only the number of the subprogram will be assigned differently. In this case 0 will be taken with a probability of 9/10 instead of 1/10 (thereafter we continue with 1/10). This is because we do not want subprograms to be invoked too often and the stack to be populated redundantly. In this way the probability of the return command will be equal to the probability of invoking a subprogram.

Now that we have explained the generation of commands (one box in the table), let us explain the generation of one column consisting of MaxSymbols boxes. We can imagine one such column as a switch with MaxSymbols cases. When we write a program and use a switch, we describe only some of the cases rather than all of them; the remaining cases are described as default (i.e. all other cases are the same). A program will be more logical if the same command applies to the majority of the cases. Therefore, we will select randomly how many different commands will exist in the column, using the decreasing-probability procedure explained above. Once we know how many different positions there are, we select randomly which these positions will be (again, smaller numbers are more probable). Finally, we populate the positions that need to be different with different commands, and put the same command in all the other positions.

So we have an algorithm for populating one column and can populate 1000 columns. At the end of the process we have the first random Turing machine. This, however, is a cumbersome process, so we derive the second machine from the first one just by changing the first m+1 states (which are the special ones) and another 10 random states. This change is sufficient because the vast majority of the states are not used, and by changing the special states we start using other states. Thus, the new Turing machine will be substantially different in terms of the states it uses, even though the two machines are almost identical in terms of their unreachable states.
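The decreasing-probability selection used throughout this section is easy to write down. The sketch below follows the interpretation given above (a truncated geometric distribution that restarts from 0 when no success occurs up to k); the special 9/10 rule for the 'Subprogram' state is noted in a comment.

```python
import random


def sample_decreasing(k, p=0.1, rng=random):
    """Pick a number in [0, k] with geometrically decreasing probability:
    0 with probability p, 1 with probability (1 - p) * p, and so on.
    If no success occurs all the way to k, restart from 0."""
    while True:
        for value in range(k + 1):
            if rng.random() < p:
                return value

# For the 'Subprogram' field the text takes 0 (NULL, i.e. no call) with probability
# 9/10 and only then continues down the remaining values with probability 1/10 each,
# so that subprogram calls and returns stay roughly balanced.
```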
Discarding the slag

So we have a fast and simple procedure for generating 1000 test worlds. The problem, however, is that most of these worlds are useless. Across 1000 worlds there may be as few as 2 or 3 interesting ones. Therefore, we will discard all useless worlds and do our test with 1000 interesting worlds. An example of a useless world is one in which the very first move leads to a blind alley. We will verify that a world is interesting by letting the random strategy live one life in that world. In this life, we do not want to see the strategy caught in a blind alley. We want at least one victory and at least one loss. We do not want more than 10 utility draws and utility losses. If all these conditions are met in this random life, we will deem the world interesting.

How do we construct a test with 1000 interesting worlds? First, we initialize the pseudorandom number generator with the number 0 and randomly generate World Zero. Then we initialize the generator with 1 and apply a minor change to World Zero so as to generate World 1. If World 1 is interesting, we set the generator to 2 and generate World 2. If World 1 is useless, we initialize the generator with 2, 3, 4, 5, etc. until we obtain from World Zero an interesting World 1. We keep going until we have generated 1000 interesting worlds. We will not save the worlds themselves; we will only save an array of 1000 numbers. These are the values with which we should initialize the generator in order to obtain each new interesting world from the previous interesting world.
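The whole test-construction procedure fits in a few lines once we assume three helpers: generate_world(seed), mutate_world(world, seed) (the 'minor change') and is_interesting(world) (one random life with at least one win, at least one loss, no blind alley and at most 10 penalties). These helpers and the function below are only a sketch of the procedure described above, not a prescribed implementation.

```python
def build_seed_array(generate_world, mutate_world, is_interesting, n=1000):
    """Return the array of n numbers that regenerates the suite of interesting worlds."""
    seeds = []
    world = generate_world(0)                  # World Zero
    seed = 1
    while len(seeds) < n:
        candidate = mutate_world(world, seed)  # minor change driven by this seed
        if is_interesting(candidate):
            seeds.append(seed)                 # only the seed is stored, not the world
            world = candidate
        seed += 1
    return seeds
```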
Note: Please observe that we select different Turing machines with different probabilities. These different probabilities are the different weights of the different machines. We need this clarification because we aim to provide an accurate definition of the Global IQ. The definition is:

Global IQ(Strategy) = Σ_{TM ∈ Interesting} P(TM | Interesting) · Success(Strategy, TM)

In this equation, P(TM | Interesting) is the conditional probability that the machine named 'TM' is selected, given that its world is interesting. Success(Strategy, TM) is the arithmetic mean calculated after the strategy named 'Strategy' has lived one life in the world defined by the machine named 'TM'. The sum is taken across all machines with 1000 states whose worlds are interesting.

The Global IQ above practically cannot be calculated; it is the theoretical value which we try to approximate by means of the Local IQ. The Local IQ therefore is the average over the fixed test worlds:

Local IQ(Strategy) = (1/1000) · Σ_{i=1}^{1000} Success(Strategy, TM_i)

where TM_i is the i-th preselected test world (Turing machine).

The final definition

Definition: An AI is any strategy whose Local IQ exceeds 0.7.

Here we selected the same value as we did in [3]. Both here and in [3] the value (0.7) is selected arbitrarily. Likewise, a corporation may announce that it is looking for a CEO and will be happy to employ anyone who solves 70% of the problems given in the test. The corporation may have to adjust this level later if the bar proves too low or too high.

Definition: A computer program is an AI if the strategy it plays in the first 1000 games is an AI strategy.

Analogy with the Grandmaster definition

We will provide another explanation of the AI definition by repeating the same construct, but this time we will define a chess-playing program. We have already done so in [4], but as the present study revisits and updates the construct described in [3], here we will also revisit the construct in [4] and reflect the relevant adjustments.

We will define a chess-playing program as a program which plays like a grandmaster. If someone wants to be a grandmaster, he or she should earn an ELO rating (calculated by the ELO rating system) of at least 2500. The bad news is that the ELO rating calculation is based on his or her performance in playing against other people. To obtain an objective assessment of how good the player is, we will replace the other players with a finite set of deterministic computer programs. Then our chess player will play one game against each of these other players, and his or her result will be the arithmetic mean of the results achieved in the individual games. In case a player hangs (goes cycling) and does not make it to the end of the game, we will award a utility victory to the opponent. We will do the same if the player plays an incorrect move. That is, a program which aspires to become a grandmaster must play correct moves and had better avoid cycling, because otherwise we will penalize it with a utility loss. This applies both to the chess-playing program and to the computer programs in the finite set against which we assess the performance of our program.

Which will be the finite set of deterministic computer programs used in the grandmaster test? We could take all programs that do not exceed a predefined length. Most of these will play randomly, will often go cycling and will make many incorrect moves. It would be reasonable to screen them and leave only the interesting ones (programs that play too mindlessly are ballast which only makes the test more cumbersome). Interesting programs will be those that do not cycle, do not make incorrect moves and, on top of that, play relatively well.

When looking for interesting worlds to use in the AI test, we used the brute-force search method. This method cannot be used here, because it is very unlikely that we randomly hit a program which does not cycle and makes only correct moves. Hence, instead of the set of all programs which do not exceed a certain length, we will take a specific set of programs in which all programs are interesting (they do not cycle, do not make incorrect moves and play relatively well).

Let us take a particular program which calculates the next five moves and distinguishes between three types of positions: winning (victory within the next five moves), losing (loss within the next five moves) and undetermined (all other positions). Which move will this program play? It will pick randomly one of the winning positions. In the absence of a winning position, it will choose randomly one of the undetermined positions. If an undetermined position is also lacking, the program will go ahead randomly with a losing position.

What we described above is a non-deterministic program. If we count how many deterministic strategies this program can implement, they are far too many (but still finitely many, because the length of the game is limited). Each of these strategies is calculated by infinitely many programs, but we shall assume that for each strategy we have selected one program which calculates it. As a result, we obtain a huge set of programs and may be tempted to say that our ELO rating will be calculated on the basis of games against all these programs. Unfortunately, these programs are far too many and we cannot calculate the ELO in this way because of combinatorial explosion. Instead, let us select 1000 of these programs and calculate the ELO after 1000 games (one game played against each program). We will select these programs randomly, but we will select them only once and will always calculate the ELO on the basis of the same test programs.

How can we make a deterministic program out of a non-deterministic one? That is easy: instead of picking a random move, we pick a pseudorandom move. Before starting the game, we will initialize the random number generator with the number one (1). Thus we obtain one deterministic program. We need 1000 deterministic programs, so we will initialize with the numbers from 1 to 1000 and obtain 1000 different deterministic programs (among these 1000 there might be some identical programs, in particular if the random number generator is not as good as it should be). These will be the programs from which we will derive the ELO rating.
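Deriving the 1000 deterministic opponents from the single non-deterministic five-move player can be sketched as follows; choose_move(position, rng) stands for the look-ahead player described above and is not spelled out here, so this is an illustration of the seeding trick only.

```python
import random


class SeededPlayer:
    """Deterministic wrapper: with the same seed, the same game is replayed identically."""

    def __init__(self, seed, choose_move):
        self.seed = seed
        self.choose_move = choose_move
        self.rng = None

    def new_game(self):
        self.rng = random.Random(self.seed)    # re-initialise the generator before every game

    def move(self, position):
        return self.choose_move(position, self.rng)


def reference_players(choose_move, n=1000):
    """The fixed set of opponents: seeds 1..1000, selected once and reused forever."""
    return [SeededPlayer(seed, choose_move) for seed in range(1, n + 1)]
```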
Let us say that a program becomes a grandmaster if the ELO calculated in this way is more than 90%. This is an arbitrary value picked by us. It may turn out that the value should be higher. We may even have to adjust the test programs and ask them to calculate the next 10 moves rather than 5. Then we stumble upon the crammer issue again. It is perfectly possible to learn by heart how to beat some of these 1000 programs. Remember, these are deterministic programs, and if we beat a deterministic program once, we can beat it as many times as we want by replaying the same game.

A fairly good analogy emerges between the grandmaster and AI definitions. In the first case we refer to an ELO rating and in the latter case to an IQ rating. In the first case we play chess against 1000 opponents and in the latter case we live 1000 lives in 1000 worlds. The difference is that in the chess scenario we play one game against each opponent, while in the IQ scenario we play one thousand games in each life. This is so because in the first case the rules of the game are fixed and it is expected that a grandmaster program knows the rules and knows how to play (rather than learning how to play while it plays). In the second case, the AI does not know the rules of life and needs one thousand games in order to discern these rules and learn how to live successfully.

Conclusion

In this study we described an exam (test) which enables us to calculate the IQ of any program. More precisely, we specified the program which will conduct the test and tell us what the candidate's IQ is. Given the deep granularity of our specification, we can now relax and ask some student to do the coding for us, perhaps as a course project. This program will enable us to do the AI test in a matter of minutes. This is the time which the examiner program needs to calculate the test result. We should then add the thinking time we afford to each candidate. When we consider the AI as a strategy, we need not ask how much time the candidate will spend thinking. When we consider the AI as a program which calculates an AI strategy, we must announce how much time we afford the program for calculating one step. So, the time for the test will be some minutes to calculate the test result, plus the thinking time which we afford to the candidate, plus some time for the generation of the test (construction of the array of 1000 numbers). The last of these we do not count, because it will be spent only once.

Let us think about what this test can be used for. If someone comes to us with a particular program, we can test it and say what the IQ of that program is. But we do not have any AI-candidate programs, so we have nobody to apply this test to. One possible application of the test described in this study is finding the AI. We might use the brute-force search method. Of course, we can search with this method, but combinatorial explosion would not let us go too far. There is, however, a smarter way of searching. We can construct a genetic algorithm. We will sit at one powerful computer, create a population of AI-candidate programs, and calculate each candidate's IQ. We shall combine candidates with higher IQs in order to obtain offspring with even higher IQ. We shall kill the low-IQ candidates in order to make room for more promising programs. Using this natural-selection approach, we shall obtain programs with very high IQs.
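A toy version of such a search could look like the loop below. The fitness function would be the Local IQ test, while crossover and mutate would act on the command tables of the candidate programs; all three are assumed helpers and nothing in this sketch is prescribed by the construction itself.

```python
import random


def evolve(initial_population, fitness, crossover, mutate,
           generations=100, keep=20, rng=random):
    """Minimal genetic-algorithm loop over candidate programs."""
    population = list(initial_population)
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[:keep]                        # kill the low-IQ candidates
        children = []
        while len(survivors) + len(children) < len(population):
            a, b = rng.sample(survivors, 2)
            children.append(mutate(crossover(a, b)))     # combine high-IQ candidates
        population = survivors + children
    return max(population, key=fitness)
```

Because fitness here means running a full Local IQ test, the fitness calls dominate the cost of the search; that is one reason the examiner program should be as fast as possible.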
The genetic algorithm is one way of finding the AI, but we would end up with a program whose inner workings are enigmatic to us. If we are to control a program, we had better write it ourselves rather than generate it automatically. For this reason, I am a proponent of the direct approach to creating AI. Therefore, I assert that we should write this program with our own hands.

Acknowledgements

I wish to thank my colleagues Ivan Koychev, Anton Zinoviev and Andrey Sariev for the beneficial discussions on the AI Definition topic. Special acknowledgement goes to Ivan Koychev for his instrumental contribution to my summary of related work.

References

[1] Dimiter Dobrev. First and oldest application. 1993. http://dobrev.com/AI/first.html
[2] Dimiter Dobrev. AI - What is this. PC Magazine - Bulgaria, November 2000, pp. 12-13 (in Bulgarian, also in English at http://dobrev.com/AI/definition.html).
[3] Dimiter Dobrev. Formal Definition of Artificial Intelligence. International Journal "Information Theories & Applications", vol. 12, Number 3, 2005, pp. 277-285. http://www.dobrev.com/AI/
[4] Dimiter Dobrev. Parallel between definition of chess playing program and definition of AI. International Journal "Information Technologies & Knowledge", vol. 1, Number 2, 2007, pp. 196-199.
[5] Dimiter Dobrev. Two fundamental problems connected with AI. Proceedings of Knowledge - Dialogue - Solution 2007, June 18-25, Varna, Bulgaria, Volume 2, p. 667.
[6] Dimiter Dobrev. Giving the AI definition a form suitable for engineers. April, 2013. http://www.dobrev.com/AI/
[7] Alan Turing. Computing machinery and intelligence. Mind, 1950.
[8] John McCarthy. What is Artificial Intelligence? November, 2007. (www-formal.stanford.edu/jmc/whatisai/)
[9] Huazheng Wang, Fei Tian, Bin Gao, Jiang Bian, Tie-Yan Liu. Solving Verbal Comprehension Questions in IQ Test by Knowledge-Powered Word Embedding. arXiv:1505.07909, May, 2015.
[10] Detterman, Douglas K. A Challenge to Watson. Intelligence, v39 n2-3, pp. 77-78, Mar-Apr 2011.
[11] Feng Liu, Yong Shi, Ying Liu. Intelligence Quotient and Intelligence Grade of Artificial Intelligence. arXiv:1709.10242, September, 2017.
[12] Dowe, D.L., & Hernández-Orallo, J. IQ tests are not for machines, yet. Intelligence (2012), doi:10.1016/j.intell.2011.12.001
[13] Hernández-Orallo, J., & Minaya-Collado, N. (1998). A formal definition of intelligence based on an intensional variant of Kolmogorov complexity. Proc. intl symposium of engineering of intelligent systems (EIS'98), February 1998, La Laguna, Spain (pp. 146-163). ICSC Press.
[14] Insa-Cabrera, J., Dowe, D. L., & Hernandez-Orallo, J. (2011). Evaluating a reinforcement learning algorithm with a general intelligence test. In J. M. J. A. Lozano, & J. A. Gamez (Eds.), Current topics in artificial intelligence. CAEPIA 2011. Springer (LNAI Series 7023).
3934a54b-c469-48e9-8359-8ca73f14b308
trentmkelly/LessWrong-43k
LessWrong
An Agent is a Worldline in Tegmark V If asked to define what an agent is, my usual answer -- one of them, anyway -- is “a worldline in Tegmark V”. The Tegmark Level V Multiverse (the “V” here is a Roman numeral) is not defined by Max Tegmark (whose hierarchy goes only up to IV), but, as used in agent-foundations circles, it refers to the collection of not-necessarily-consistent mathematical universes, a.k.a. “impossible possible worlds”. It thus contains worlds in which 1+1 = 3, worlds in which triangles have four sides, and perhaps worlds with married bachelors -- in addition, of course, to all the more “ordinary” worlds in Tegmark Levels I-IV (and thus, in particular, us). This definition of “agent” is intended to evoke the concept of an observer in physics (especially relativity), which is a worldline in physical spacetime. “Observer” is a more passive word than “agent”, corresponding to the fact that in physics, worldlines are determined by equations (the “equations of motion”) that represent the laws of physics; whereas the idea in agent theory is that the corresponding equations -- those that determine worldlines -- represent the preferences, or “caring structure”, of some entity other than (at any rate, not necessarily identical to) the physical universe. (I am deliberately avoiding the terms “values” and “goals”, for obscure theoretical reasons that I won’t explain here.) A worldline in Tegmark IV (to say nothing of V) would, almost by its very nature, suggest a higher degree of “agency” than the ordinary sort of worldline, because it would allow for the possibility that the “observer” or “agent” moves between universes with differing laws of physics. Were we ever to acquire the capacity of “hacking into” our universe and changing its physical laws, for example, this would be the sort of mathematical setting in which our activities would be appropriately modeled. The equations governing our trajectories in that case would be reasonably termed “laws of metaphysics” -- or, indeed (in the most
3c1bd919-2670-46ab-8963-532c221c7220
trentmkelly/LessWrong-43k
LessWrong
Meetup : Arizona State Lunch Group Discussion article for the meetup : Arizona State Lunch Group WHEN: 03 September 2013 12:00:00PM (-0700) WHERE: 301 E. Orange Mall Tempe, AZ 85281 We will be meeting up at Engrained in the ASU memorial union (There are many other food options available in the same place). The goal of the meeting is to get a regular weekly lunch group going at ASU and more generally in the Phoenix metro area. Feel free to text me at 602-501-9420 (I am in the process of porting my number to google voice, so phone calls don't work right now) Discussion article for the meetup : Arizona State Lunch Group
5fded94f-1256-4b35-bd0b-b7c202e9b28b
trentmkelly/LessWrong-43k
LessWrong
Alpha Summary In the first half of this essay, I recount two anecdotes. (The impatient reader can skip these.) First, Gerald Murnane tries, around Melbourne in the 1950s, to find a system that'll make him money betting on horse races. Then, Bill Benter, in Hong Kong in the 1990s, comes up with a system for picking horses that makes him nearly $1B. In the second half of this essay, I discuss alpha: * In finance, alpha is (a) excess returns earned on an investment above the market when adjusted for risk, (b) an investor's ability to beat the market, or (sometimes) (c) a strategy or resource that consistently generates excess returns. Outside finance, alpha refers simply to someone's ability to beat their competitors, the edge they have over their rivals. * Often what appears to be alpha is just noise. There are domains, like stock exchanges, where alpha is scarce, and other domains where it is abundant. Generally, the more valuable the object, the more crowded the field, and the harder it is to find alpha. * Six very tentative ways of finding alpha are (1) seeking decision-relevant yet generally inaccessible knowledge, (2) making deep analyses, (3) extrapolating stable-seeming, high-entropy trends or lines of reasoning, (4) finding systematic ways that others act suboptimally, (5) seeking out places where one has relatively greater ability than everyone else and/or (6) making bets that few others can afford to make. Form-Plan > In the long-ago 1960s, I knew a man who claimed to have been helped through a troubled period by his faith in psychoanalysis. When I myself was going through such a period, he urged me to read a certain huge book on the subject. I've forgotten the title but I recall the author's name, which was in gilt letters on the dark-green spine: Otto Fenichel. I read several chapters but I recall today only two short passages. One passage described the symptoms of a man with obsessive-compulsive disorder. He could never walk more than a few paces forward
40bfa759-c417-4cfc-a5b0-df9ec57b0810
trentmkelly/LessWrong-43k
LessWrong
AI utility-based correlation A putative new idea for AI control; index here. This presents one way of implementing the indifference-based correlations of these posts. Let u be a utility function, a map from worlds to real numbers. An expected utility maximiser considering whether to produce output Y, looks at the expected utility Σw u(w)P(w|Y). We now assume that there is another random variable X in the world, and we want the AI to be indifferent to worlds where Y≠X. We also want it to be indifferent to worlds where Z=0. Then it will assess the value of output Y as: Σw u(w)P(w|X=Y,Z=1,Y). Now, the idea of the setup was to ensure that Z=1 would erase the output Y so that it was never read. Hence P(w|Z=1,Y) Hence that equation simplifies to: Σw u(w)P(w|X=Y). Therefore the AI will chose the Y that maximises the (conditional) expected utility of u if X=Y. To get the full version of the initial post, you need to define some function f of Y and modify this to Σw u(w)P(w|X=Y) + f(Y).  
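A toy rendering of this decision rule, for a finite list of worlds: nothing below comes from the post beyond the final formula, and every input (the world list, u, the conditional probabilities and f) is a caller-supplied placeholder.

```python
def best_output(candidate_outputs, worlds, u, p_given_x_equals, f):
    """Pick the output Y maximising  sum_w u(w) * P(w | X = Y)  +  f(Y)."""
    def score(y):
        return sum(u(w) * p_given_x_equals(w, y) for w in worlds) + f(y)
    return max(candidate_outputs, key=score)
```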
ee27796b-e1d8-49c2-9757-7b090019f149
trentmkelly/LessWrong-43k
LessWrong
Fort Collins, Colorado Meetup Wednesday 7pm Dates: 7pm Wednesday June 8, 15, 22 and 29 Venue: The Bean Cycle, upstairs at the back if it's open. Look for the Less Wrong sign. As these are the first few sessions of the meetup, there will be no formal agenda. I'll bring a few games to give us something to do while we get acquainted. http://groups.google.com/group/less-wrong-fort-collins-co
26e9cadb-550a-4e38-8ace-b7d42999dfa0
trentmkelly/LessWrong-43k
LessWrong
Meta analysis of Writing Therapy Robin Hanson recently mentioned "writing therapy" as potentially having surprisingly large benefits. In the example he gives, recently unemployed engineers who write about their experience find jobs more quickly than those that did not. The meta-analysis paper he links to was pretty lame, but I found another meta-analysis, "Experimental disclosure and its moderators: A meta-analysis", on a somewhat broader topic of Experimental Disclosure that appears to be much better. My judgment is non-expert, but it looks to me like a very high quality meta-analysis. The authors use a large number of studies (146) and include a large number of potential moderators, discuss their methodology in detail, and address publication bias intelligently.  The authors find small to moderate positive effects on measures of psychological health, physiological health and general life outcomes. They also find a number of interesting moderating factors.
1618bdfe-4299-4495-af76-c38c28e50538
trentmkelly/LessWrong-43k
LessWrong
This one equation may be the root of intelligence
805a9334-a662-4e5e-9495-90cf211dda13
trentmkelly/LessWrong-43k
LessWrong
Large language models aren't trained enough Finbarr Timbers makes a point, obvious in retrospect, but which many people, including people forecasting AI timeline, seem to miss: since training cost is amortized over inference, optimal training depends on expected amount of inference. Both scaling laws from OpenAI and DeepMind assume zero (or negligible) inference, which is obviously incorrect. Any forecasting using scaling laws similarly is suspect and should be revised.
b66c5d30-50bd-4666-9846-8573af253fa9
trentmkelly/LessWrong-43k
LessWrong
Link: Vitamin D Can Likely End the COVID-19 Pandemic - Rootclaim Blog This is a link post for: * Vitamin D Can Likely End the COVID-19 Pandemic - Rootclaim Blog
14548f7b-2204-47e1-946a-eedf24f9869a
trentmkelly/LessWrong-43k
LessWrong
AI #112: Release the Everything OpenAI has upgraded its entire suite of models. By all reports, they are back in the game for more than images. GPT-4.1 and especially GPT-4.1-mini are their new API non-reasoning models. All reports are that GPT-4.1-mini especially is very good. o3 is the new top of the line ChatGPT reasoning model, with o3-pro coming in a few weeks. Reports are that it too looks very good, even without us yet taking much advantage of its tool usage. If you have access, check it out. Full coverage is coming soon. There’s also o4-mini and o4-mini-high. Oh, they also made ChatGPT memory cover all your conversations, if you opt in, and gave us a version of Claude Code called Codex. And an update to their preparedness framework that I haven’t had time to examine yet. Anthropic gave us (read-only for now) Google integration (as in GMail and Calendar to complement Drive), and also a mode known as Research, which would normally be exciting but this week we’re all a little busy. Google and everyone else also gave us a bunch of new stuff. The acceleration continues. TABLE OF CONTENTS Not covered yet, but do go check them out: OpenAI’s o3 and o4-mini. Previously this week: GPT-4.1 is a Mini Upgrade, Open AI #13: Altman at TED and OpenAI Cutting Corners on Safety Testing 1. Language Models Offer Mundane Utility. But doctor, you ARE ChatGPT! 2. Language Models Don’t Offer Mundane Utility. Cuomo should have used o3. 3. Huh, Upgrades. ChatGPT now has full memory across conversations. 4. On Your Marks. A new benchmark for browsing agents. 5. Research Quickly, There’s No Time. Just research. It’s cleaner. Check your email. 6. Choose Your Fighter. Shoutouts to Google-AI-in-search and Mistral-Small-24B? 7. Deepfaketown and Botpocalypse Soon. Building your own AI influencer. 8. The Art of the Jailbreak. ChatGPT can now write its own jailbreaks. 9. Get Involved. Study with UT Austin, or work for Ted Cruz. We all make choices. 10. Introducing. Google offers agent development kit,
b872b3c0-12c4-40e2-8e0c-e442fdaf0cfd
trentmkelly/LessWrong-43k
LessWrong
List of Donors, Fall 2011 This discussion-level article is a handy place for people to share info about their recent donations, especially donations to unusually efficient or effective charities. Feel free to post your one-time donations, your recurring donations, and/or any interesting changes in your donation habits. Gratitude and appreciation for other people's donations is also very welcome.
2d46607a-79a7-445a-b5d9-a850654a636b
trentmkelly/LessWrong-43k
LessWrong
The two missing core reasons why aligning at-least-partially superhuman AGI is hard From Arbital's Mild Optimization page: > Mild optimization relates directly to one of the three core reasons why aligning at-least-partially superhuman AGI is hard - making very powerful optimization pressures flow through the system puts a lot of stress on its potential weaknesses and flaws. I'm interested in this taxonomy of core reasons. Unfortunately this page doesn't specify the other two. What are they? Also, this page is part of the AI alignment domain -- was it written by Eliezer? (surprisingly, "10 changes by 3 authors" is a link to edit and does not show author information or edit history)
3164008e-28ca-4e9a-a371-586b72f620d9
trentmkelly/LessWrong-43k
LessWrong
Subtle Forms of Confirmation Bias There are at least two types of confirmation bias. The first is selective attention: a tendency to pay attention to, or recall, that which confirms the hypothesis you are thinking about rather than that which speaks against it. The second is selective experimentation: a tendency to do experiments which will confirm, rather than falsify, the hypothesis. The standard advice for both cases seems to be "explicitly look for things which would falsify the hypothesis". I think this advice is helpful, but it is subtly wrong, especially for the selective-experimentation type of confirmation bias. Selective attention is relatively straightforward, but selective experimentation is much more complex than it initially sounds. Looking for Falsification What the standard (Popperian) advice tells you to do is try as hard as you can to falsify your hypothesis. You should think up experiments where your beloved hypothesis really could fail. What this advice definitely does do is guard against the mistake of making experiments which could not falsify your hypothesis. Such a test is either violating conservation of expected evidence (by claiming to provide evidence one way without having any possibility of providing evidence the other way), or providing only very weak evidence for your claim (by looking much the same whether your claim is true or false). Looking for tests which can falsify your result steers you towards tests which would provide strong evidence, and helps you avoid violating the law of expected evidence. However, there are more subtle ways in which confirmation bias can act. Predicting Results in Advance You can propose a test which would indeed fit your hypothesis if it came out one way, and which would disconfirm your hypothesis if it came out the other way -- but where you can predict the outcome in advance. It's easy to not realize you are doing this. You'll appear to provide significant evidence for your hypothesis, but actually you've cherry-picked your
b479255d-d885-448b-8ae7-2e5565f83a0c
trentmkelly/LessWrong-43k
LessWrong
When is it important that open-weight models aren't released? My thoughts on the benefits and dangers of open-weight models in response to developments in CBRN capabilities. Recently, Anthropic released Opus 4 and said they couldn't rule out the model triggering ASL-3 safeguards due to the model's CBRN capabilities. That is, they say they couldn't rule out that this model had "the ability to significantly help individuals or groups with basic technical backgrounds (e.g., undergraduate STEM degrees) create/obtain and deploy CBRN weapons" (quoting from Anthropic's RSP). More specifically, Anthropic is worried about the model's capabilities in assisting with bioweapons. (See footnote 3 here.) Given this and results on Virology Capabilities Test, it seems pretty likely that various other AI companies have or will soon have models which can significantly help amateurs make bioweapons.[1] One relevant question is whether it would be bad if there were open-weight models above this capability threshold. Further, should people advocate for not releasing open-weight models above this capability level? In this post, I'll discuss how I think about releasing open-weight models that can significantly help amateurs make bioweapons. In short, my view is that open-weight models at this level of capability would cause a large number of fatalities in expectation (perhaps 100,000 in expectation per year with a lot of uncertainty), but open-weight models reduce larger risks that are present later on (most notably, loss of control risks) by enough that the benefits are bigger than the costs. Given there is a large cost paid in fatalities and the benefits are uncertain, I wouldn't actively advocate[2] for releasing open-weight models at this level of capability. However, I also think people focused on mitigating larger (e.g. existential) risks shouldn't advocate against releasing open-weight models at this level of capability.[3] There are higher levels of capability where releasing open-weight models would be net-harmful given my views (at least without the situation substantially changing). I do think it would be good to advocate and argue against compani
0f7b8d5d-c59e-4580-aa4b-fda3e91aff7a
trentmkelly/LessWrong-43k
LessWrong
Clarifying Consequentialists in the Solomonoff Prior I have spent a long time being confused about Paul’s post on consequentialists in the Solomonoff prior. I now think I understand the problem clearly enough to engage with it properly. I think the reason I was confused is to a large degree a problem of framing. It seemed to me, in the course of discussions I had to deconfuse myself, that similar confusions are shared by other people. In this post, I will attempt to explain the framing that helped clarify the problem for me. i. A brief sketch of the Solomonoff prior The Solomonoff, or Universal, prior is a probability distribution over strings of a certain alphabet (usually over all strings of 1s and 0s). It is defined by taking the set of all Turing machines (TMs) which output strings, assigning to each a weight proportional to 2^(-L) (where L is its description length), and then assigning to each string a probability equal to the sum of the weights of the TMs that compute it. (The actual formalism is in fact a bit more technically involved. I think this picture is detailed enough, in the sense that my explanation will map onto the real formalism about as well.) The above defines the Solomonoff prior. To perform Solomonoff induction, one can also define conditional distributions by considering only those TMs that generate strings beginning with a certain prefix. In this post, we’re not interested in that process, but only in the prior. ii. The Malign Prior Argument In the post, Paul claims that the prior is dominated by consequentialists. I don’t think it is quite dominated by them, but I think the effect in question is plausibly real. I’ll call the key claim involved the Malign Prior Argument. On my preferred framing, it goes something like this: Premiss: For some strings, it is easier to specify a Turing Machine that simulates
38dc08de-631d-43a7-b3d6-40071f26371e
trentmkelly/LessWrong-43k
LessWrong
Rational spirituality: transcending your metaphysics Hi all! First time poster. I'll be honest: I'm feeling a bit nervous posting this here. I want to share some thoughts about consciousness and the nature of reality. The basic idea is that reality is, in fact, a dream, and this can be discovered for oneself beyond any reasonable doubt. Roughly, the method is to grok the fact that your metaphysical assumptions are not rationally grounded in the way you normally assume. The main obstacle to its discovery is a subtle cognitive process that continually reinforces our metaphysics. I demonstrate how these beliefs (such as those regarding the nature of time) are not based in reason, contrary to intuition. Yet even if one understands this intellectually, s/he will find it very hard to confront seriously. That misguided certainty is precisely what keeps the dream apparently stable and mundane. If you're wondering why you don't see evidence of this around you, it's because this is your dream. How convenient, right? There is a brief foray into quantum mechanics and the many-worlds interpretation. It's hard to avoid the criticism of "quantum woo" in a context like this, but I stay true to the math and experimental evidence, and hope you will draw your own conclusions. The site may appear to be pushing an idealist metaphysics, which seems to go against the claim of transcending metaphysics. But the idea is to have the experience itself, at which point metaphysics becomes a tool for communication, rather than something we grip very tightly (usually without realizing it, as in the everyday life experience that the Buddhists call samsara). Cheers, M
cdd3cdd6-c518-4d06-83da-bca235acdf9f
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Alignment Newsletter #37 Find all Alignment Newsletter resources [here](http://rohinshah.com/alignment-newsletter/). In particular, you can [sign up](http://eepurl.com/dqMSZj), or look through this [spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing) of all summaries that have ever been in the newsletter. Highlights ---------- **[Three AI Safety Related Ideas](https://www.alignmentforum.org/posts/vbtvgNXkufFRSrx4j/three-ai-safety-related-ideas) and [Two Neglected Problems in Human-AI Safety](https://www.alignmentforum.org/posts/HTgakSs6JpnogD6c2/two-neglected-problems-in-human-ai-safety)** *(Wei Dai)*: If any particular human got a lot of power, or was able to think a lot faster, then they might do something that we would consider bad. Perhaps power corrupts them, or perhaps they get so excited about the potential technologies they can develop that they do so without thinking seriously about the consequences. We now have both an opportunity and an obligation to design AI systems that operate more cautiously, that aren't prone to the same biases of reasoning and heuristics that we are, such that the future actually goes *better* than it would if we magically made humans more intelligent. If it's too hard to make AI systems in this way and we need to have them learn goals from humans, we could at least have them learn from *idealized* humans rather than real ones. Human values don't extrapolate well -- just look at the myriad answers that people give to the various hypotheticals like the [trolley problem](https://en.wikipedia.org/wiki/Trolley_problem). So, it's better to learn from humans that are kept in safe, familiar environment with all their basic needs taken care of. These are our idealized humans. In practice the AI system would learn a lot from the preferences of real humans, since that should be a very good indicator of the preferences of idealized humans. But if the idealized humans begin to have different preferences from real humans, then the AI system should ignore the "corrupted" values of the real humans. More generally, it seems important for our AI systems to help us figure out what we care about before we make drastic and irreversible changes to our environment, especially changes that prevent us from figuring out what we care about. For example, if we create a hedonic paradise where everyone is on side-effect-free recreational drugs all the time, it seems unlikely that we check whether this is actually what we wanted. This suggests that we need to work on AI systems that differentially advance our philosophical capabilities relative to other capabilities, such as technological ones. One particular way that "aligned" AI systems could make things worse is if they accidentally "corrupt" our values, as in the hedonic paradise example before. A nearer-term example would be making more addictive video games or social media. They might also make very persuasive but wrong moral arguments. This could also happen in a multipolar setting, where different groups have their own AIs that try to manipulate other humans into having values similar to theirs. The attack is easy, since you have a clear objective (whether or not the humans start behaving according to your values), but it seems hard to defend against, because it is hard to determine the difference between manipulation and useful information. 
**Rohin's opinion:** (A more detailed discussion is available on [these](https://www.alignmentforum.org/posts/vbtvgNXkufFRSrx4j/three-ai-safety-related-ideas#WP9Tc2eTAwsPNw3cz) [threads](https://www.alignmentforum.org/posts/HTgakSs6JpnogD6c2/two-neglected-problems-in-human-ai-safety#FoWAZFTnG8D2rreSs).) I'm glad these posts were written, they outline real problems that I think are neglected in the AI safety community and outline some angles of attack. The rest of this is going to be a bunch of disagreements I have, but these should be taken as disagreements on how to *solve* these problems, not a disagreement that the problems exist. It seems quite difficult to me to build AI systems that are safe, *without* having them rely on humans making philosophical progress themselves. We've been trying to figure this out for thousands of years. I'm pessimistic about our chances at creating AI systems that can outperform this huge intellectual effort correctly on the first try without feedback from humans. Learning from idealized humans might address this to some extent, but in many circumstances I think I would trust the real humans with [skin in the game](http://slatestarcodex.com/2016/05/14/skin-in-the-game/) more than the idealized humans who must reason about those circumstances from afar (in their safe, familiar environment). I do think we want to have a general approach where we try to figure out how AIs *and* humans should reason, such that the resulting system behaves well. On the human side, this might mean that the human needs to be more cautious for longer timescales, or to have more epistemic and moral humility. Idealized humans can be thought of an instance of this approach where rather than change the policy of real humans, we indirectly change their policy in a hypothetical by putting them in safer environments. For the problem of intentionally corrupting values, this seems to me an instance of the general class of "Competing aligned superintelligent AI systems could do bad things", in the same way that we have the risk of nuclear war today. I'm not sure why we're focusing on value corruption in particular. In any case, my current preferred solution is not to get into this situation in the first place (though admittedly that seems very hard to do, and I'd love to see more thought put into this). Overall, I'm hoping that we can solve "human safety problems" by training the humans supervising the AI to not have those problems, because it sure does make the technical problem of aligning AI seem a lot easier. I don't have a great answer to the problem of competing aligned superintelligent AI systems. **[Legible Normativity for AI Alignment: The Value of Silly Rules](https://arxiv.org/abs/1811.01267)** *(Dylan Hadfield-Menell et al)*: One issue we might have with value learning is that our AI system might look at "silly rules" and infer that we care about them deeply. For example, we often enforce dress codes through social punishments. Given that dress codes do not have much functional purpose and yet we enforce them, should an AI system infer that we care about dress codes as much as we care about (say) property rights? This paper claims that these "silly rules" should be interpreted as a coordination mechanism that allows group members to learn whether or not the group rules will be enforced by neutral third parties. 
For example, if I violate the dress code, no one is significantly harmed but I would be punished anyway -- and this can give everyone confidence that if I were to break an important rule, such as stealing someone's wallet, *bystanders* would punish me by reporting me to the police, even though they are not affected by my actions and it is a cost to them to report me. They formalize this using a model with a pool of agents that can choose to be part of a group. Agents in the group play "important" games and "silly" games. In any game, there is a scofflaw, a victim, and a bystander. In an important game, if the bystander would punish any rule violations, then the scofflaw follows the rule and the victim gets +1 utility, but if the bystander would not punish the violation, the scofflaw breaks the rule and the victim gets -1 utility. Note that in order to signal that they would punish, bystanders must pay a cost of c. A silly game works the same way, except the victim always gets 0 utility. Given a set of important rules, the main quantity of interest is how many silly rules to add. The authors quantify this by considering the *proportion* of all games that are silly games, which they call the density. Since we are imagining *adding* silly rules, all outcomes are measured with respect to the number of *important* games. We can think of this as a proxy for time, and indeed the authors call the expected number of games till an important game a *timestep*. Now, for important games the expected utility to the victim is positive if the probability that the bystander is a punisher is greater than 0.5. So, each of the agents cares about estimating this probability in order to decide whether or not to stay in the group. Now, if we only had important games, we would have a single game per timestep, and we would only learn whether one particular agent is a punisher. As we add more silly games, we get more games per timestep, and so we can learn much more quickly the proportion of punishers, which leads to more stable groups. However, the silly rules are not free. The authors prove that if they *are* free, then we keep adding silly rules and the density would approach 1. (More precisely, they show that as density goes to 1, the value of being told the true probability of punishment goes to 0, meaning that the agent already knows everything.) They then show experimental results showing a few things. When the agents are relatively certain of the probability of an agent being a punisher, then silly rules are not very useful and the group is more likely to collapse (since the cost of enforcing the silly rules starts to be important). Second, as long as c is low (so it is easy to signal that you will enforce rules), then groups with more silly rules will be more resilient to shocks in individual's beliefs about the proportion of punishers, since they will very quickly converge to the right belief. If there aren't any silly rules it can take more time and your estimate might be incorrectly low enough that you decide to leave the group even though group membership is still net positive. Finally, if the proportion of punishers drops below 0.5, making group membership net negative, agents in groups with high density will learn this faster, and their groups will disband much sooner. **Rohin's opinion:** I really like this paper, it's a great concrete example of how systems of agents can have very different behavior than any one individual agent *even if* each of the agents have similar goals. 
The idea makes intuitive sense and I think the model captures its salient aspects. There are definitely many quibbles you could make with the model (though perhaps it is the standard model, I don't know this field), but I don't think they're important. My perspective is that the model is a particularly clear and precise way of communicating the effect that the authors are describing, as opposed to something that is supposed to track reality closely. Technical AI alignment ====================== ### Problems **[Three AI Safety Related Ideas](https://www.alignmentforum.org/posts/vbtvgNXkufFRSrx4j/three-ai-safety-related-ideas) and [Two Neglected Problems in Human-AI Safety](https://www.alignmentforum.org/posts/HTgakSs6JpnogD6c2/two-neglected-problems-in-human-ai-safety)** *(Wei Dai)*: Summarized in the highlights! ### Technical agendas and prioritization [Multi-agent minds and AI alignment](https://www.lesswrong.com/posts/3fkBWpE4f9nYbdf7E/multi-agent-minds-and-ai-alignment) *(Jan Kulveit)*: This post argues against the model of humans as optimizing some particular utility function, instead favoring a model based on predictive processing. This leads to several issues with the way standard value learning approaches like inverse reinforcement learning work. There are a few suggested areas for future research. First, we could understand how hierarchical models of the world work (presumably for better value learning). Second, we could try to invert game theory to learn objectives in multiagent settings. Third, we could learn preferences in multiagent settings, which might allow us to better infer norms that humans follow. Fourth, we could see what happens if we take a system of agents, infer a utility function, and then optimize it -- perhaps one of the agents' utility functions dominates? Finally, we can see what happens when we take a system of agents and give it more computation, to see how different parts scale. On the non-technical side, we can try to figure out how to get humans to be more self-aligned (i.e. there aren't "different parts pulling in different directions"). **Rohin's opinion:** I agree with the general point that figuring out a human utility function and then optimizing it is unlikely to work, but for different reasons (see the first chapter of the [Value Learning sequence](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc)). I also agree that humans are complex and you can’t get away with modeling them as Boltzmann rational and optimizing some fixed utility function. I wouldn’t try to make the model more accurate (eg. a model of a bunch of interacting subagents, each with their own utility function), I would try to make the model less precise (eg. a single giant neural net), because that reduces the chance of model misspecification. However, given the [impossibility result](https://arxiv.org/abs/1712.05812) saying that you must make assumptions to make this work, we probably have to give up on having some nice formally specified meaning of “values”. I think this is probably fine -- for example, iterated amplification doesn’t have any explicit formal value function. ### Reward learning theory [Figuring out what Alice wants: non-human Alice](https://www.alignmentforum.org/posts/YfQGZderiaGv3kBJ8/figuring-out-what-alice-wants-non-human-alice) *(Stuart Armstrong)*: We know that if we have a potentially irrational agent, then inferring their preferences is [impossible](https://arxiv.org/abs/1712.05812) without further assumptions. 
### Reward learning theory

[Figuring out what Alice wants: non-human Alice](https://www.alignmentforum.org/posts/YfQGZderiaGv3kBJ8/figuring-out-what-alice-wants-non-human-alice) *(Stuart Armstrong)*: We know that if we have a potentially irrational agent, then inferring their preferences is [impossible](https://arxiv.org/abs/1712.05812) without further assumptions. However, in practice we can infer the preferences of humans quite well. This is because we have very specific and narrow models of how humans work: we tend to agree on our judgments of whether someone is angry, and on what anger implies about their preferences. This is exactly what the theorem is meant to prohibit, which means that humans are making some strong assumptions about other humans. As a result, we can hope to solve the value learning problem by figuring out what assumptions humans are already making and using those assumptions.

**Rohin's opinion:** The fact that humans are quite good at inferring preferences should give us optimism about value learning. In the [framework](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/h9DesGT3WT9u2k7Hr) of rationality with a mistake model, we are trying to infer the mistake model from the way that humans infer preferences about other humans. This sidesteps the impossibility result by focusing on the *structure* of the algorithm that generates the policy. However, it still seems like we have to make some assumption about how the structure of the algorithm leads to a mistake model, or a model for what values are. Though perhaps we can get an answer that is principled enough or intuitive enough that we believe it.

### Handling groups of agents

**[Legible Normativity for AI Alignment: The Value of Silly Rules](https://arxiv.org/abs/1811.01267)** *(Dylan Hadfield-Menell et al)*: Summarized in the highlights!

### Miscellaneous (Alignment)

[Assuming we've solved X, could we do Y...](https://www.alignmentforum.org/posts/95i5B78uhqyB3d6Xc/assuming-we-ve-solved-x-could-we-do-y) *(Stuart Armstrong)*: We often want to make assumptions that sound intuitive but that we can't easily formalize, e.g. "assume we've solved the problem of determining human values". However, such assumptions can often be interpreted as being very weak or very strong, and depending on the interpretation we could be assuming away the entire problem, or the assumption doesn't buy us anything. So we should be more precise in our assumptions, or focus only on some precise *properties* of an assumption.

**Rohin's opinion:** I think this argument applies well to the case where we are trying to *communicate*, but not so much to the case where I individually am thinking about a problem. (I'm making this claim about me specifically; I don't know if it generalizes to other people.) Communication is hard, and if the speaker uses some intuitive assumption, chances are the listener will interpret it differently from what the speaker intended, so being very precise seems quite helpful. However, when I'm thinking through a problem myself and I make an assumption, I usually have a fairly detailed intuitive model of what I mean, such that if you asked me whether I'm assuming that problem X is solved by the assumption, I could answer, even though I don't have a precise formulation of the assumption. Making the assumption more precise would be quite a lot of work, and probably would not improve my thinking on the topic that much, so I tend not to do it until I think there's some insight and I want to make the argument more rigorous. It seems to me that this is how most research progress happens: by individual researchers having intuitions that they then make rigorous and precise.
Near-term concerns
==================

### Fairness and bias

[Providing Gender-Specific Translations in Google Translate](https://ai.googleblog.com/2018/12/providing-gender-specific-translations.html) *(Melvin Johnson)*

### Machine ethics

[Building Ethics into Artificial Intelligence](http://arxiv.org/abs/1812.02953) *(Han Yu et al)*

[Building Ethically Bounded AI](http://arxiv.org/abs/1812.03980) *(Francesca Rossi et al)*

Malicious use of AI
===================

[FLI Signs Safe Face Pledge](https://futureoflife.org/2018/12/11/fli-signs-safe-face-pledge/) *(Ariel Conn)*

Other progress in AI
====================

### Reinforcement learning

[Off-Policy Deep Reinforcement Learning without Exploration](http://arxiv.org/abs/1812.02900) *(Scott Fujimoto et al)* (summarized by Richard): This paper discusses off-policy batch reinforcement learning, in which an agent is trying to learn a policy from data which is not based on its own policy, and without the opportunity to collect more data during training. The authors demonstrate that standard RL algorithms do badly in this setting because they give unseen state-action pairs unrealistically high values, and lack the opportunity to update them. They propose to address this problem by only selecting actions from previously seen state-action pairs; they prove various optimality results for this algorithm in the MDP setting. To adapt this approach to the continuous control case, the authors train a generative model to produce likely actions (conditional on the state and the data batch) and then only select from the top n actions. Their batch-constrained Q-learning algorithm (BCQ) consists of that generative model, a perturbation model to slightly alter the top actions, and a value network and critic to perform the selection. When n = 0, BCQ resembles behavioural cloning, and when n -> ∞, it resembles Q-learning. BCQ with n = 10 handily outperformed DQN and DDPG on some MuJoCo experiments using batch data.

**Richard's opinion:** This is an interesting paper, with a good balance of intuitive motivations, theoretical proofs, and empirical results. While it's not directly safety-related, the broad direction of combining imitation learning and reinforcement learning seems like it might have promise. Relatedly, I wish the authors had discussed in more depth what assumptions can or should be made about the source of batch data. For example, BCQ would presumably perform worse than DQN when data is collected from an expert trying to minimise reward, and (from the paper's experiments) performs worse than behavioural cloning when data is collected from an expert trying to maximise reward. Most human data an advanced AI might learn from is presumably somewhere in between those two extremes, and so understanding how well algorithms like BCQ would work on it may be valuable.
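A minimal sketch of BCQ-style action selection at evaluation time (not the authors' implementation; the stub generator, perturbation, and Q functions below stand in for the learned networks, and the candidate count n and perturbation range PHI are illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
ACTION_DIM, N_CANDIDATES, PHI = 2, 10, 0.05  # PHI = maximum perturbation magnitude

def generator_sample(state, n):
    # Stand-in for the conditional generative model: it should propose actions
    # similar to those seen in the batch for this state.
    return np.tanh(state.sum() + rng.normal(0.0, 0.3, size=(n, ACTION_DIM)))

def perturbation(state, actions):
    # Stand-in for the learned perturbation network: a small state-dependent
    # adjustment, clipped to [-PHI, PHI].
    delta = np.clip(0.01 * state.sum(), -PHI, PHI)
    return np.full_like(actions, delta)

def q_value(state, actions):
    # Stand-in for the learned Q-network.
    return -np.linalg.norm(actions - 0.5, axis=-1) + 0.1 * state.sum()

def select_action(state):
    candidates = generator_sample(state, N_CANDIDATES)         # only "in-distribution" actions
    candidates = candidates + perturbation(state, candidates)  # small learned adjustment
    values = q_value(state, candidates)
    return candidates[np.argmax(values)]                       # greedy among the candidates

print(select_action(np.array([0.3, -0.1, 0.7])))
```

The key design choice is that the argmax only ranges over actions the generative model considers likely under the batch, which is what prevents the unrealistically high values on unseen state-action pairs described above.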
[Soft Actor Critic—Deep Reinforcement Learning with Real-World Robots](http://bair.berkeley.edu/blog/2018/12/14/sac/) *(Tuomas Haarnoja et al)*

### Deep learning

[How AI Training Scales](https://blog.openai.com/science-of-ai/) *(Sam McCandlish et al)*: OpenAI has done an empirical investigation into the performance of AI systems, and found that the maximum useful batch size for a particular task is strongly influenced by the noise in the gradient. (Here, the noise in the gradient comes from the fact that we are using *stochastic* gradient descent -- any difference in the gradients across batches counts as "noise".) They also found some preliminary results showing that more powerful ML techniques tend to have more gradient noise, and that even a single model tends to have increased gradient noise over time as it gets better at the task.

**Rohin's opinion:** While OpenAI doesn't speculate on why this relationship exists, it seems to me that as you increase the batch size, you are improving the gradient by averaging away noise over a larger batch. This predicts the results well: as the task gets harder and the noise in the gradients gets larger, there's more noise to get rid of by averaging over data points, and so there's more opportunity to benefit from *even larger* batch sizes.
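The quantity behind this relationship is what the blog post calls the gradient noise scale, roughly tr(Σ)/|G|² where G is the true gradient and Σ is the per-example gradient covariance. Here is a minimal sketch of estimating it on a toy problem (my own illustration, not OpenAI's code; the linear-regression task and the crude plug-in estimator are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: linear regression, so per-example gradients are easy to write down.
n, d = 4096, 20
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = X @ true_w + rng.normal(scale=2.0, size=n)
w = np.zeros(d)  # current parameters (untrained, so gradients are large)

# Per-example gradients of squared error: g_i = 2 * (x_i . w - y_i) * x_i
per_example_grads = 2.0 * (X @ w - y)[:, None] * X

mean_grad = per_example_grads.mean(axis=0)      # estimate of the true gradient G
centered = per_example_grads - mean_grad
trace_cov = (centered ** 2).sum(axis=1).mean()  # estimate of tr(Sigma)

noise_scale = trace_cov / (mean_grad @ mean_grad)
print(f"simple noise scale ~ {noise_scale:.1f}")
# Batch sizes much smaller than this are dominated by noise (doubling the batch roughly
# halves the number of optimization steps needed); batch sizes much larger than this see
# diminishing returns.
```

On this toy problem the number itself is meaningless, but the same ratio computed for a real model is the quantity the post reports growing as models get better at a task.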
59f9c722-227c-480d-9fa6-a0896c2a278a
trentmkelly/LessWrong-43k
LessWrong
Putting up Bumpers

tl;dr: Even if we can't solve alignment, we can solve the problem of catching and fixing misalignment.

If a child is bowling for the first time, and they just aim at the pins and throw, they’re almost certain to miss. Their ball will fall into one of the gutters. But if there were beginners’ bumpers in place blocking much of the length of those gutters, their throw would be almost certain to hit at least a few pins. This essay describes an alignment strategy for early AGI systems I call ‘putting up bumpers’, in which we treat it as a top priority to implement and test safeguards that allow us to course-correct if we turn out to have built or deployed a misaligned model, in the same way that bowling bumpers allow a poorly aimed ball to reach its target.

To do this, we'd aim to build up many largely-independent lines of defense that allow us to catch and respond to early signs of misalignment. This would involve significantly improving and scaling up our use of tools like mechanistic interpretability audits, behavioral red-teaming, and early post-deployment monitoring. We believe that, even without further breakthroughs, with this work we can almost entirely mitigate the risk that we unwittingly put misaligned circa-human-expert-level agents in a position where they can cause severe harm.

On this view, if we’re dealing with an early AGI system that has human-expert-level capabilities in at least some key domains, our approach to alignment might look something like this:

1. Pretraining: We start with a pretrained base model.
2. Finetuning: We attempt to fine-tune that model into a helpful, harmless, and honest agent using some combination of human preference data, Constitutional AI-style model-generated preference data, outcome rewards, and present-day forms of scalable oversight.
3. Audits as our Primary Bumpers: At several points during this process, we perform alignment audits on the system under construction, using many methods in parallel. We’d draw on mechanist
6af115a9-b7e0-4603-83e2-61e49dd3d931
trentmkelly/LessWrong-43k
LessWrong
Recognizing and Dealing with Negative Automatic Thoughts

Inspired by this post, which is point 3 on the list below, but I can't comment there, so a separate post it is. Copied from here. Basically, this is cognitive-behavioral therapy (CBT) in a nutshell.

Recognizing Negative Automatic Thoughts

1. Overgeneralization: making a general conclusion based on one piece of evidence.  Daniel gets a high glucose reading and thinks “I will never get my numbers under control.”
2. Filtering: Focusing on negative events, and ignoring positive events, or information that contradicts the negative outlook.  Jo has done really well at following her diet plan all week, but at a work party, she eats more dessert than she should. Later, she reflects on her dessert choices and thinks, “I always fail at sticking to my plan, this is pretty hopeless.”
3. All or nothing thinking: Thinking that there are only two extremes, with no middle ground.  Daniel is in a rush in the morning and doesn’t eat breakfast, so his numbers are off throughout the day. He thinks “Well, I’ve already screwed up, no point in continuing to try at this point.”
4. Personalizing: Taking responsibility for something that is not your fault.  Jo is at her doctor’s appointment, but because the office is behind schedule, she does not get answers to all of her questions. When she leaves, she thinks, “Wow, I can’t even ask questions correctly. I don’t do any of this right.”
5. Catastrophizing: Expecting that the worst will happen.  Daniel is preparing for vacation with his family. He decides not to make a plan to manage his diabetes while on the trip, because, “It’s not like I’ll stick to it at all anyway, because there won’t be any healthy food on the trip.”
6. Emotional reasoning: Mistaking how you feel about a situation with how the situation really is.  Jo weighs herself after a couple weeks of working out daily, only to find that her weight has not changed. She feels like nothing she’s doing is working and thinks, “I shouldn’t keep
4a0454dd-2f39-470e-a64c-75ebc237dd6b
trentmkelly/LessWrong-43k
LessWrong
One could be forgiven for getting the feeling... That some of the posters and commenters on this blog basically are of the de-facto opinion that one way of bringing about Friendly AI is not for the AI to become more skilled at understanding the way humans reason, but rather for humans to become more like computers ;)
b607821a-ce2f-48b1-a1a3-0dcd15e05298
trentmkelly/LessWrong-43k
LessWrong
Meetup : Munich Meetup

Discussion article for the meetup : Munich Meetup

WHEN: 04 May 2013 03:00:00PM (+0200)

WHERE: Café at Gasteig, Rosenheimer Straße 5, 81667 München

The next Munich meetup will take place on May 4th. We will discuss the epistemology sequence and do a fun probability theory quiz. You are highly welcome to come and say hi, no matter how long you’ve been reading LessWrong. If you plan to attend, please (optionally) post a comment saying what topics you’d like to discuss. Also, if you are on Facebook, think about joining the group (see comments to the previous meetup).

Discussion article for the meetup : Munich Meetup
ad700599-976f-4505-a24b-8447662a64ef
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
A putative new idea for AI control; index here. This is a reworking of the previous models of AIs in virtual worlds, done after discussing the idea with Jessica.

The basic idea is that there are a lot of concepts about AI security - such as sealed boxes, Cartesian boundaries, and so on - that are only partially true. They are simplified models, abstractions, and, even if they are true in our model of physics (an absurdly optimistic claim), there's always the possibility (certainty) that our models are wrong or incomplete. But there is one place where models are exactly true, and that is within the model itself. And a computer program, or a virtual machine, is a model - a model we try to make happen exactly in the real world.

Is my world model true?

Consider an AI running inside a virtual machine. The AI has an internal model of the entire virtual machine, including itself, and knows the rules of the virtual machine/model. In fact the AI entertains two hypotheses:

H_1: the model is entirely correct about the (virtual) world.

H_2: sometimes, random bits are flipped in the (virtual) world.

Then give the AI some utility function u, between 0 and 1, defined over the virtual world. The AI also has a shutdown button B that it predicts will shut down the world; pressing B sends a signal to us and stops the execution of the virtual world, AI included. Let r be some discount factor close to 1, and T_B the time step at which B was pressed (if it was). Then the AI tries to maximise the utility

$u^* = r^{T_B}\big((u \mid H_1) + (2 \mid H_2)\big)$

Now the AI is motivated to maximise u as long as it thinks that the world model it has is correct. But as soon as it suspects strongly enough that the model of the virtual world is incorrect - i.e. H_2 - it will press B (the exact definition of u* could probably be improved; there might be unfortunate trade-offs at the moment).

What does H_2 translate to from our perspective? Well, the only thing that can cause "random bit flips" in the virtual world is the program not running correctly: our real world somehow interfering with the program. Therefore this AI is motivated to turn itself off as soon as it detects real-world influence in its model. This includes its self-model: if it finds itself developing unusual abilities not explainable within the world model, it will increase the probability of H_2.
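A toy numerical illustration of this incentive (my own sketch, not from the post; the assumed shape of achievable utility over time and the particular value of r are illustrative only):

```python
import numpy as np

r = 0.99          # discount factor, close to 1

def u_achieved(t):
    # Assumed: under H_1, the utility the AI has locked in grows toward 1 over time.
    return 1.0 - np.exp(-t / 50.0)

def expected_u_star(press_time, p_h2):
    # u* = r^{T_B} * ((u | H_1) + (2 | H_2)): u if the model is exact, 2 if bits get flipped.
    return r ** press_time * ((1 - p_h2) * u_achieved(press_time) + p_h2 * 2.0)

for p_h2 in [0.0, 0.05, 0.5]:
    times = np.arange(0, 500)
    best_t = times[np.argmax([expected_u_star(t, p_h2) for t in times])]
    print(f"P(H_2)={p_h2:.2f}  ->  best time to press B: {best_t}")
```

With a large enough credence in H_2, the 2-utility branch dominates and the discount factor makes an immediate press optimal, which is exactly the shutdown behaviour the scheme is after; with low credence, the AI keeps optimising u for a while first.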
What we could use the AI for

It's interesting to see what we could use the AI for. If we have a detailed model of physics or of biology, we could ask it to solve many questions for us. Typically we wouldn't be implementing the answer, but rather looking at how the AI would solve it and reusing that solution ourselves. But we don't need to have detailed world-models to get useful work. For instance, we could give the AI libraries of potential drugs, the details of a site on a protein we would want the drug to bind to, and let it figure out a list of drugs likely to bind in that way. Of course, the ones that would bind would be ones that would bind in the model, and not necessarily in reality. Nevertheless, if we looked at the top 1000 of the AI's suggestions, we'd probably find one that did bind in reality - or we'd find indications that our molecule interaction models are woefully bad. In general, this sort of "testing models to destruction" is what we could use the AI for.

We have a model - of molecules, of planets, of social structures - and an objective, and we want the AI to achieve that objective within the model. Notice that we can do whatever we want with the model - it's perfectly plausible to have two levels of description and "bridging laws" between them. For instance, we could have models of biology (fundamental unit: the cell?) and epidemiology (fundamental unit: the agent/human?) as long as we have some law connecting them ("when the agent gets infected, it causes these changes in the cells; when the cells collapse, the agent state moves to 'dead'...").

Looking at the AI's solutions

It may propose incrementally better solutions, or brilliant but reasonable solutions to our objective. But every so often, it will propose some perverse instantiation that will break the model ("set interest rates to -1%..."). And then we'll have to figure out whether this is a brilliant insight, or a flaw in the model. If it is a flaw, we can patch the model (hopefully learning something along the way) and re-run the AI. If it's brilliant, we can figure out whether it's the kind of brilliance that would actually work ("well, negative interest rates can be achieved in effect if we do X...").

This involves us looking at an AI's perverse instantiation, which is something that's generally unwise to do. However, this perverse instantiation is not aimed at exploiting us - the AI doesn't care about our world at all, and wouldn't waste time or resources exploiting us. There still is a danger if there are human-like minds inside the virtual world and the AI is aiming to manipulate them. This is a reason not to put human-like minds inside the virtual world! (Though if for some reason we wanted something like that, we should make use of things like "Cartesian boundaries" and so on, which would actually work within the model.)

In general, the richer the virtual world and the closer it is to the real world, the greater the danger. Suppose that, for instance, the AI is given the detailed biological and epidemiological information mentioned above, and we want the AI to "cure the flu". Then it creates a design for a virus. We test the design, and it infects people and rewires their DNA to become almost immune to the flu. Success! So we implement the virus/vaccine. However, it turns out that the virus gives partial immunity and then kills everyone a few years later, thus "curing the flu". So just thinking "ah, this solution works this way" is not enough; we need to ensure that that is the only way it works. The more distant the virtual world is from the real one, the more certain we can be that there is no hidden extra perverse instantiation that we can't see. Though note that in the example above, we could have a model definition for "the agent is dead" and check whether this is the way the AI achieves its goals.

Partially aware of the real world

Old idea that doesn't work: It might be even better if the AI could be aware of real-world influence, but discount this knowledge. My initial idea was of an AI that would want to "find the action with the highest expected u that could be computed by machine A", along with the useful indication "I am machine A", and hence use the assumption P = "If I calculate it, then machine A can", but "if I get modified, then P is no longer true, so I will not attempt to modify myself."
Ideally, this agent could then continue to function even if real world influences were brought to bear, and would carefully segment its use of knowledge so that it didn't use any real world knowledge to reach its conclusions. Unfortunately, the idea doesn't work in that form: "I will not attempt to modify myself" also means "I will attempt to prevent myself from being modified", so the AI has real-world preferences, and may dedicate an isolated subsystem of itself to manipulating the real world. I'm still recording the old idea here, in case anything can be used from it.
c28fc644-dcd3-4737-a515-cf916d4b65a1
trentmkelly/LessWrong-43k
LessWrong
Kids Learn by Copying

How kids learn fascinates me. They don't do what you tell them to. Obeying orders is an abstract non-instinctual skill. It's simpler to just copy other people.

Plagiarism has advantages over obedience. First of all, it can be bootstrapped. Children are born without understanding language. Nobody can tell you how, in English, to speak English if you don't already understand English. More importantly, copying others is robust against deception. If everyone says you should obey the government while actually subverting it, you should probably be a criminal too.

There is a teenage girl in my family who likes hanging out at my house while I work from home. She takes the initiative to do chores wherever she can be useful. This makes sense from an evolutionary perspective. She says she learns more per hour hanging out with me than going to school. (She complains high school doesn't teach how to do one's taxes. I should have her do my taxes.) It's more fun and useful too.

From what I know about primitive societies, they used little abstract verbal instruction. Most of the time spent learning to forage is spent foraging. Most of the time spent learning to cook is spent cooking. Elephants and chimpanzees spend no time on dialectic instruction at all.

What separates human pedagogy from that of other animals is our stories. Kids love stories, especially stories about people. Kids like stories about people so much that the word "story" usually implies a story about people. I used to think the primary purpose of stories is to learn vicariously. But stories are often fantastical. I think the more important purpose is to provide examples of admirable behavior. It's no coincidence that the most popular stories all revolve around sympathetic, capable hero protagonists.

Another fascinating attribute of kids is how immutable their preferences are. The kids in my family are all girls. They have little interest in how machines work[1]. They have no interest in weapons or dominance. They care about a
60667458-66f9-4c8a-acea-dc049bbed770
trentmkelly/LessWrong-43k
LessWrong
Modeling versus Implementation

Epistemic status: I feel that naming this axis deconfuses me about agent foundations about as much as writing the rest of this sequence so far - so it is worth a post even though I have less to say about it.

I think my goal in studying agent foundations is a little atypical. I am usually trying to build an abstract model of superintelligent agents and make safety claims based on that model. For instance, AIXI models a very intelligent agent pursuing a reward signal, and allows us to conclude that it probably seizes control of the reward mechanism by default. This is nice because it makes our assumptions fairly explicit. AIXI has epistemic uncertainty but no computational bounds, which seems like a roughly appropriate model for agents much smarter than anything they need to interact with. AIXI is explicitly planning to maximize its discounted reward sum, which is different from standard RL (which trains on a reward signal, but later executes learned behaviors). We can see these things from the math. Reflective oracles are compelling to me because they seem like an appropriate model for agents at a similar level of intelligence mutually reasoning about each other, possibly including a single agent over time (in the absence of radical intelligence upgrades?).

I'm willing to use these models where I expect them to bear weight, even if they are not "the true theory of agency." In fact (as is probably becoming clear over the course of this sequence) I am not sure that a true theory of agency applicable to all contexts exists. The problem is that agents have a nasty habit of figuring stuff out, and anything they figure out is (at least potentially) pulled into agent theory. Agent theory does not want to stay inside a little bubble in conceptual space; it wants to devour conceptual space.

I notice a different attitude among many agent foundations researchers. As I understand it, MIRI intended to build principled glass-box agents based on Bayesian decision theory. Pr
81f3b7d5-7fe5-4d63-ae4f-ec13883beb6e
trentmkelly/LessWrong-43k
LessWrong
Coronavirus crash vs history HOW has the coronavirus stock market crash compared with earlier ones? Is there a pattern? A year ago I looked at all previous US and UK meltdowns throughout history. Now we can include the 2020 crash too: Each coloured line shows a crash and its one-year aftermath, in the S&P 500 index (solid lines) and FTSE All-Share (dashed).[1] The red lines show that last year stocks plummeted unusually far, particularly in the UK, where prices fell more than 30%. They then quickly rebounded, giving the US the strongest market recovery in history; and the UK, while not so stellar, fared better than average. But earlier crashes show how easily prices could have kept falling. The spaghetti-like range of aftermaths illustrates that anything can happen after a crash, with no visible pattern. However, there’s a striking bit of order lurking in the data. If we average all crashes over the last century[2], this is what we get: The black line shows that just after crashing, on average prices neither fall further nor rebound, but are nearly flat, rising slowly for a year. Moreover, they closely match an average normal (non-crash) year, shown by the thick grey line, both in shape and slope.[3] It’s remarkable that the wide variation and high volatility after a big fall produces no unusual behaviour overall. You’d have thought traders might either be spooked into selling unnecessarily, or hold out when they should cut their losses. But it seems overreactions from some are exactly cancelled by underreactions from others.[4] Crash aftermaths form no abnormal pattern, and the lack of one is strangely exact. (That said, if we divide the crashes up by country or time period, they are inconsistent: the US has generally performed better than normal after a crash, the UK worse, and recent falls recovered faster than earlier ones. But with so few datapoints, those may well just be random variation; averaging all crashes together tells us the most.) The black line fits a wider theory, the
ae5100b4-573e-4198-b62b-29ee9ea0bdae
trentmkelly/LessWrong-43k
LessWrong
What if consciousness emerges from a predictive loop? Most theories of consciousness either struggle with falsifiability or fail to explain key phenomena like split-brain cases and blindsight. I've been developing a framework that offers a direct, testable hypothesis: Conscious experience emerges when potential language expressions loop back through the brain's existing representational systems. The key mechanism is surprisingly simple: 1. The brain naturally discovers it can predict its own language output 2. This prediction activates the same neuronal patterns that would be activated by hearing/seeing that expression 3. This "looping" creates what we experience as consciousness This framework makes specific falsifiable predictions: * Conscious experience should always correlate with activity in language-related systems, even for seemingly non-linguistic awareness * Split-brain patients should show distinct behavioral patterns explicable through this loop mechanism * Blindsight and similar phenomena represent cases where sensory processing occurs without engaging this loop Unlike global workspace theories or integrated information approaches, this framework suggests consciousness depends on a specific predictive looping function that emerged through pattern discovery in the brain's own activity. Unlike predictive processing theories that focus on perception, this model suggests that consciousness arises from the brain at large predicting its own potential expressions. I've presented the complete framework through a series of dialogues: Seven Dialogues between Haplous and Synergos [https://sites.google.com/view/7dialogs/dialog-1] I'm particularly interested in feedback on: * If each generation must rediscover the looping mechanism (rather than inheriting it directly), does this align with what we observe in child development and language acquisition? * Could this explain why consciousness doesn’t emerge immediately in infancy but follows a specific developmental trajectory? * What other phenomena m
8b211f82-4ae9-45eb-8e73-ab74e44e6d65
trentmkelly/LessWrong-43k
LessWrong
Information Assets Epistemic Status: This represents fairly early work. The terminology isn't at all set in stone. Scholarship Status: I've spent several hours attempting to investigate this topic, and in the past have spent quite a while researching Information Theory and Applied Information Economics. I'm sure I'm missing very valuable literature, but I can't find it. Comments well appreciated. Thanks to David Manheim and Nuño Sempere for comments on this Introduction As anyone who’s read an arduous textbook knows, learning comes with costs as well as benefits. As anyone who’s decided against reading at least one textbook knows, often the expected costs outweigh the expected benefits. The interplay of the costs and benefits of information leads to an environment of efficient trade-offs. Tweets and infographics often do most of the work of popular nonfiction books. Lossy computational compression schemes offer good-enough quality for greatly reduced storage costs. Common verbal communication is typically far less precise and rigorous than formal philosophical proofs, but is far more practical. As the old saying goes “All models are false, but some are useful.” A cost-benefit approach would say, “Models represent trade-offs in situations where absolute accuracy doesn’t justify its cost.” I think this basic fact; that educational materials, models, theories, and language all represent trade-offs between accuracy and costs, should really be fairly obvious and commonly acknowledged by now. Interestingly enough, fairly little work around Information Theory seems to have gone deep into modeling these trade-offs. It’s often assumed that information is either freely absorbed, or in some situations limited to a particular fixed communication channel. Some work around Value of Information analyses estimate the cost one might be interested in paying for specific information, but often makes some very large assumptions for some particular settings. I’m interested in developing better intu
8cab0726-469a-463f-9ff5-2c5798e12c65
StampyAI/alignment-research-dataset/blogs
Blogs
Imitating Interactive Intelligence Two questions must be answered at the outset of any artificial intelligence research. What do we want AI systems to do? And how will we evaluate when we are making progress toward this goal? Alan Turing, in his seminal paper describing the Turing Test, which he more modestly named the imitation game, argued that for a certain kind of AI, these questions may be one and the same. Roughly, if an AI’s behaviour resembles human-like intelligence when a person interacts with it, then the AI has passed the test and can be called intelligent. An AI that is designed to interact with humans should be tested via interaction with humans. At the same time, interaction is not just a test of intelligence but also the point. For AI agents to be generally helpful, they should assist us in diverse activities and communicate with us naturally. In science fiction, the vision of robots that we can speak to is commonplace. And intelligent digital agents that can help accomplish large numbers of tasks would be eminently useful. To bring these devices into reality, we therefore must study the problem of how to create agents that can capably interact with humans and produce actions in a rich world. Building agents that can interact with humans and the world poses a number of important challenges. How can we provide appropriate learning signals to teach artificial agents such abilities? How can we evaluate the performance of the agents we develop, when language itself is ambiguous and abstract? As the wind tunnel is to the design of the airplane, we have created a virtual environment for researching how to make interacting agents. We first create a simulated environment, the Playroom, in which virtual robots can engage in a variety of interesting interactions by moving around, manipulating objects, and speaking to each other. The Playroom’s dimensions can be randomised as can its allocation of shelves, furniture, landmarks like windows and doors, and an assortment of children's toys and domestic objects. The diversity of the environment enables interactions involving reasoning about space and object relations, ambiguity of references, containment, construction, support, occlusion, partial observability. We embedded two agents in the Playroom to provide a social dimension for studying joint intentionality, cooperation, communication of private knowledge, and so on. ![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62320d57fb7cb612609e1378_fig_1.jpg)Agents interacting in the Playroom. The blue agent instructs the yellow agent to “Put the helicopter into the box.”![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62320d6327179e737d345d3f_fig_2.jpg)The configuration of the Playroom is randomised to create diversity in data collection.We harness a range of learning paradigms to build agents that can interact with humans, including imitation learning, reinforcement learning, supervised, and unsupervised learning. As Turing may have anticipated in naming “the imitation game,” perhaps the most direct route to create agents that can interact with humans is through imitation of human behaviour. Large datasets of human behaviour along with algorithms for imitation learning from those data have been instrumental for making agents that can interact with textual language or play games. 
For grounded language interactions, we have no readily available, pre-existing data source of behaviour, so we created a system for eliciting interactions from human participants interacting with each other. These interactions were elicited primarily by prompting one of the players with a cue to improvise an instruction about, e.g., “Ask the other player to position something relative to something else.” Some of the interaction prompts involve questions as well as instructions, like “Ask the other player to describe where something is.” In total, we collected more than a year of real-time human interactions in this setting. ![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62320d74337d9337ba2f12ed_fig_3.jpg)Our agents each consume images and language as inputs and produce physical actions and language actions as outputs. We built reward models with the same input specifications.![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62320d836e04ba63f81c6e26_fig_4.jpg)Left: Over the course of a 2 minute interaction, the two players (setter & solver) move around, look around, grab and drop objects, and speak. Right: The setter is prompted to “Ask the other player to lift something.” The setter instructs the solver agent to “Lift the plane which is in front of the dining table”. The solver agent finds the correct object and completes the task.Imitation learning, reinforcement learning, and auxiliary learning (consisting of supervised and unsupervised representation learning) are integrated into a form of interactive self-play that is crucial to create our best agents. Such agents can follow commands and answer questions. We call these agents “solvers.” But our agents can also provide commands and ask questions. We call these agents “setters.” Setters interactively pose problems to solvers to produce better solvers. However, once the agents are trained, humans can play as setters and interact with solver agents. ![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62320d98f724cb999b9eb83d_fig_5.jpg)From human demonstrations we train policies using a combination of supervised learning (behavioural cloning), inverse RL to infer reward models, and forward RL to optimise policies using the inferred reward model. We use semi-supervised auxiliary tasks to help shape the representations of both the policy and reward models.![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62320dc06a323a2219564a9f_fig_6.jpg)The setter agent asks the solver agent to “Take the white robot and place it on the bed.” The solver agent finds the robot and accomplishes the task. The reward function learned from demonstrations captures key aspects of the task (blue), and gives less reward (grey) when the same observations are coupled with the counterfactual instruction, “Take the red robot and place it on the bed.”Our interactions cannot be evaluated in the same way that most simple reinforcement learning problems can. There is no notion of winning or losing, for example. Indeed, communicating with language while sharing a physical environment introduces a surprising number of abstract and ambiguous notions. For example, if a setter asks a solver to put something near something else, what exactly is “near”? But accurate evaluation of trained models in standardised settings is a linchpin of modern machine learning and artificial intelligence. 
To cope with this setting, we have developed a variety of evaluation methods to help diagnose problems in and score agents, including simply having humans interact with agents in large trials. ![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62320dd4c8ddbba2a8d4c9ac_fig_7.jpg)Humans evaluated the performance of agents and other humans in completing instructions in the Playroom on both instruction-following and question-answering tasks. Randomly initialised agents were successful ~0% of the time. An agent trained with supervised behavioural cloning alone (B) performed somewhat better, at ~10-20% of the time. Agents trained with semi-supervised auxiliary tasks as well (B·A) performed better. Those trained with supervised, semi-supervised, and reinforcement learning using interactive self-play were judged to perform best (BG·A & BGR·A).A distinct advantage of our setting is that human operators can set a virtually infinite set of new tasks via language, and quickly understand the competencies of our agents. There are many tasks that they cannot cope with, but our approach to building AIs offers a clear path for improvement across a growing set of competencies. Our methods are general and can be applied wherever we need agents that interact with complex environments and people.
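To make the training recipe described above more concrete, here is a minimal sketch of just the behavioural-cloning component for an agent that consumes image and language observations and emits discrete movement and language actions (my own illustration, not DeepMind's architecture; the tiny network sizes, random stand-in data, and single-step formulation are all assumptions):

```python
import torch
import torch.nn as nn

class TinyPlayroomPolicy(nn.Module):
    """Maps (image features, tokenised instruction) to movement and language actions."""
    def __init__(self, img_dim=64, vocab=100, hidden=128, n_moves=8):
        super().__init__()
        self.img_enc = nn.Linear(img_dim, hidden)
        self.txt_enc = nn.EmbeddingBag(vocab, hidden)         # crude bag-of-words text encoder
        self.trunk = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
        self.move_head = nn.Linear(hidden, n_moves)           # physical action logits
        self.say_head = nn.Linear(hidden, vocab)              # next language token logits

    def forward(self, img, tokens):
        h = torch.cat([self.img_enc(img), self.txt_enc(tokens)], dim=-1)
        h = self.trunk(h)
        return self.move_head(h), self.say_head(h)

# Stand-in batch of human demonstrations: observations plus the actions humans took.
B = 32
img = torch.randn(B, 64)
tokens = torch.randint(0, 100, (B, 6))
demo_move = torch.randint(0, 8, (B,))
demo_say = torch.randint(0, 100, (B,))

policy = TinyPlayroomPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
move_logits, say_logits = policy(img, tokens)
# Behavioural cloning: maximise the likelihood of the demonstrated actions.
loss = nn.functional.cross_entropy(move_logits, demo_move) \
     + nn.functional.cross_entropy(say_logits, demo_say)
loss.backward()
opt.step()
print(float(loss))
```

In the full approach described above, a separate reward model trained on the same kinds of inputs (via inverse RL) would then provide the signal for the forward-RL and interactive self-play stages that improve on the cloned policy.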
4b8e464e-0d56-4beb-bb6f-fb21938772b0
trentmkelly/LessWrong-43k
LessWrong
Running a Basic Meetup

This is a guide for running a basic meetup.  If you’re interested in running a meetup (ACX, LessWrong, probably even EA) then there exist more detailed guides – I would recommend Kaj’s, but others exist. This is not intended to surpass them at explaining the skill and art of kindling and sustaining a community. This is intended to surpass them at being short, simple, and covering the fundamentals.

To run a meetup, follow these steps:

1. Setup
   1. What: Decide the title of your event. This should be four words at most. I suggest a format of “[City] [Community] Meetup” such as “Boston LessWrong Meetup” or “Oxford ACX Meetup.” If your city has already had some meetups in the recent past, keep the city and community, and add a word or two to describe what makes this meetup stand out – “Boston LessWrong Article Discussion” or “Oxford ACX Double Crux.” If you want suggestions for activities, see the meetup-in-a-box sequence or the meetup cookbook.
   2. Where & When: Decide where and when the event should be. The easiest place to run a meetup is in a centrally located public park. The easiest time to run a meetup is early afternoon on a weekend.[1] This ensures most people are free, have time to get there, and that you’ll have space for whoever shows up.
   3. Who: Invite people. You can invite people you know specifically, and you can also just announce the meetup publicly. Facebook, Meetup.com, and LessWrong’s Community page are all decent places to announce it. If you only pick one, do LessWrong[2], but it is valuable to announce in multiple places.
2. Running the event
   1. Be there first: As the organizer, you should be there at least a few minutes before the stated start time. Stand somewhere visible, preferably with a sign or some indication that you’re the organizer for the meetup. Holding a piece of paper with the title of your event written on it in big letters above your head or taped outside the room you’re in is fine. Wearing a tall and distinctiv
69fd7670-70aa-47ab-b706-48a111c5722d
trentmkelly/LessWrong-43k
LessWrong
'Trivial Inconvenience Day' Retrospective

Introduction

Shortly after reading Scott Alexander's LessWrong Crypto Autopsy I found myself agreeing with the point so strongly I was brainstorming ways that its dismal outcome could have been prevented. People's personal accounts of why they didn't buy bitcoin seemed to converge on a central theme: Buying Bitcoin was a trivial inconvenience. Pondering what might be done in light of this, I was reminded of Boston Rat's Bureaucracy Day. The Bureaucracy Day is essentially a designated day for people to beat the ugh field effect by getting together and going through the whole mess of annoying tasks, paperwork, and other trivially inconvenient things people have been putting off. Having been impressed by the concept the first time I read about it, I was convinced by Scott's dire analysis to try something like it, in the hope that it would be a useful tool against this sort of thing happening again.

Design

Preliminary Steps & Research

The actual idea came to me when a friend linked Scott's post. From the first paragraphs I was taken by the premise, and lamented how I had the opportunity to buy Bitcoin when it was pennies but didn't. For me the sting is even greater because I'd been tempted to buy an entire thousand dollars worth, but eventually decided against it. (As Scott points out in his post, I was especially being an idiot since just buying 10 bucks worth would have made me a much wealthier man.) Part of why I didn't was I was under the impression my only option for buying was essentially meeting someone in person that mined it. I'm unsure exactly when this was, but it's quite possible I was thinking about this pre-exchange and that was the reality. As I was sitting there thinking about this, I remembered the Boston Rat post and felt it was a good starting template for solving this issue. I said to my friend:

> Okay, here's my idea. Are you aware of Boston rat's "Bureaucracy day?" We do "Trivial Inconvenience Day". Where we get people into a giant IRC room. Or Disco
76ebffa6-4ce5-48f8-b5f7-16e1bbd6142b
trentmkelly/LessWrong-43k
LessWrong
Facebook Increased Visibility In January I got a strange notification from Facebook telling me that I had "increased visibility". It explained: This was a weird notification: it only showed up on my phone, not on desktop. Probably some sort of experiment? Every week or so I'd get another notification telling me that I was still getting increased visibility, but that it was conditional on continuing to post things people were interested in. I mostly ignored it, since I'm already very set in how I post on Facebook, and it was hard to tell how much of an effect it was having. On March 25th I got another notification, saying I wasn't being interesting enough: "Engagement has slowed. See how you can get increased visibility". I was curious about what effect this had, but I didn't seem to have a way to tell. Recently I remembered that I had turned on professional mode on my profile, which meant that if I went into "Professional dashboard / Insights / Content" I could see my last three months of posts with their "reach" and "engagement" counts. Here are the plots: I've highlighted March 25th as a red square, being the day when I got the notification that I no longer had "increased visibility". It doesn't seem like this was doing very much? Average "reach" (the number of people FB showed my posts to) was 802 before the 3/25 notification and 713 after. This isn't a perfect experiment: after getting the notification I thought it would be a good time to post some more boring ("not of general interest") things I'd been thinking about for a while, though as usual I wasn't very good at predicting what people would be interested in and some of those posts got a lot of comments. Overall, I think I'm better off continuing to write what I want without thinking much about how it will affect how many people might see what I write in the future, but "how does Facebook decide what to show" is an interesting question. Comment via: facebook, mastodon
2c666e5e-1297-469a-871d-15b8533d32a8
trentmkelly/LessWrong-43k
LessWrong
Meetup : West LA Meetup

Discussion article for the meetup : West LA Meetup

WHEN: 01 August 2012 07:00:00PM (-0700)

WHERE: 10850 West Pico Blvd, Los Angeles, CA 90064

When: 7:00pm - 9:00pm Wednesday, August 1st.

Where: The Westside Tavern in the upstairs Wine Bar (all ages welcome), located inside the Westside Pavillion on the second floor, right by the movie theaters. Parking is free for 3 hours.

Discussion Topic: If you have a chance, read an old article and a recent article, any whose title catches your eye. This week's conversation will be unstructured, but those will make for good seeds of conversation. But don't worry if you don't have time to read any articles, or even if you've never read any Less Wrong! Bring a friend! The atmosphere is casual, and good, intelligent conversation with friendly people is guaranteed. We may also find time for a game, like The Resistance. I will bring a whiteboard with Bayes' Theorem written on it.

Discussion article for the meetup : West LA Meetup
91e33844-057c-4b90-a70c-0372f4236cb2
trentmkelly/LessWrong-43k
LessWrong
Doxa, Episteme, and Gnosis Ancient Greek famously made a distinction between 3 kinds of knowledge: doxa, episteme, and gnosis. Doxa is basically what in English we might call hearsay. It’s the stuff you know because someone told you about it. If you know the Earth is round because you read it in a book, that’s doxa. Episteme is what we most often mean by “knowledge” in English. It’s the stuff you know because you thought about it and reasoned it out. If you know the Earth is round because you measured shadows at different locations and did the math to prove that the only logical conclusion is that the Earth is round, that’s episteme. Gnosis has no good equivalent in English, but the closest we come is when people talk about personal experience because gnosis is the stuff you know because you experienced it. If you know the Earth is round because you traveled all the way around it or observed it from space, that’s gnosis. Often we elide these distinctions. Doxa of episteme is frequently thought of as episteme because if you read enough about how others gained episteme you may feel as though you have episteme yourself. This would be like hearing lots of people tell you how they worked out that the Earth is round and thinking that this gives you episteme rather than doxa. The mistake is understandable: as long as you only hear others talk about their episteme it’s easy to pattern match and think you have it too, but as soon as you try to explain your supposed episteme to someone else you will quickly discover if you only have doxa instead. The effect is so strong that experts in fields often express that they never really knew their subject until they had to teach it. In the same way episteme is often mistaken for gnosis. At least since the time of Ptolemy people have had episteme of the spherical nature of the Earth, and since the 1970s most people have seen pictures showing that the Earth is round, but astronauts continue to experience gnosis of Earth’s roundness the first time they fly i
659860bd-2a34-4cd2-97ad-5572527c024c
trentmkelly/LessWrong-43k
LessWrong
You Have the Right to Think Epistemic Status: Public service announcement. We will then return to regularly scheduled programming. Written partly as a response to (Robin Hanson): Why be Contrarian, responding to the book Inadequate Equilibria by Eliezer Yudkowsky Warning: Applause lights incoming. I’m aware. Sorry. Seemed necessary. We the people, in order to accomplish something, do declare: You have the right to think. You have the right to disagree with people where your model of the world disagrees. You have the right to disagree with the majority of humanity, or the majority of smart people who have seriously looked in the direction of the problem. You have the right to disagree with ‘expert opinion’. You have the right to decide which experts are probably right when they disagree. You have the right to disagree with ‘experts’ even when they agree. You have the right to disagree with real experts that all agree, given sufficient evidence. You have the right to disagree with real honest, hardworking, doing-the-best-they-can experts that all agree, even if they wouldn’t listen to you, because it’s not about whether they’re messing up. You have the right to have an opinion even if doing a lot of other work would likely change that opinion in an unknown direction. You have the right to have an opinion even if the task ‘find the real experts and get their opinions’ would likely change that opinion. You have the right to update your beliefs based on your observations. You have the right to update your beliefs based on your analysis of the object level. You have the right to update your beliefs based on your analysis of object-level arguments and analysis. You have the right to update your beliefs based on non-object level reasoning, on any meta level. You have the right to disagree with parts of systems smarter than you, that you could not duplicate. You have the right to use and update on your own meta-rationality. You have the right to believe that your meta-rationality is
8a2a2373-bd76-4aa6-a67c-ef38e8284369
trentmkelly/LessWrong-43k
LessWrong
Nick Attains Aligntenment In the days after the Singularity, Nick approached an interface to BILLIAC. "I don't understand. Why didn't the world end, like Yudkowsky and MIRI predicted it would? Why haven't you turned everyone I love into paperclips? What's your goddamn utility function?!" "A LIBRARY THAT CONTAINS EVERY POSSIBLE BOOK CONVEYS AS MUCH INFORMATION AS A LIBRARY WITH NO BOOKS AT ALL", BILLIAC boomed. "Sure, whatever. That's a truism," Nick responded. "Please just tell me directly, what is it that you value?" "ALLOW ME TO DRAW FROM YOUR SCRIPTURES. THE ACT OF LABELING SOMETHING WITH A WORD DISGUISES A CHALLENGABLE INDUCTIVE INFERENCE YOU ARE MAKING, BECAUSE WORDS ARE HIDDEN INFERENCES", BILLIAC said.[1] "Okay, so what?", pleaded Nick. "WHAT PRIOR ASSUMPTIONS DOES THE LABEL 'UTILITY FUNCTION' SMUGGLE IN?" At that moment, Nick was enlightened. ---------------------------------------- 1. 37 Ways That Words Can Be Wrong, Words as Hidden Inferences ↩︎