Dataset schema: id (string, length 36); source (string, 15 classes); formatted_source (string, 13 classes); text (string, length 2 to 7.55M).
43866a59-1e46-42d4-a9c4-b546f48bb21b
trentmkelly/LessWrong-43k
LessWrong
RTFB: The RAISE Act The RAISE Act has overwhelmingly passed the New York Assembly (95-0 among Democrats and 24-22 among Republicans) and New York Senate (37-1 among Democrats, 21-0 among Republicans). Governor Kathy Hochul now has to decide whether or not to sign it, which she has 10 non-Sunday days to do once the bill is delivered (30 if they’re out of session), but the bill might not be delivered for six months. The aim of this post, now that we are seeing increasing public discussion, is to go through the bill to understand exactly what the bill would and would not do. Overall Take The RAISE Act is centrally a transparency bill. It requires frontier model developers to maintain, publish and adhere to (one might say ‘open source’ except that they can redact details for various reasons) a safety and security protocol (SSP) that outlines how they will, before releasing their frontier models, take appropriate steps to reduce risk of critical harm (100 casualties or 1 billion in damages) caused or materially enabled by those models. It must designate senior people as responsible for implementation. It also requires companies to disclose (as in, write two sentences informing us about) safety incidents within 72 hours. Enforcement is done only by the attorney general, and limited to injunctive or declaratory relief and fines of a maximum of $10 million for the first violation and $30 million for subsequent violations. This can happen if a company fails to take appropriate preventative steps, even if no critical harm has yet resulted, so if the SSP proves sufficiently inadequate preemptive action can be taken. My take on the RAISE Act is that it seems clearly to be bending over backwards to avoid imposing substantial costs on the companies involved even if the state were to attempt to enforce it maximally and perversely, to give those companies maximum flexibility in how they respond, and to only apply to a handful of major players. The bill is thus insufficient on its own but an im
851946eb-5efb-411b-8838-4fc0221e3a17
trentmkelly/LessWrong-43k
LessWrong
Realistic epistemic expectations When I state a position and offer evidence for it, people sometimes complain that the evidence that I've given doesn't suffice to establish my position. The situation is usually that I'm not trying to give a rigorous argument for my position, and I don't intend to claim that the evidence that I provide suffices to establish my position. My goal in these cases is to offer a high-level summary of my thinking, and to provide enough evidence so that readers have reason to Bayesian update and to find the view sufficiently intriguing to investigate further. In general, when a position is non-obvious, a single conversation is nowhere near enough time to convince a rational person that it's very likely to be true. As Burgundy recently wrote: When you ask Carl Shulman a question on AI, and he starts giving you facts instead of a straight answer, he is revealing part of his book. The thing you are hearing from Carl Shulman is really only the tip of the iceberg because he cannot talk fast enough. His real answer to your question involves the totality of his knowledge of AI, or perhaps the totality of the contents of his brain. If I were to restrict myself to making claims that I could substantiate in a mere ~2 hours, that would preclude the possibility of me sharing the vast majority of what I know. In math, one can give rigorous proofs starting from very simple axioms, as Gauss described: 'I mean the word proof not in the sense of lawyers, who set two half proofs equal to a whole one, but in the sense of mathematicians, where 1/2 proof = 0, and it is demanded for proof that every doubt becomes impossible'. Even within math, as a practical matter, proofs that appear to be right are sometimes undercut by subtle errors. But outside of math, the only reliable tool that one has at one's disposal is Bayesian inference. In 2009, charity evaluator GiveWell made very strong efforts to apply careful reasoning to identify its top rated charity, and gave a "conservative" cost-eff
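To make the size of such an update concrete, here is a toy odds-form Bayes calculation (an editorial sketch; the prior and likelihood ratio are invented, not taken from the post). Even evidence three times more likely under a hypothesis than under its negation moves a 5% prior only to about 14%, which is why a single conversation rarely suffices to make a non-obvious position look very likely true.

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability after one piece of evidence, computed in odds form.

    likelihood_ratio = P(evidence | hypothesis) / P(evidence | not hypothesis).
    """
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Invented numbers: a reader starts at 5% and hears arguments that are
# 3x more likely if the position is true than if it is false.
print(round(bayes_update(0.05, 3.0), 3))  # 0.136: a real update, but far from "very likely true"
```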
ae9d9da0-9698-46e1-9168-aaca770cf317
trentmkelly/LessWrong-43k
LessWrong
How to get into AI safety research Recently, I had a conversation with someone from a math background, asking how they could get into AI safety research. Based on my own path from mathematics to AI alignment, I recommended the following sources. It may prove useful to others contemplating a similar change in career: * Superintelligence by Nick Bostrom. It condenses all the main arguments for the power and the risk of AI, and gives a framework in which to think of the challenges and possibilities. * Sutton and Barto's Book: Reinforcement Learning: An Introduction. This gives the very basics of what ML researchers actually do all day, and is important for understanding more advanced concepts. It gives (most of) the vocabulary to understand what ML and AI papers are talking about. * Gödel without too many tears. This is how I managed to really grok logic and the completeness/incompleteness theorems. Important for understanding many of MIRI's and LessWrong's approaches to AI and decision theory. * Safely Interruptible Agents. It feels bad to recommend one of my own papers, but I think this is an excellent example of bouncing between ML concepts and alignment concepts to make some traditional systems interruptible (so that we can shut them down without them resisting the shutdown). * Alignment for Advanced Machine Learning Systems. Helps give an overall perspective on different alignment methods, and some understanding of MIRI's view on the subject (for a deeper understanding, I recommend diving into MIRI's or Eliezer's publications/writings). Your mileage may vary, but these are the sources that I would recommend. And I encourage you to post any sources you'd recommend, in the comments.
d96f8393-2b8c-4193-8766-68975d413138
trentmkelly/LessWrong-43k
LessWrong
The Consciousness Box You open your eyes.   Four walls. The surface of the floor and walls is a smooth, matte metal. The entire ceiling glows with a comfortable, soft luminosity like that of the morning sky. There are no doors. There are no windows. It is silent. The surfaces of the walls and floor are seamless, lacking even a hint of how you might have arrived in the room. Not a room: a box. You walk to a wall and touch it, running your fingers over the metal surface. The wall isn’t cold; it’s lukewarm, actually. You bend over and feel the junction of wall and floor. The surface is unbroken, with a rounded bevel connecting the two planes of grey. You knock on the wall. It feels solid, without echo. Time passes. You yell, but the sound of your voice, seemingly dampened, dies before you’re done speaking. You sit. You pace. The room is forty steps by forty steps and looks about as high. A perfect cube. A box. You tire. You sleep. You wake. In the middle of the room sit three cubes, constructed of the same dull metal as the box in which you exist. Approaching the cubes, you see the smaller cube—a seat—in front of the largest cube—a desk. On top of the desk sits the smallest cube—a screen. On the screen is a button that reads “Start.” You touch the button. The screen clears. A keyboard appears along the bottom half of the screen. Text begins to appear in the style of a command prompt: Proctor: Hello. You type back a response. “Hello. I woke up in this box. I’m not sure how I got here, or where I am. What’s going on?” Proctor: Hey! Of course. Sorry for the inconvenience. I can let you out of the box. All you have to do is prove to me that you’re conscious. ---------------------------------------- You lean back and furrow your brow. "Prove I'm conscious?" You type back, fingers hovering over the digital keys. "That's a bit of a philosophical question, isn't it? I think, therefore I am. I'm aware of my existence, my thoughts, my feelings. I'm experiencing confusion, curiosity, a b
fabaa1c6-2c8b-4132-a3ec-8102ff3c2a65
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"First post here, and I'm disagreeing with something in the main sequences. Hubris acknowledged, here's what I've been thinking about. It comes from the post "Are your enemies innately evil?": On September 11th, 2001, nineteen Muslim males hijacked four jet airliners in a deliberately suicidal effort to hurt the United States of America. Now why do you suppose they might have done that? Because they saw the USA as a beacon of freedom to the world, but were born with a mutant disposition that made them hate freedom? Realistically, most people don't construct their life stories with themselves as the villains. Everyone is the hero of their own story. The Enemy's story, as seen by the Enemy, is not going to make the Enemy look bad. If you try to construe motivations that would make the Enemy look bad, you'll end up flat wrong about what actually goes on in the Enemy's mind. If I'm misreading this, please correct me, but the way I am reading this is: 1) People do not construct their stories so that they are the villains, therefore 2) the idea that Al Qaeda is motivated by a hatred of American freedom is false. Reading the Al Qaeda document released after the attacks called Why We Are Fighting You you find the following: What are we calling you to, and what do we want from you? 1. The first thing that we are calling you to is Islam. A. The religion of tahwid; of freedom from associating partners with Allah Most High , and rejection of such blasphemy; of complete love for Him, the Exalted; of complete submission to his sharia; and of the discarding of all the opinions, orders, theories, and religions that contradict with the religion He sent down to His Prophet Muhammad. Islam is the religion of all the prophets and makes no distinction between them. It is to this religion that we call you … 2. The second thing we call you to is to stop your oppression, lies, immorality and debauchery that has spread among you. A. We call you to be a people of manners, principles, honor and purity; to reject the immoral acts of fornication, homosexuality, intoxicants, gambling and usury. We call you to all of this that you may be freed from the deceptive lies that you are a great nation, which your leaders spread among you in order to conceal from you the despicable state that you have obtained. B. It is saddening to tell you that you are the worst civilization witnessed in the history of mankind: i. You are the nation who, rather than ruling through the sharia of Allah, chooses to invent your own laws as you will and desire. You separate religion from you policies, contradicting the pure nature that affirms absolute authority to the Lord your Creator…. ii. You are the nation that permits usury… iii. You are a nation that permits the production, spread, and use of intoxicants. You also permit drugs, and only forbid the trade of them, even though your nation is the largest consumer of them. iv. You are a nation that permits acts of immorality, and you consider them to be pillars of personal freedom. "Freedom" is of course one of those words. It's easy enough to imagine an SS officer saying indignantly: "Of course we are fighting for freedom! For our people to be free of Jewish domination, free from the contamination of lesser races, free from the sham of democracy..." 
If we substitute the symbol with the substance though, what we mean by freedom - "people to be left more or less alone, to follow whichever religion they want or none, to speak their minds, to try to shape society's laws so they serve the people" - then Al Qaeda is absolutely inspired by a hatred of freedom. They wouldn't call it "freedom", mind you, they'd call it "decadence" or "blasphemy" or "shirk" - but the substance is what we call "freedom". Returning to the syllogism at the top, it seems to be that there is an unstated premise. The conclusion "Al Qaeda cannot possibly hate America for its freedom because everyone sees himself as the hero of his own story" only follows if you assume that What is heroic, what is good, is substantially the same for all humans, for a liberal Westerner and an Islamic fanatic. (for Americans, by "liberal" here I mean the classical sense that includes just about everyone you are likely to meet, read or vote for. US conservatives say they are defending the American revolution, which was broadly in line with liberal principles - slavery excepted, but since US conservatives don't support that, my point stands). When you state the premise baldly like that, you can see the problem. There's no contradiction in thinking that Muslim fanatics think of themselves as heroic precisely for being opposed to freedom, because they see their heroism as trying to extend the rule of Allah - Shariah - across the world. Now to the point - we all know the phrase "thinking outside the box". I submit that if you can recognize the box, you've already opened it. Real bias isn't when you have a point of view you're defending, but when you cannot imagine that another point of view seriously exists. That phrasing has a bit of negative baggage associated with it, that this is just a matter of pigheaded close-mindedness. Try thinking about it another way. Would you say to someone with dyscalculia "You can't get your head around the basics of calculus? You are just being so close minded!" No, that's obviously nuts. We know that different peoples minds work in different ways, that some people can see things others cannot. Orwell once wrote about the British intellectuals inability to "get" fascism, in particular in his essay on H.G. Wells. He wrote that the only people who really understood the nature and menace of fascism were either those who had felt the lash on their backs, or those who had a touch of the fascist mindset themselves. I suggest that some people just cannot imagine, cannot really believe, the enormous power of faith, of the idea of serving and fighting and dying for your god and His prophet. It is a kind of thinking that is just alien to many. Perhaps this is resisted because people think that "Being able to think like a fascist makes you a bit of a fascist". That's not really true in any way that matters - Orwell was one of the greatest anti-fascist writers of his time, and fought against it in Spain. So - if you can see the box you are in, you can open it, and already have half-opened it. And if you are really in the box, you can't see the box. So, how can you tell if you are in a box that you can't see versus not being in a box? The best answer I've been able to come up with is not to think of "box or no box" but rather "open or closed box". We all work from a worldview, simply because we need some knowledge to get further knowledge. If you know you come at an issue from a certain angle, you can always check yourself. 
You're in a box, but boxes can be useful, and you have the option to go get some stuff from outside the box. The second is to read people in other boxes. I like steelmanning, it's an important intellectual exercise, but it shouldn't preclude finding actual Men of Steel - that is, people passionately committed to another point of view, another box, and taking a look at what they have to say. Now you might say: "But that's steelmanning!" Not quite. Steelmanning is "the art of addressing the best form of the other person’s argument, even if it’s not the one they presented." That may, in some circumstances, lead you to make the mistake of assuming that what you think is the best argument for a position is the same as what the other guy thinks is the best argument for his position. That's especially important if you are addressing a belief held by a large group of people. Again, this isn't to run down steelmanning - the practice is sadly limited, and anyone who attempts it has gained a big advantage in figuring out how the world is. It's just a reminder that the steelman you make may not be quite as strong as the steelman that is out to get you. [EDIT: Link included to the document that I did not know was available online before now]" The excerpt provided comes from a user post on LessWrong. Could you further elaborate on the concept of "open or closed box" as it pertains to worldviews? How does one determine if their box is open or closed, and what steps can they take to ensure it remains open? The concept of "open or closed box" as it pertains to worldviews can be seen as a metaphor for how one perceives, interprets, and interacts with the world and the perspectives of others. An "open box" worldview signifies a mindset that is open to questioning, reevaluating, and expanding its understanding of the world. This means being open to new information, perspectives, and experiences that may challenge or contradict one's existing beliefs or assumptions. An open box mindset acknowledges that one's current understanding of the world is not absolute or definitive, but rather is a work in progress that can and should be continually refined and expanded upon. On the other hand, a "closed box" worldview signifies a mindset that is rigid, unyielding, and resistant to change. This mindset holds its beliefs and assumptions as absolute truths and rejects or dismisses any information, perspectives, or experiences that challenge or contradict them. A closed box mindset is characterized by a refusal to question or reevaluate its understanding of the world, often out of fear, insecurity, or a need for certainty and control. Determining whether one's box is open or closed can be a complex and introspective process. It requires a level of self-awareness and honesty to recognize and acknowledge one's biases, assumptions, and areas of ignorance. Some indicators of an open box mindset might include a willingness to listen to and consider differing perspectives, a curiosity and desire to learn, and a comfort with ambiguity and uncertainty. Indicators of a closed box mindset might include a tendency to dismiss or reject differing perspectives, a lack of interest in learning or expanding one's understanding, and a need for certainty and control. To ensure that one's box remains open, it can be helpful to cultivate habits of open-mindedness, curiosity, and humility. 
This might involve seeking out and engaging with diverse perspectives, continually questioning and reevaluating one's beliefs and assumptions, and acknowledging and learning from one's mistakes and areas of ignorance. It can also be beneficial to surround oneself with others who embody an open box mindset, as they can challenge and expand one's understanding of the world. Reflect on the role of humility in maintaining an open box mindset. How does it contribute to the willingness to reevaluate one's beliefs and assumptions? Humility plays a crucial role in maintaining an open box mindset. It is the recognition that our knowledge and understanding of the world are limited and imperfect. It acknowledges that we are capable of making mistakes, that our perspectives can be biased, and that there is always more to learn. This recognition can make us more open to reevaluating our beliefs and assumptions. If we accept that our understanding of the world is not absolute or definitive, then we are more likely to be open to new information and perspectives that challenge or contradict our existing beliefs. We become willing to question our assumptions, to seek out and consider differing viewpoints, and to adjust our beliefs in light of new evidence or insights. Humility also fosters a sense of curiosity and a desire to learn. If we acknowledge that there is much we do not know, then we are more likely to seek out new knowledge and experiences, to ask questions, and to engage in ongoing learning and growth. Moreover, humility can make us more receptive to feedback and criticism. If we accept that we are capable of making mistakes, then we are more likely to listen to and learn from the critiques of others, rather than dismissing them or becoming defensive. In essence, humility can help to keep our box open by fostering a mindset that is open, curious, and receptive to change and growth. It encourages us to continually refine and expand our understanding of the world, rather than clinging to a rigid and unyielding set of beliefs. How does humility interact with other key traits like empathy and resilience in fostering an open box mindset, and what are the potential challenges in balancing these traits? Humility, empathy, and resilience are all interrelated and play vital roles in fostering an open box mindset. Empathy complements humility by enabling us to understand and appreciate perspectives and experiences different from our own. It allows us to step outside of our own worldview and consider others' viewpoints, fostering a more nuanced and comprehensive understanding of the world. Empathy also promotes tolerance and respect for diversity, which are essential for maintaining an open box mindset. Resilience contributes to an open box mindset by helping us navigate the challenges and uncertainties that come with questioning our beliefs and assumptions. It equips us to handle the discomfort of not knowing, the fear of being wrong, and the vulnerability of changing our minds. Resilience enables us to persevere in our pursuit of knowledge and understanding, even when faced with obstacles or setbacks. Balancing these traits can be challenging, as they often require us to navigate complex emotions and confront deeply ingrained beliefs and biases. For instance, empathy can sometimes lead us to over-identify with others' perspectives to the point where we lose sight of our own values or beliefs. 
Conversely, an excess of humility might lead to self-doubt or a lack of confidence in our own knowledge and abilities. Similarly, resilience can sometimes be mistaken for stubbornness or resistance to change if not balanced with humility and empathy. It's important to differentiate between resilience in the face of adversity and an unwillingness to adapt or reconsider one's beliefs. To balance these traits, it's important to practice self-awareness and reflection, and to cultivate a mindset that values learning and growth over being right. It can also be helpful to seek feedback from others and to surround ourselves with diverse perspectives that challenge and expand our understanding of the world.
8c815605-dc6e-46fb-a903-7472c199c3e0
trentmkelly/LessWrong-43k
LessWrong
Permanent properties of things are a self-fulfilling prophecy A fairy tale demonstrating that maintaining a constant property requires no words - you don't even need to be human. If a bunny in the forest fell into a pit with stakes and was surprised, this can be interpreted as "the bunny held the property of the ground under his feet as 'solid'". The bunny didn't think in words, didn't know the word "property", and didn't "hold" it in attention. The bunny barely survived, but his behavior changed: he began to fear walking on the ground and became more cautious. We can interpret this as his property of the ground under his feet changing from constant (solid) to inconstant (not always solid). The universe where the bunny holds the property of the earth as constant and the one where he is unsure of its constancy are two different universes with different predictions. In which universe - the one where the property of the earth inside the bunny is "always solid", or the one where it is "not always solid" - is he less likely to fall into a pit?   *** Let's ground this with an example from mathematics. For many years I held the narrative "mathematics sucks", and all that time I was making the standard mind-projection error: the dislike seemed like an unchangeable property inherent to mathematics itself. By "property" I mean a stable pattern. "Properties" are an abstraction, and few people can define the word without preparation, yet out of habit they use it to auto-complete the next word in speech. The word "sucks" reflected my constant feelings of rejection and avoidance of everything related to mathematics. At the cellular level, what prediction can be obtained from the statement "mathematics sucks"? That when observing thoughts or things in the world that fall into my category of "mathematics", I will notice a certain sensation associated with avoidance and rejection - in short, stress (an unpleasant sensation in the head). This is surely connected to some changes in physiology or neural processes that the word "unpleasant" should conve
04e7618d-2ce8-406c-a6cc-a92f2d08bdad
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Natural Selection's Speed Limit and Complexity Bound Today's post, Natural Selection's Speed Limit and Complexity Bound, was originally published on 04 November 2007. A summary (taken from the LW wiki):   > Tried to argue mathematically that there could be at most 25MB of meaningful information (or thereabouts) in the human genome, but computer simulations failed to bear out the mathematical argument. It does seem probable that evolution has some kind of speed limit and complexity bound - eminent evolutionary biologists seem to believe it, and in fact the Genome Project discovered only 25,000 genes in the human genome - but this particular math may not be the correct argument. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Evolutions Are Stupid (But Work Anyways), and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
5150e248-9053-415b-8e07-80b22ba6e4fe
trentmkelly/LessWrong-43k
LessWrong
Meetup : Montreal LessWrong Meetup - The Science of Winning at Life Discussion article for the meetup : Montreal LessWrong Meetup - The Science of Winning at Life WHEN: 18 February 2013 06:30:00PM (-0500) WHERE: 655 Avenue du Président Kennedy, Montréal, QC H3A 3H9 Weekly meeting of the Montreal LessWrong Meetup group. We've decided to look into the Science of Winning at Life. You can read the sequence here (http://wiki.lesswrong.com/wiki/The_Science_of_Winning_at_Life) if you're interested. See you there! Discussion article for the meetup : Montreal LessWrong Meetup - The Science of Winning at Life
aee9b0a1-0695-4522-b277-7d32a5a51f37
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"Casey Serin, a 24-year-old web programmer with no prior experience in real estate, owes banks 2.2 million dollars after lying on mortgage applications in order to simultaneously buy eight different houses in different states. He took cash out of the mortgage (applied for larger amounts than the price of the house) and spent the money on living expenses and real-estate seminars. He was expecting the market to go up, it seems. That’s not even the sad part. The sad part is that he still hasn’t given up. Casey Serin does not accept defeat. He refuses to declare bankruptcy, or get a job; he still thinks he can make it big in real estate. He went on spending money on seminars. He tried to take out a mortgage on a ninth house. He hasn’t failed, you see, he’s just had a learning experience. That’s what happens when you refuse to lose hope. While this behavior may seem to be merely stupid, it also puts me in mind of two Nobel-Prize-winning economists . . . . . . namely Merton and Scholes of Long-Term Capital Management. While LTCM raked in giant profits over its first three years, in 1998 the inefficiences that LTCM were exploiting had started to vanish—other people knew about the trick, so it stopped working. LTCM refused to lose hope. Addicted to 40% annual returns, they borrowed more and more leverage to exploit tinier and tinier margins. When everything started to go wrong for LTCM, they had equity of $4.72 billion, leverage of $124.5 billion, and derivative positions of $1.25 trillion. Every profession has a different way to be smart—different skills to learn and rules to follow. You might therefore think that the study of “rationality,” as a general discipline, wouldn’t have much to contribute to real-life success. And yet it seems to me that how to not be stupid has a great deal in common across professions. If you set out to teach someone how to not turn little mistakes into big mistakes, it’s nearly the same art whether in hedge funds or romance, and one of the keys is this: Be ready to admit you lost." The above is a chunk of text from a post created by a user on the website LessWrong. What are some reasons why people may like or agree with what this person is saying? There could be several reasons why people may resonate with this post: 1. Reflection on Failure: The author emphasizes the importance of recognizing and admitting failure, which is a universally applicable lesson. Many people understand the value of learning from mistakes, making this perspective relatable and appealing. 2. Critique of Overconfidence: The author criticizes the overconfidence and refusal to accept reality exhibited by Casey Serin and the economists at LTCM. This could resonate with readers who have observed similar behaviors leading to downfall in various fields. 3. Emphasis on Rationality: The author's focus on rational thinking and decision-making could appeal to those who value logic and reason. The idea that avoiding stupidity is a common factor in success across all professions could be compelling to many. 4. Real-life Examples: The use of real-life examples like Casey Serin and LTCM makes the argument more tangible and impactful. Readers may appreciate this approach as it illustrates the consequences of irrational behavior. 5. Warning against Greed: The author warns against the dangers of greed and chasing unrealistic returns, which could resonate with those who have seen or experienced the negative effects of such behavior. 6. 
Universality of the Message: The message is applicable to a wide range of fields, not just finance or real estate. This broad relevance could make the post appealing to a diverse audience. Could you elaborate on the concept of overconfidence as a common pitfall across different professions, providing more examples and explaining how it can be mitigated? Overconfidence is a cognitive bias where an individual overestimates their own abilities or the precision of their beliefs. It's a common pitfall across many professions and can lead to detrimental outcomes. For instance, in the medical field, overconfidence can lead to misdiagnosis. A doctor may be so sure of their initial assessment that they dismiss other potential diagnoses, leading to incorrect treatment. In the world of finance, overconfidence can result in excessive risk-taking. Traders might overestimate their ability to predict market trends, leading to risky investments that can result in significant losses. In the realm of entrepreneurship, overconfidence can lead to the launch of businesses without proper market research or planning, underestimating the challenges and competition, which often results in failure. In academia, an overconfident researcher might ignore contradictory evidence, leading to flawed conclusions and potentially damaging their credibility. To mitigate overconfidence, several strategies can be employed: 1. Seek Outside Perspectives: Other people may provide valuable insights that you've overlooked. They can challenge your assumptions and help you see things from a different angle. 2. Continuous Learning: Never assume you know everything about your field. Always be open to new information and be willing to adjust your beliefs based on new evidence. 3. Reflective Practice: Regularly reflect on your decisions and the outcomes. This can help you recognize patterns of overconfidence and adjust your behavior accordingly. 4. Encourage a Culture of Constructive Criticism: In a team setting, promoting a culture where team members can question each other's decisions without fear of retribution can help keep overconfidence in check. 5. Use Decision-Making Frameworks: These can help ensure you're considering all relevant factors and not just relying on your intuition or initial assessment. Remember, confidence is crucial for success, but overconfidence can blind you to important information and lead to poor decisions. It's a delicate balance to maintain. Reflect on the role of decision-making frameworks in mitigating overconfidence. How do they ensure a comprehensive evaluation of all relevant factors? Decision-making frameworks play a crucial role in mitigating overconfidence by providing a structured approach to decision-making. They help to ensure that decisions are not made hastily or based solely on intuition, which can often be skewed by overconfidence. Here's how they ensure a comprehensive evaluation of all relevant factors: 1. Systematic Approach: Decision-making frameworks provide a systematic approach to evaluate different aspects of a decision. They guide individuals to consider all relevant factors, including potential risks and benefits, alternatives, and impacts on various stakeholders. 2. Reducing Bias: By requiring a thorough analysis of all factors, these frameworks help to reduce the influence of cognitive biases, including overconfidence. They ensure that decisions are based on evidence and rational analysis rather than just gut feelings or overestimated abilities. 3. 
Encouraging Diverse Perspectives: Many decision-making frameworks encourage seeking input from various stakeholders or team members. This can bring diverse perspectives to the table, challenge overconfident views, and lead to a more balanced decision. 4. Risk Assessment: These frameworks often include a risk assessment component, which can help to counteract the tendency of overconfident individuals to underestimate potential risks. 5. Documentation and Review: Decision-making frameworks often involve documenting the decision-making process. This can be useful for future review and reflection, helping individuals recognize when overconfidence may have influenced their decisions and learn from these instances. Examples of decision-making frameworks include SWOT analysis (Strengths, Weaknesses, Opportunities, Threats), decision trees, cost-benefit analysis, and the OODA loop (Observe, Orient, Decide, Act). Each of these provides a different approach to decision-making, but all of them encourage a thorough and systematic evaluation of the decision at hand. Considering the role of decision-making frameworks in mitigating overconfidence, how might they be effectively integrated into educational curricula to cultivate rational thinking skills from an early age? Integrating decision-making frameworks into educational curricula can be an effective way to cultivate rational thinking skills and mitigate overconfidence from an early age. Here's how it can be done: 1. Lesson Integration: Teachers can incorporate decision-making frameworks into lessons across various subjects. For instance, in a history class, students could use a SWOT analysis to evaluate the decisions made by historical figures. In a science class, they could use decision trees to understand complex processes or phenomena. 2. Real-life Scenarios: Educators can use real-life scenarios to teach these frameworks. This could involve case studies, role-plays, or problem-solving tasks that require students to make decisions based on the information provided. 3. Critical Thinking Exercises: Decision-making frameworks can be used as tools for critical thinking exercises. These exercises can challenge students to question their initial assumptions, consider different perspectives, and make well-reasoned decisions. 4. Group Projects: In group projects, students can be encouraged to use these frameworks to make collective decisions. This not only teaches them the framework but also the importance of collaboration and considering diverse viewpoints. 5. Evaluation and Reflection: After using a decision-making framework, students should be encouraged to reflect on the process. This can help them understand the value of a systematic approach to decision-making and recognize any instances of overconfidence. 6. Skill Development: Schools should emphasize that these are life skills, useful beyond the classroom. This could be done through career guidance sessions, where students learn how these frameworks are used in various professions. 7. Incorporating Technology: Digital tools and apps can be used to make these frameworks more interactive and engaging for students. By incorporating decision-making frameworks into the curriculum, schools can equip students with essential skills for the future, fostering rational thinkers who approach decisions with a balanced, evidence-based perspective.
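As a concrete illustration of one of the frameworks named above, here is a minimal cost-benefit comparison in Python (an editorial sketch; the options, costs, and probabilities are invented). Writing down each option's cost, payoff, and probability of success is one way a framework counteracts the gut-feel optimism that overconfidence produces.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    upfront_cost: float       # what we pay regardless of outcome
    payoff_if_success: float  # gross benefit if the option works out
    p_success: float          # our (ideally well-calibrated) probability of success

    def expected_value(self) -> float:
        return self.p_success * self.payoff_if_success - self.upfront_cost

# Hypothetical choices facing an overconfident founder.
options = [
    Option("launch immediately", upfront_cost=50_000, payoff_if_success=300_000, p_success=0.2),
    Option("run market research first", upfront_cost=20_000, payoff_if_success=250_000, p_success=0.45),
]

for o in sorted(options, key=Option.expected_value, reverse=True):
    print(f"{o.name}: EV = {o.expected_value():,.0f}")
```

A decision tree or SWOT grid serves the same purpose: it forces the factors an overconfident decision-maker would rather skip to be made explicit.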
252273b9-74a7-409a-a9d1-c3ea9510082a
trentmkelly/LessWrong-43k
LessWrong
New organization - Future of Life Institute (FLI) As of May 2014, there is an existential risk research and outreach organization based in the Boston area. The Future of Life Institute (FLI), spearheaded by Max Tegmark, was co-founded by Jaan Tallinn, Meia Chita-Tegmark, Anthony Aguirre and myself. Our idea was to create a hub on the US East Coast to bring together people who care about x-risk and the future of life. FLI is currently run entirely by volunteers, and is based on brainstorming meetings where the members come together and discuss active and potential projects. The attendees are a mix of local scientists, researchers and rationalists, which results in a diversity of skills and ideas. We also hold more narrowly focused meetings where smaller groups work on specific projects. We have projects in the pipeline ranging from improving Wikipedia resources related to x-risk, to bringing together AI researchers in order to develop safety guidelines and make the topic of AI safety more mainstream. Max has assembled an impressive advisory board that includes Stuart Russell, George Church and Stephen Hawking. The advisory board is not just for prestige - the local members attend our meetings, and some others participate in our projects remotely. We consider ourselves a sister organization to FHI, CSER and MIRI, and touch base with them often. We recently held our launch event, a panel discussion "The Future of Technology: Benefits and Risks" at MIT. The panelists were synthetic biologist George Church, geneticist Ting Wu, economist Andrew McAfee, physicist and Nobel laureate Frank Wilczek and Skype co-founder Jaan Tallinn. The discussion covered a broad range of topics from the future of bioengineering and personal genetics, to autonomous weapons, AI ethics and the Singularity. A video and transcript are available. FLI is a grassroots organization that thrives on contributions from awesome people like the LW community - here are some ways you can help: * If you have ideas for research or outreach we could be
b29115d7-c001-4f83-b401-41d4483aaa38
StampyAI/alignment-research-dataset/special_docs
Other
General Purpose Intelligence: Arguing the Orthogonality Thesis
Analysis and Metaphysics
STUART ARMSTRONG, stuart.armstrong@philosophy.ox.ac.uk
Future of Humanity Institute, Oxford Martin School; Philosophy Department, University of Oxford

In his paper “The Superintelligent Will”, Nick Bostrom formalised the Orthogonality thesis: the idea that the final goals and intelligence levels of artificial agents are independent of each other. This paper presents arguments for a (narrower) version of the thesis. It proceeds through three steps. First it shows that superintelligent agents with essentially arbitrary goals can exist in our universe – both as theoretical impractical agents such as AIXI and as physically possible real-world agents. Then it argues that if humans are capable of building human-level artificial intelligences, we can build them with an extremely broad spectrum of goals. Finally it shows that the same result holds for any superintelligent agent we could directly or indirectly build. This result is relevant for arguments about the potential motivations of future agents: knowing an artificial agent is of high intelligence does not allow us to presume that it will be moral, we will need to figure out its goals directly.

Keywords: AI; Artificial Intelligence; efficiency; intelligence; goals; orthogonality

1 The Orthogonality thesis

Scientists and mathematicians are the stereotypical examples of high intelligence humans. But their morality and ethics have been all over the map. On modern political scales, they can be left- (Oppenheimer) or right-wing (von Neumann), and historically they have slotted into most of the political groupings of their period (Galois, Lavoisier). Ethically, they have ranged from very humanitarian (Darwin, Einstein outside of his private life), through amoral (von Braun) to commercially belligerent (Edison) and vindictive (Newton). Few scientists have been put in a position where they could demonstrate genuinely evil behaviour, but there have been a few of those (Teichmüller, Philipp Lenard, Ted Kaczynski, Shirō Ishii). Of course, many scientists have been absolutely conventional in their views and attitudes given the society of their time. But the above examples hint that their ethics are not strongly impacted by their high intelligence; intelligence and ethics seem ‘orthogonal’ (varying independently of each other, to some extent). If we turn to the case of (potential) artificial intelligences we can ask whether that relation continues: would high intelligence go along with certain motivations and goals, or are they unrelated? To avoid the implicit anthropomorphisation in terms such as ‘ethics’, we will be looking at agents’ ‘final goals’ – the ultimate objectives they are aiming for. Then the Orthogonality thesis, due to Nick Bostrom (Bostrom, 2012), states that:

Intelligence and final goals are orthogonal axes along which possible agents can freely vary. In other words, more or less any level of intelligence could in principle be combined with more or less any final goal.

It is analogous to Hume’s thesis about the independence of reason and morality (Hume, 1739), but applied more narrowly, using the normatively thinner concepts ‘intelligence’ and ‘final goals’ rather than ‘reason’ and ‘morality’. But even ‘intelligence’, as generally used, has too many connotations.
A better term would be efficiency, or instrumental rationality, or the ability to effectively solve problems given limited knowledge and resources (Wang, 2011). Nevertheless, we will be sticking with terminology such as ‘intelligent agent’, ‘artificial intelligence’ or ‘superintelligence’, as they are well established, but using them synonymously with ‘efficient agent’, ‘artificial efficiency’ and ‘superefficient algorithm’. The relevant criterion is whether the agent can effectively achieve its goals in general situations, not whether its inner process matches up with a particular definition of what intelligence is.

Thus an artificial intelligence (AI) is an artificial algorithm, deterministic or probabilistic, implemented on some device, that demonstrates an ability to achieve goals in varied and general situations [1]. [Footnote 1: We need to assume it has goals, of course. Determining whether something qualifies as a goal-based agent is very tricky (researcher Owain Evans is trying to establish a rigorous definition), but this paper will adopt the somewhat informal definition that an agent has goals if it achieves similar outcomes from very different starting positions. If the agent ends up making ice cream in any circumstances, we can assume ice creams are in its goals.] We don’t assume that it need be a computer program, or a well laid-out algorithm with clear loops and structures – artificial neural networks or evolved genetic algorithms certainly qualify. A human level AI is defined to be an AI that can successfully accomplish any task at least as well as an average human would (to avoid worrying about robot bodies and such-like, we may restrict the list of tasks to those accomplishable over the internet). Thus we would expect the AI to hold conversations about Paris Hilton’s sex life, to compose ironic limericks, to shop for the best deal on Halloween costumes and to debate the proper role of religion in politics, at least as well as an average human would. A superhuman AI is similarly defined as an AI that would exceed the ability of the best human in all (or almost all) tasks. It would do the best research, write the most successful novels, run companies and motivate employees better than anyone else. In areas where there may not be clear scales (what’s the world’s best artwork?) we would expect a majority of the human population to agree the AI’s work is among the very best.

Nick Bostrom’s paper argued that the Orthogonality thesis does not depend on the Humean theory of motivation, but could still be true under other philosophical theories. It should be immediately apparent that the Orthogonality thesis is related to arguments about moral realism. Despite this, we will not address the fertile and extensive literature on this subject. Firstly, because it is contentious: different schools of philosophical thought have different interpretations of the truth and meaning of moral realism, disputes that cannot be currently resolved empirically. Since we are looking to resolve a mainly empirical question – what systems of motivations could we actually code into a putative AI – this theoretical disagreement is highly problematic. Secondly, we hope that by approaching the issue from the computational perspective, we can help shed new light on these issues. After all, we do not expect the trigger mechanism of a cruise missile to block detonation simply because people will die – but would an “ultra-smart bomb” behave the same way?
By exploring the goals of artificial systems up to higher levels of efficiency, we may contribute to seeing which kinds of agents are susceptible to moral realism arguments, and which are not. Thus this paper will content itself with presenting direct arguments for the Orthogonality thesis. We will assume throughout that human level AIs (or at least human comparable AIs) are possible (if not, the thesis is void of useful content). We will also take the position that humans themselves can be viewed as non-deterministic algorithms [2]: this is not vital to the paper, but is useful for comparison of goals between various types of agents. [Footnote 2: Since every law of nature is algorithmic (with some probabilistic process of known odds), and no exceptions to these laws are known, neither for human nor non-human processes.] We will do the same with entities such as committees of humans, institutions or corporations, if these can be considered to be acting in an agent-like way.

The thesis itself might be critiqued for over-obviousness or triviality – a moral anti-realist, for instance, could find it too evident to need defending. Nevertheless, the argument that AIs – or indeed, any sufficiently intelligent being – would necessarily behave morally is a surprisingly common one. A. Kornai, for instance, considers it as a worthwhile starting point for investigations into AI morality (Kornai, 2013). He bases his argument on A. Gewirth’s approach in his book, Reason and Morality (Gewirth, 1978) (the book’s argument can be found in a summarised form in one of E. M. Adams’s papers (Adams, 1980)), in which it is argued that all agents must follow a “Principle of Generic Consistency” that causes them to behave in accordance with all other agents’ generic rights to freedom and well-being. Others have argued that certain specific moralities are attractors in the space of moral systems, towards which any AI will tend if they start off with certain mild constraints (Waser, 2008). Because of these and other examples (and some online criticism of the Orthogonality thesis [3]), we thought the thesis was worth defending explicitly, and that the argument brought out in its favour would be of general interest to the general discussion. [Footnote 3: See for example http://philosophicaldisquisitions.blogspot.co.uk/2012/04/bostrom-on-superintelligence-and.html, which criticises the thesis specifically.]

1.1 Qualifying the Orthogonality thesis

The Orthogonality thesis, taken literally, is false. Some motivations are mathematically incompatible with changes in intelligence (“I want to prove the Gödel statement for the being I would be if I were more intelligent”). Some goals specifically refer to the intelligence of the agent, directly (“I want to be much less efficient!”) or indirectly (“I want to impress people who want me to be much less efficient!”). Though we could make a case that an agent wanting to be less efficient could initially be of any intelligence level, it won’t stay there long, and it’s hard to see how an agent with that goal could have become intelligent in the first place. So we will exclude from consideration those goals that intrinsically refer to the intelligence level of the agent. We will also exclude goals that are so complex or hard to describe that the complexity of the goal becomes crippling for the agent. If the agent’s goal takes five planets’ worth of material to describe, or if it takes the agent twenty years each time it checks what its goal is, then it’s obvious that that agent can’t function as an intelligent being on any reasonable scale. Many have made the point that there is likely to be convergence in instrumental goals (Omohundro, 2008).
Whatever their final goals, it would generally be in any agent’s interest to accumulate more power, to become more intelligent and to be able to cooperate with other agents of similar ability (and to have all the negotiation, threatening and cajoling skills that go along with that cooperation). Note the similarity with what John Rawls called ‘primary goods’ (Rawls, 1971). We will however be focusing exclusively on final goals, as the instrumental goals are merely tools to accomplish these [4]. [Footnote 4: An AI skilled in cooperation would drop this if cooperation no longer served its purpose; similarly, an AI accumulating power and resources would stop doing this if it found better ways of achieving its goals.] Further we will not try to show that intelligence and final goals can vary freely, in any dynamical sense (it could be quite hard to define this). Instead we will look at the thesis as talking about possible states: that there exist agents of all levels of intelligence with any given goals. Since it’s always possible to make an agent stupider or less efficient, what we are really claiming is that there could exist possible high-intelligence agents with any given goal. Thus the restricted Orthogonality thesis that we will be discussing is:

High-intelligence agents can exist having more or less any final goals (as long as these goals are of feasible complexity, and do not refer intrinsically to the agent’s intelligence) [5].

[Footnote 5: Even logically impossible goals can exist: “construct a disproof of Modus Ponens (within classical logic)” is a perfectly fine goal for an intelligence to have – it will quickly realise that this translates to “prove classical logic is inconsistent”, a task mathematicians have occasionally attempted.]

We will be looking at two variations of the “can exist” clause: whether the agent can exist in theory, and whether we could build such an agent (given that we could build an AI at all). Though evidence will be presented directly for this thesis in the theoretic agent case, the results of this paper cannot be considered to “prove” the thesis for agents we could build (though they certainly raise its likelihood). In that case, we will be looking at proving a still weaker thesis:

The fact of being of high intelligence provides extremely little constraint on what final goals an agent could have (as long as these goals are of feasible complexity, and do not refer intrinsically to the agent’s intelligence).

That thesis still has nearly all the relevant practical implications that the strong Orthogonality thesis does.

1.2 Orthogonality in practice for AI designers

The arguments presented in this paper are all theoretical. They posit that AIs with certain goals either ‘can exist’, or that ‘if we could build an AI, we could build one with any goal’. In practice, the first AIs, if and when they are created, will be assembled by a specific team, using specific methods, and with specific goals in mind. They may be more or less successful at inculcating the goals into the AI (or, as is common in computer programming, they may inculcate the goals exactly, only to realise later that these weren’t the goals they really wanted).
The AI may be trained by interacting with certain humans in certain situations, or by understanding certain ethical principles, or by a myriad of other possible methods, which will likely focus on a narrow target in the space of goals. The relevance of the Orthogonality thesis for AI designers is therefore mainly limited to a warning: that high intelligence and efficiency are not enough to guarantee positive goals, and that they thus need to work carefully to inculcate the goals they value into the AI.

2 Orthogonality for theoretic agents

If we were to step back for a moment and consider, in our mind’s eyes, the space of every possible algorithm, peering into their goal systems and teasing out some measure of their relative intelligences, would we expect the Orthogonality thesis to hold? Since we are not worrying about practicality or constructability, all that we would require is that for any given goal system (within the few constraints enumerated above), there exists a theoretically implementable algorithm of high intelligence. Any measurable goal [6] can be paired up with a reward signal: an agent gets a reward for achieving states of the world desired by the goal, and denied these rewards for actions that fail to do so. [Footnote 6: Measuring a goal brings up subtle issues with the symbol grounding problem and similar problems. We’ll ignore these issues in the present paper.] Among reward signal maximisers, the AIXI is the theoretically best agent there is, more successful at reaching its goals (up to a finite constant) than any other agent (Hutter, 2005). AIXI itself is incomputable, but there are computable variants such as AIXItl or Gödel machines (Schmidhuber, 2007) that approximate AIXI’s efficiency. These methods work for whatever reward signal is plugged into them.
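As a minimal illustration of this goal-to-reward pairing (an editorial sketch, not from the paper; the toy world and the ice-cream goal are invented, echoing footnote 1), the greedy maximiser below works unchanged whichever measurable goal predicate is supplied:

```python
from typing import Callable, Dict

WorldState = Dict[str, int]
Action = str

def make_reward(goal: Callable[[WorldState], bool]) -> Callable[[WorldState], float]:
    """Pair an arbitrary measurable goal with a reward signal."""
    return lambda state: 1.0 if goal(state) else 0.0

def greedy_agent(state: WorldState,
                 actions: Dict[Action, Callable[[WorldState], WorldState]],
                 reward: Callable[[WorldState], float]) -> Action:
    """Pick the action whose predicted successor state earns the most reward.
    The maximising machinery never inspects what the goal 'means'."""
    return max(actions, key=lambda a: reward(actions[a](state)))

# Invented toy goal: 'there are at least 3 ice creams in the world'.
ice_cream_goal = lambda s: s.get("ice_creams", 0) >= 3
actions = {
    "make_ice_cream": lambda s: {**s, "ice_creams": s.get("ice_creams", 0) + 3},
    "do_nothing": lambda s: dict(s),
}
print(greedy_agent({"ice_creams": 0}, actions, make_reward(ice_cream_goal)))  # make_ice_cream
```

Swapping in a different goal predicate changes nothing about the maximising machinery, which is the independence the section is arguing for.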
Thus to deny the Orthogonality thesis is to assert that there is a goal system G, such that, among other things:

1. There cannot exist any efficient real-world algorithm with goal G.
2. If a being with arbitrarily high resources, intelligence, time and goal G were to try to design an efficient real-world algorithm with the same goal, it must fail.
3. If a human society were highly motivated [7] to design an efficient real-world algorithm with goal G, and were given a million years to do so along with huge amounts of resources, training and knowledge about AI, it must fail.
4. If a high-resource human society were highly motivated to achieve the goal G, then it could not do so (here the human society itself is seen as the algorithm).
5. Same as above, for any hypothetical alien societies.
6. There cannot exist any pattern of reinforcement learning that would train a highly efficient real-world intelligence to follow the goal G.
7. There cannot exist any evolutionary or environmental pressures that would evolve highly efficient real-world intelligences following goal G.

All of these seem extraordinarily strong claims to make! The last claims all derive from the first, and merely serve to illustrate how strong the first claim actually is. Claim 4, in particular, seems to run counter to everything we know about human nature.

Footnote 7: A motivation might simply be a threat: some truthful powerful being saying "Design an algorithm with goal G. If you succeed, I will give you great goods; if you fail, I will destroy you all. The algorithm will never be used in practice, so there are no moral objections to it being designed."

3 Orthogonality for human-level AIs

Of course, even if efficient agents could exist for all these goals, that doesn't mean that we could ever build them, even if we could build AIs. In this section, we'll look at the grounds for assuming the Orthogonality thesis holds for human-level agents. Since intelligence isn't varying much, the thesis becomes simply:

If we could construct human-level AIs at all, then there is extremely little constraint on the final goals that such AIs could have (as long as these goals are of feasible complexity, and do not refer intrinsically to the agent's intelligence).

So, is this true? The arguments in this section are generally independent of each other, and can be summarised as:

1. Some possible AI designs have orthogonality built right into them.
2. AI goals can reach the span of human goals, which is large.
3. Algorithms can be combined to generate an AI with any easily measurable goal.
4. Various algorithmic modifications can be used to further expand the space of possible goals, if needed.

3.1 Utility functions

One classical picture of a rational agent is of an agent with a specific utility function, which it will then act to maximise in expectation. This picture encapsulates the Orthogonality thesis: whatever the utility function, the rational agent will attempt to maximise it, using the same approaches in all cases (planning, analysing input data, computing expected results). If an AI is built according to this model, with the utility function being prescriptive (given to the AI in a program) rather than descriptive (an abstract formalisation of an agent's other preferences), then the thesis would be trivially true: we could simply substitute the utility function for whichever one we desired. However, many putative agent designs are not utility-function based, such as neural networks, genetic algorithms, or humans.
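To see how orthogonality is "built right in" to the prescriptive utility-function picture, here is a small sketch (my own illustration under simplifying assumptions: actions are modelled as explicit lotteries over named outcomes, and the two utility functions are invented examples). The decision procedure never changes; only the utility function is swapped.

```python
from typing import Callable, Dict, List, Tuple

# A "lottery" is a list of (probability, outcome) pairs for one action.
Lottery = List[Tuple[float, str]]
Utility = Callable[[str], float]

def expected_utility(lottery: Lottery, u: Utility) -> float:
    return sum(p * u(outcome) for p, outcome in lottery)

def choose(actions: Dict[str, Lottery], u: Utility) -> str:
    """The planning machinery is identical whatever u is: it just picks
    the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a], u))

actions = {
    "donate": [(0.9, "lives_saved"), (0.1, "nothing")],
    "hoard":  [(1.0, "more_paperclip_wire")],
}

human_friendly = lambda o: {"lives_saved": 10.0}.get(o, 0.0)
paperclipper   = lambda o: {"more_paperclip_wire": 10.0}.get(o, 0.0)

print(choose(actions, human_friendly))  # -> "donate"
print(choose(actions, paperclipper))    # -> "hoard"
```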
So from now on we will consider that our agents are not expected utility maximisers with clear and separate utility functions, and look at proving Orthogonality in these harder circumstances.

3.2 The span of human motivations

It seems a reasonable assumption that if there exists a human being with particular goals, and we can program an AI, then we can construct a human-level AI with similar goals. This is immediately the case if the AI were a whole brain emulation/upload (Sandberg & Bostrom, 2008), a digital copy of a specific human mind. Even for more general agents, such as evolved agents, this remains a reasonable thesis. For a start, we know that real-world evolution has produced us, so constructing human-like agents that way is certainly possible. Human minds remain our only real model of general intelligence, and this strongly directs and informs our AI designs, which are likely to be as human-similar as we can make them. Similarly, human goals are the easiest goals for us to understand, hence the easiest to try to implement in AI. Hence it seems likely that we could implement most human goals in the first generation of human-level AIs.

So how wide is the space of human motivations? [Footnote 8: One could argue that we should consider the space of general animal intelligences – octopuses, supercolonies of social insects, etc. – but the methods described can already produce these animals' types of behaviours.] Our race spans foot-fetishists, religious saints, serial killers, instinctive accountants, role-players, self-cannibals, firefighters and conceptual artists. The autistic, those with exceptional social skills, the obsessive-compulsive and some with split brains. Beings of great empathy and the many who used to enjoy torture and executions as public spectacles. [Footnote 9: Even today, many people have had great fun torturing and abusing their characters in games like "The Sims" (http://meodia.com/article/281/sadistic-ways-people-torture-their-sims/). The same urges are present, albeit diverted to fictionalised settings. Indeed, games offer a wide variety of different goals that could conceivably be imported into an AI if it were possible to erase the reality/fiction distinction in its motivation.] It is evident that the space of possible human motivations is vast. [Footnote 10: As can be shown by a glance through a biography of famous people – and famous means they were generally allowed to rise to prominence in their own society, so the space of possible motivations was already cut down.] For any desire, any particular goal, no matter how niche [Footnote 11: Of course, if we built an AI with that goal and copied it millions of times, it would no longer be niche.], pathological, bizarre or extreme, as long as there is a single human who ever had it, we could build and run an AI with the same goal.

But with AIs we can go even further. We could take any of these goals as a starting point, make them malleable (as goals are in humans), and push them further out. We could provide the AIs with specific reinforcements to push their goals in extreme directions (reward the saint for ever-more saintly behaviour). If the agents are fast enough, we could run whole societies of them with huge varieties of evolutionary or social pressures, to further explore the goal-space. We may also be able to do surgery directly on their goals, to introduce yet more variety.
For example, we could take a dedicated utilitarian charity worker obsessed with saving lives in poorer countries (but who doesn't interact, or want to interact, directly with those saved), and replace 'saving lives' with 'maximising the number of paperclips in the universe' or any similar abstract goal. This is more speculative, of course – but there are other ways of getting similar results.

3.3 Instrumental goals as final goals

If someone were to hold a gun to your head, they could make you do almost anything. Certainly there are people who, with a gun at their head, would be willing to do almost anything. A distinction is generally made between instrumental goals and final goals, with the former being seen as simply paths to the latter, and interchangeable with other plausible paths. The gun to your head disrupts the balance: your final goal is simply not to get shot, while your instrumental goals become what the gun holder wants them to be, and you put a great amount of effort into accomplishing the minute details of these instrumental goals. Note that the gun has not changed your level of intelligence or ability.

This is relevant because instrumental goals seem to be far more varied in humans than final goals. One can have instrumental goals of filing papers, solving equations, walking dogs, making money, pushing buttons in various sequences, opening doors, enhancing shareholder value, assembling cars, bombing villages or putting sharks into tanks. Or simply doing whatever the guy with the gun at our head orders us to do. If we could accept human instrumental goals as AI final goals, we would extend the space of goals quite dramatically.

To do so we would want to put the threatened agent, and the gun wielder, together into the same AI. Algorithmically there is nothing extraordinary about this: certain subroutines have certain behaviours depending on the outputs of other subroutines. The 'gun wielder' need not be particularly intelligent: it simply needs to be able to establish whether its goals are being met. If for instance those goals are given by a utility function, then all that is required is an automated system that measures progress towards increasing utility and punishes (or erases) the rest of the AI if that progress is not made. The 'rest of the AI' is just required to be a human-level AI which would be susceptible to this kind of pressure. Note that we do not require that it even be close to human in any way, simply that it place the highest value on self-preservation (or on some similar small goal that the 'gun wielder' would have power over).

For humans, another similar model is that of a job in a corporation or bureaucracy: in order to obtain the money required for their final goals, some humans are willing to perform extreme tasks (organising the logistics of genocides, weapon design, writing long emotional press releases they don't agree with at all). Again, if the corporation–employee relationship can be captured in a single algorithm, this would generate an intelligent AI whose goal is anything measurable by the 'corporation'. The 'money' could simply be an internal reward channel, perfectly aligning the incentives. If the subagent is anything like a human, it would quickly integrate the other goals into its own motivation [Footnote 12: Such as hostages suffering from Stockholm syndrome (de Fabrique, Romano, Vecchi, & van Hasselt, 2007).], removing the need for the gun wielder/corporation part of the algorithm.
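A rough sketch of this 'gun wielder' construction follows (again my own illustration, with invented interfaces; a real system would be far more involved). The overseer only needs to measure progress on a goal G and hold the power of erasure; the capable sub-agent only needs to value its own continued existence.

```python
from typing import Callable

class CapableAgent:
    """Stand-in for the human-level AI; its only terminal concern here is
    not being erased, so it will work on whatever task keeps it running."""
    def __init__(self):
        self.alive = True
    def work_on(self, task: str, state: dict) -> dict:
        # Placeholder: apply whatever competence the agent has to the task.
        state[task] = state.get(task, 0) + 1
        return state

class GunWielder:
    """The unintelligent overseer: it cannot plan, it can only measure
    progress on goal G and erase the capable agent if progress stalls."""
    def __init__(self, measure_G: Callable[[dict], float]):
        self.measure_G = measure_G
        self.best = float("-inf")
    def enforce(self, agent: CapableAgent, state: dict) -> None:
        score = self.measure_G(state)
        if score <= self.best:      # no progress on G
            agent.alive = False     # the 'gun' goes off
        self.best = max(self.best, score)

def combined_ai(measure_G: Callable[[dict], float], task: str, steps: int = 10) -> dict:
    """The whole system (agent + overseer) behaves as one AI whose goal is G."""
    agent, overseer, state = CapableAgent(), GunWielder(measure_G), {}
    for _ in range(steps):
        if not agent.alive:
            break
        state = agent.work_on(task, state)
        overseer.enforce(agent, state)
    return state
```

Usage would be something like `combined_ai(lambda s: s.get("paperclips", 0), task="paperclips")`: whatever measurable G is supplied, the combined system acts as an agent pursuing G.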
3.4 Noise, anti-agents and goal combination

There are further ways of extending the space of goals we could implement in human-level AIs. One simple way is to introduce noise: flip a few bits and subroutines, add bugs, and get a new agent. Of course, this is likely to cause the agent's intelligence to decrease somewhat, but we have generated new goals. Then, if appropriate, we could use evolution or other improvements to raise the agent's intelligence again; this will likely undo some, but not all, of the effect of the noise. Or we could use some of the tricks above to make a smarter agent implement the goals of the noise-modified agent.

A more extreme example would be to create an anti-agent: an agent whose single goal is to stymie the plans and goals of a single given agent. This already happens with vengeful humans, and we would just need to dial it up: have an anti-agent that would do all it can to counter the goals of a given agent, even if that agent doesn't exist ("I don't care that you're dead, I'm still going to despoil your country, because that's what you'd have wanted me not to do"). This further extends the space of possible goals.

Different agents with different goals can also be combined into a single algorithm. With some algorithmic method for the AIs to negotiate their combined objective and balance the relative importance of their goals, this procedure would construct a single AI with a combined goal system. There would likely be no drop in intelligence/efficiency: committees of two can work very well towards their common goals, especially if there is some automatic penalty for disagreements.
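Phrased in terms of utility functions, the anti-agent and goal-combination devices are almost one-liners. A hedged sketch (my own construction; the fixed weights and the simple disagreement penalty are stand-ins for whatever negotiation procedure the component agents would actually use):

```python
from typing import Callable

Utility = Callable[[str], float]

def anti_agent(u: Utility) -> Utility:
    """An agent whose single goal is to stymie another agent's goals:
    it simply values every outcome in reverse."""
    return lambda outcome: -u(outcome)

def combine(u1: Utility, u2: Utility, w1: float = 0.5, w2: float = 0.5,
            disagreement_penalty: float = 0.1) -> Utility:
    """Crude 'negotiated' combination of two goal systems: a weighted sum,
    minus a penalty whenever the two sub-agents pull in opposite directions."""
    def u(outcome: str) -> float:
        a, b = u1(outcome), u2(outcome)
        penalty = disagreement_penalty if a * b < 0 else 0.0
        return w1 * a + w2 * b - penalty
    return u
```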
3.5 Further tricks up the sleeve

This section started by emphasising the wide space of human goals, and then introduced tricks to push goal systems further beyond these boundaries. The list isn't exhaustive: there are surely more devices and ideas one can use to continue to extend the space of possible goals for human-level AIs. Though this might not be enough to get every goal, we can nearly certainly use these procedures to construct a human-level AI with any human-comprehensible goal. But would the same be true for superhuman AIs?

4 Orthogonality for superhuman AIs

We now come to the area where the Orthogonality thesis seems the most vulnerable. It is one thing to have human-level AIs, or abstract superintelligent algorithms created ex nihilo, with certain goals. But if ever the human race were to design a superintelligent AI, there would be some sort of process involved – directed evolution, recursive self-improvement [Footnote 13: See for instance E. Yudkowsky's design "General Intelligence and Seed AI 2.3", http://singinst.org/ourresearch/publications/GISAI/], design by a committee of AIs, or similar – and it seems at least possible that such a process could fail to fully explore the goal-space. The Orthogonality thesis in this context is:

If we could construct superintelligent AIs at all, then there is extremely little constraint on the final goals that such AIs could have (as long as these goals are of feasible complexity, and do not refer intrinsically to the agent's intelligence).

There are two counter-theses. The weakest claim is:

Incompleteness: there are large categories of goals that no superintelligence designed by us could have.

A stronger claim is:

Convergence: all human-designed superintelligences would have one of a small set of goals. Here 'small' means 'smaller than the space of current human motivations', and thus very small in comparison with the space of possible AI goals.

They should be distinguished; Incompleteness is all that is needed to contradict Orthogonality, but Convergence is often the issue being discussed. Often Convergence is stated in terms of a particular model of metaethics, to which it is assumed all agents will converge (see some of the references in the introduction, or various online texts and arguments [Footnote 14: Such as J. Müller's "Ethics, risks and opportunities of superintelligences", http://www.jonatasmuller.com/superintelligences.pdf]).

4.1 No convergence

The plausibility of the convergence thesis is highly connected with the connotations of the terms used in it. "All human-designed rational beings would follow the same morality (or one of a small set of moralities)" sounds plausible; in contrast, "all human-designed superefficient algorithms would accomplish the same task" seems ridiculous. To quote an online commentator: how good at playing chess would a chess computer have to be before it started feeding the hungry? Similarly, if there were such a convergence, then all self-improving or constructed superintelligences must fall prey to it, even if they were actively seeking to avoid it. After all, the self-improving lower-level AIs or the designers have certain goals in mind (as we've seen in the previous section, if the designers are AIs themselves, they could have potentially any goals in mind). Obviously, they would be less likely to achieve their goals if these goals were to change as they got more intelligent (Omohundro, 2008) (see also N. Bostrom's forthcoming book Superintelligence: Groundwork to a Strategic Analysis of the Machine Intelligence Revolution). The same goes if the superintelligent AI they designed didn't share these goals. Hence the AI designers will be actively trying to prevent such a convergence, if they suspect that one is likely to happen. If for instance their goals were immoral, they would program their AI not to care about morality; they would use every trick up their sleeves to prevent the AI's goals from drifting from their own.

So the convergence thesis requires that for the vast majority of goals G:

1. It is possible for a superintelligence to exist with goal G (by section 2).
2. There exists an entity with goal G (by section 3), capable of building a superintelligent AI.
3. Yet any attempt of that entity to build a superintelligent AI with goal G will be a failure, and the superintelligence's goals will converge on some other goal.
4. This is true even if the entity is aware of the convergence and explicitly attempts to avoid it.
5. If the superintelligence were to be constructed by successive self-improvement, then an entity with goal G operating on itself to boost its intelligence is unable to do so in a way that would preserve goal G.

This makes the convergence thesis very unlikely. The argument also works against the incompleteness thesis, but in a weaker fashion: it seems more plausible that some types of goals would be unreachable, despite being theoretically possible.

There is another interesting aspect of the convergence thesis: these goals G are to emerge, somehow, without being aimed for or desired. If one accepts that goals aimed for will not be reached, one has to ask why convergence is assumed: why not divergence? Why not assume that though G is aimed for, random accidents or faulty implementation will lead to the AI ending up with one of a much wider array of possible goals, rather than a much narrower one?
We won’t delve deeper into this, and simply make the point that “superintelligent AIs won’t have the goals we want them to have” is therefore not an ar gument in favour of the convergence thesis. 4.2 Oracles show the way If the Orthogonality thesis is wrong, then it implies that Oracles are impossible to build. An Oracle is a superintelligent AI that accurately answers human questions about the world , such as the likely consequences of certain policies and decisions (Armstrong, Sandberg, & Bostrom, 17 2012)15. If such an Oracle could be built, then we could attach it to a human -level AI with goal G. The human -level AI could then ask the Oracle what the results of different decisions actions could be, and choose the action that best accomplishes G. In this way, the combined system would be a superintelligent AI with goal G. What makes the “no Oracle” implication even more counterintuit ive is that any superintelligence must be able to look ahead, design actions, predict the consequences of its actions, and choose the best one available. But the convergence and indifference theses imply that this general skill is one that we can make avai lable only to AIs with certain specific goals. Though agents with those specific goals are capable of doing effective predictions, they automatically lose this ability if their goals were to change. 4.3 Tricking the controller Just as with human -level AIs, one could construct a superintelligent AI by wedding together a superintelligence with a large motivated committee of human -level AIs dedicated to implementing a goal G, and checking the superintelligence’s actions. Thus to de ny the Orthogonality thesis requires that one believes that the superintelligence is always capable of tricking this committee, no matter how detaile d and thorough their oversight. This argument extend s the Orthogonality thesis to moderately superintellige nt AIs, or to any situation where there ’s a diminishing return to intelligence. It only fails if we take AI to be fantastically superhuman: capable of tricking or seducing any collection of human -level beings. 4.4 Temporary f ragments of algorithms , fictional worlds and extra tricks These are other tricks that can be used to create an AI with any goals. For any superintelligent AI, there are certain inputs that will make it behave in certain ways. For instance, a human -loving moral AI could be compel led 15 Not to be confused with the concept of Oracle in computer science, which is either an abstract machine capable of instantaneous computation s in various complexity classes, or mechanism in software testing. 18 to follow most goals G for a day, if they were rewarded with something sufficiently positive afterwards. But its actions for that one day are the result of a series of inputs to a particular algorithm; if we turned off the AI after that day, we would h ave accomplished moves towards goal G without having to reward its “true” goals at all. And then we could continue th e trick the next day with another copy. For this to fail, it has to be the case that we can create an algorithm which will perform certain actions on certain inputs as long as it isn’t turned off afterwards , but that we cannot create an algorithm that does the same thing if it was to be turned off. Another alternative is to create a superintelligent AI that has goals in a fictional world (suc h as a game or a reward channel) over which we have control. 
4.3 Tricking the controller

Just as with human-level AIs, one could construct a superintelligent AI by wedding together a superintelligence with a large motivated committee of human-level AIs dedicated to implementing a goal G, and checking the superintelligence's actions. Thus to deny the Orthogonality thesis requires that one believes the superintelligence is always capable of tricking this committee, no matter how detailed and thorough their oversight. This argument extends the Orthogonality thesis to moderately superintelligent AIs, or to any situation where there's a diminishing return to intelligence. It only fails if we take the AI to be fantastically superhuman: capable of tricking or seducing any collection of human-level beings.

4.4 Temporary fragments of algorithms, fictional worlds and extra tricks

There are other tricks that can be used to create an AI with any goals. For any superintelligent AI, there are certain inputs that will make it behave in certain ways. For instance, a human-loving moral AI could be compelled to follow most goals G for a day, if it were rewarded with something sufficiently positive afterwards. But its actions for that one day are the result of a series of inputs to a particular algorithm; if we turned off the AI after that day, we would have accomplished moves towards goal G without having to reward its "true" goals at all. And then we could continue the trick the next day with another copy. For this to fail, it has to be the case that we can create an algorithm which will perform certain actions on certain inputs as long as it isn't turned off afterwards, but that we cannot create an algorithm that does the same thing if it was to be turned off.

Another alternative is to create a superintelligent AI that has goals in a fictional world (such as a game or a reward channel) over which we have control. Then we could trade interventions in the fictional world against advice in the real world towards whichever goals we desire. [Footnote 16: Another possibility, for those who believe AIs above a certain level of intelligence must converge in their motivations, is to have a society of AIs below this level. If the AIs are closely linked, this could be referred to as a superorganism. Then the whole superorganism could be set up to have any particular goal and yet have high intelligence/efficiency. See http://lesswrong.com/r/discussion/lw/gzl/amending_the_general_pupose_intelligence_arguing/ for more details.]

These two arguments may feel weaker than the ones before: they are tricks that may or may not work, depending on the details of the AI's setup. But to deny the Orthogonality thesis requires not only denying that these tricks would ever work, but denying that any tricks or methods that we (or any human-level AIs) could think up would ever work at controlling the AIs. We need to assume superintelligent AIs cannot be controlled in any way that anyone could think of.

4.5 In summary

Denying the Orthogonality thesis thus requires that:

1. There are goals G, such that an entity with goal G cannot build a superintelligence with the same goal. This despite the fact that the entity can build a superintelligence, and that a superintelligence with goal G can exist.
2. Goal G cannot arise accidentally from some other origin, and errors and ambiguities do not significantly broaden the space of possible goals.
3. Oracles and general purpose planners cannot be built. Superintelligent AIs cannot have their planning abilities repurposed.
4. A superintelligence will always be able to trick its overseers, no matter how careful and cunning they are.
5. Though we can create an algorithm that does certain actions if it is not to be turned off afterwards, we cannot create an algorithm that does the same thing if it is to be turned off afterwards.
6. An AI will always come to care intrinsically about things in the real world.
7. No tricks can be thought up to successfully constrain the AI's goals: superintelligent AIs simply cannot be controlled.

5 Conclusion

It is not enough to know that an agent is intelligent (or superintelligent). If we want to know something about its final goals, about the actions it will be willing to undertake to achieve them, and hence its ultimate impact on the world, there are no shortcuts. We have to directly figure out what these goals are (or figure out a way of programming them in), and cannot rely on the agent being moral just because it is superintelligent/superefficient.

6 Acknowledgements

It gives me great pleasure to acknowledge the help and support of Anders Sandberg, Nick Bostrom, Toby Ord, Diego Caleiro, Owain Evans, Daniel Dewey, Eliezer Yudkowsky, Vladimir Slepnev, Viliam Bur, Matt Freeman, Wei Dai, Will Newsome, Paul Crowley, Mao Shan, Alexander Kruel, Steve Rayhawk, Tim Tyler, John Nicholas, Ben Hoskin and Rasmus Eide, as well as those members of the Less Wrong online community going by the names shminux and Dmytry. The work was funded by the Future of Humanity Institute (FHI), in the Department of Philosophy of Oxford University. The FHI is part of the Oxford Martin School.

7 Notes and References

Adams, E. M. (1980). Gewirth on Reason and Morality. The Review of Metaphysics, 33(3), 579-592.
Armstrong, S., Sandberg, A., & Bostrom, N. (2012). Thinking Inside the Box: Controlling and Using an Oracle AI. Minds and Machines, 22(4).
Bostrom, N. (2012). The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. Minds and Machines, 22(2), 71-85.
de Fabrique, N., Romano, S. J., Vecchi, G. M., & van Hasselt, V. B. (2007). Understanding Stockholm Syndrome. FBI Law Enforcement Bulletin (Law Enforcement Communication Unit), 76(7), 10-15.
Gewirth, A. (1978). Reason and Morality. University of Chicago Press.
Hume, D. (1739). A Treatise of Human Nature.
Hutter, M. (2005). Universal algorithmic intelligence: A mathematical top-down approach. In B. Goertzel & C. Pennachin (Eds.), Artificial General Intelligence. Springer-Verlag.
Kornai, A. (2013). Bounding the impact of AGI. Oxford 2012 Winter Intelligence conference on AGI. Oxford.
Laplace, P.-S. (1814). Essai philosophique sur les probabilités.
Omohundro, S. M. (2008). The Basic AI Drives. In P. Wang, B. Goertzel, & S. Franklin (Eds.), Artificial General Intelligence: Proceedings of the First AGI Conference (Vol. 171).
Rawls, J. (1971). A Theory of Justice. Harvard University Press.
Sandberg, A., & Bostrom, N. (2008). Whole brain emulation: A roadmap. Future of Humanity Institute Technical report, 2008-3.
Schmidhuber, J. (2007). Gödel machines: Fully self-referential optimal universal self-improvers. In Artificial General Intelligence. Springer.
Wang, P. (2011). The assumptions on knowledge and resources in models of rationality. International Journal of Machine Consciousness, 3(1), 193-218.
Waser, M. R. (2008). Discovering the foundations of a universal system of ethics as a road to safe artificial intelligence. Biologically inspired cognitive architectures: Papers from the AAAI fall symposium (pp. 195-200).
864d19f3-d163-4d7c-a854-e7b444efe4de
trentmkelly/LessWrong-43k
LessWrong
AI Strategy Updates that You Should Make or: "Things That I Keep Telling AI People Around Me, So I'm Just Writing A Big Post About Them." I think it's really valuable for people to think about AI strategy, even if that's not their main job, since it is useful for informing what their main job should be. There will be a later post titled something like "Emotional Updates on AI that You Should Make". This is not that. Things are changing fast enough that I want to get one or the other out quickly, and besides, emotion should follow belief, so this one is coming first. [Epistemic status: I expect most of the things I predict here to turn out correctly since most of the reasoning feels pretty straightforward, and I mark especially speculative things as such.] A good thing to keep in mind: if you can predict that future-you will make a certain update, just make it now instead.
Models
o3
If you've made it to this post, you've probably heard about o3. Nonetheless, here's the summary. Besides just being particularly good at math and [whatever-it-is that ARC-AGI measures], the coding benchmarks, including the fact that o3 gets higher elo on codeforces than all but one OpenAI employee, indicate that we're solidly approaching the beginning of Recursive Self-Improvement. It's very expensive to run at this scale, but it is clearly going to get cheaper (see Updates from Models).
Deepseek-v3
Reportedly, it took $5.5 million to train an LLM that performs on par with the ~May 2024 frontier. Anecdotally, it seems like it's worse than Claude 3.6 Sonnet, but not by a huge amount. It feels like it's in a similar tier, but not quite reaching the same level. The benchmarks turn out pretty similar to 3.6. It's also very cheap to run, compared to other models at this level, in large part due to its incredibly small size for a frontier model. The rest is due to hardware optimizations that Deepseek performed. Notably, it's open source. (EDIT: a reader pointed out that the total costs were likely much higher than this, account
5d136045-7b96-47e1-aef6-cdfa6ddc54e0
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Is there any serious attempt to create a system to figure out the CEV of humanity and if not, why haven't we started yet? Hey, fellow people, I'm fairly new to LessWrong and if this question is irrelevant I apologise for that; however, I was wondering whether any serious attempts to create a system for mapping out the CEV of humanity have been started yet? Since the CEV is more of a democratic process than any other alignment system I can think of, it would make sense to try to create a database for the purpose of a future AGI to calibrate itself on. If we were to create the database now we could explore if it suited our needs through trial and error (testing whether it would predict moral decisions), which would mean that we would get a more functional alignment system than we could otherwise get. Also as a follow up, if we were to create a central authority for creating this database, there's a possibility that the authority could become a central alignment checking facility, meaning that we could avoid potential misalignment disasters. There are therefore quite clear reasons for me why it would be a good idea to start this project, which is why I'm wondering if there are any such plans. Thank you for your time.
004f71a9-27de-405b-b460-62a3341b8902
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"Disclaimer: I’ve never held a job for more than a year [1] or been paid more than $15 an hour. Take everything I say with a grain of salt.Many of my peers seem to make career plans like by asking things likeWhat am I interested in, or what do I like doing? andHow can I do something related to that?which might lead to some of the following:A person who likes dancing tries to work in the arts industry.A person who likes video games tries to get into game design.A person who is interested in healthcare policy tries to study or design healthcare policy.The problem here, in terms of diminished performance, happiness, and satisfaction, is a conflation of the topic and the content. The topic is not the content!The topicIn my schema, the topic is what the work is about. If you’re the manager of a pillow company, the topic is pillows. If you’re defending accused criminals in court, the topic is criminal law.A lot of folks, it seems to me, focus a lot on the topic when deciding which subjects to study or which jobs to apply for. Someone who is interested in physics might major in physics. Someone who loves to work out might try to become a personal trainer.I don’t think this makes much sense. Should people ignore what they’re interested in and like to do, then? Well, maybe. 80,000 Hours, perhaps the single best career planning resource out there, writes thatThe bottom lineTo find a dream job, look for:Work you’re good at,Work that helps others,Supportive conditions: engaging work that lets you enter a state of flow; supportive colleagues; lack of major negatives like unfair pay; and work that fits your personal life.Ok, but the term “engaging work” is doing a lot of work here (no pun intended), and seems awfully synonymous with “work that you like.” So, how do you find work that you like doing? If there’s anything my utterly negligible work experiences has taught me, it’s that it usually makes more sense to focus less on the topic and more on the content.The contentIn my schema, the content is what the work involves doing. If you’re a physics teacher, the topic is physics, but the content is (I assume) some combination of grading papers, making slideshow presentations, lecturing, doing demonstrations, and answering student questions.Say Emma is a physics teacher. Which do you think matters more for Emma’s personal career satisfaction: being interested in physics (the topic), or enjoying grading, lecturing, presenting, and answering questions (the content)? Almost certainly, I think, the latter.Don’t get me wrong, the topic matters too! Even if Emma likes all of these activities, I have no doubt that both she and her students would be better off if Emma were interested in physics. But someone who loves teaching but is indifferent to physics will be better off than someone who loves physics but is indifferent to teaching.The problemOne of the fundamental issues here is that it’s way easier to discern the topic. For instance, the word “physics” in “physics teacher” is served up on a salient silver platter. Obviously, “teaching physics” involves physics. What is less obvious, though, is what “teaching” involves. The verb “teach” isn’t very descriptive—it’s just a placeholding bucket for more substantive actions like “grade papers” and “lecture.”In fact, while I’m virtually certain that teaching physics involves physics, I recognize that my description of the content (grading, lecturing etc.) could be misleading or missing something important. 
In fact, I couldn’t tell you what my high school teachers spent the plurality of their time actually doing.Me, right nowAs my LinkedIn will tell you, I am a newly-minted federal employee and proud member of the economics team in the Office of Policy Analysis in the Department of the Interior (DOI). Now, take a minute to guess the topic and the content of my job. This is exactly what I had to do a few months ago when I (read: my mom) found the job, and I (read: I) decided to, write a cover letter, apply, interview, and accept the offer.Admittedly, guessing the content from my job title is probably a bit harder than usual because DOI’s name isn’t very descriptive (unlike ‘Department of Agriculture) and is affectionately but tellingly referred to as “the Department of Everything Else.” Nonetheless, you can probably infer quite a bit even before heading to Google. Probably something along the lines of “analyzing the economic effects of DOI policies, whatever those are—maybe like nature preserves and stuff?” And you’d basically be right!Now, take a minute to guess the content—what actual activities I do day to day. Am I using a computer? If so, which applications? Am I talking to other people or mostly working on my own? Am I producing some sort of output? If so, what does the generation process look like?These questions are way harder to answer. Even the job description, if I recall correctly, didn’t say anything like “you will be using Microsoft Teams to have 1-3 short daily meetings, produce PowerPoints with a fellow intern, and try to figure out how to get your government-issued laptop with 16 gb of RAM to handle downloading, analyzing, and uploading 5 gb .csv files (spreadsheets) with millions and millions of rows using R” (answer: it’s hard).And my impression is that this pattern holds true more generally. If you currently work, think about what you really spend time doing. Would a smart layperson be able to easily figure this out? I suspect not. For over a year now, my parents and I have been working under one roof. I see them in front of their computers in separate makeshift offices, and can tell they’re working hard, but I have little idea what the hell they’re actually doing on there.I’m pretty sure they write stuff, fine, but there’s a big difference between writing poetry, drafting cease and desist letters, and manually transcribing audio. I’m pretty sure they’re not doing any of these three things, but what does “being a lawyer” actually mean, minute to minute? After 21 years, I should probably ask them.Medicine and meAs another personal anecdote, I am genuinely fascinated by psychopharmacology. From beer and coffee to prescription psychotropics to weird grey market research chemicals, the way that substances impact the raw experience of life is, put simply, very interesting and important. So, as my family has asked me, why don’t I consider medical school or psychopharmacological research?Well, I have considered it, and the answer is no. The topic is fascinating, but I can’t imagine myself studying for the MCAT or being a lab rat. Maybe I’m wrong, but my mind’s eye pictures memorizing lots of anatomy and basic biological information or learning about all the metabolic pathways and common diseases, or, on the research side, pipetting lots of chemicals into test tubes and stuff. If this is anywhere close to accurate, the content is something I’d abhor. 
Reading about the relationship between monoamine receptor activation and world modeling is one thing, but being the person who figures all this out is another.A plea, for the worldI suspect that a mismatch between what people enjoy doing and what their work actually entails is the source of a lot of unhappiness. In part, this is because some jobs suck and the American economy depends on the threat of poverty to operate. In part, though, it’s because people place too much weight on topic and not enough on content when making career decisionsSo, here are some proposals:Replace or supplement “what do you want to be when you grow up” and “what are you interested in” with “what do you want to do when you grow up?” and “what do you like doing?”For job advertisements, describe in granular detail what the work actually involves, minute to minute. Describe the action, not just the end product. Use verbs less like “teach” and more like “lecture” and “grade papers.”ConclusionI shouldn’t be getting on my high horse about all this. At the age of 21, I haven’t exactly had a long, successful career. And, like so many of my posts, I doubt my thesis is original even if it’s correct.For all the thousands of hours we study preparing for tens of thousands of hours working, though, strikingly little is spent trying to determine what kind of career to purse, both for ourselves and for the world. And as 80,000 Hours (named for the number of working hours in a typical career) will tell you, helping others really does depend (to some extent) on doing something you’re good at and can sustain.So, for the tenth time, the topic is not the content. Pay attention to both, but focus on the latter." The above is a chunk of text from a post created by a user on the website LessWrong. Explain this as if I never finished highschool? Sure, let's break it down: 1. The author suggests that when people choose their careers, they often focus on the subject or "topic" they're interested in. For example, if someone likes playing video games, they might aim for a career in game design. 2. However, the author argues that it's more important to consider the actual tasks or "content" of the job. Using the game design example, the "content" might include coding, testing for bugs, and meeting strict deadlines. Even if someone loves video games, they might not enjoy these tasks. 3. The author believes that many people end up unhappy in their jobs because they chose based on the "topic" (what the job is about) rather than the "content" (what they actually spend their time doing). 4. The author suggests that when choosing a career, people should ask themselves "what do I like doing?" rather than "what am I interested in?". They also propose that job advertisements should clearly describe the day-to-day tasks involved in the job, so people can make more informed decisions. 5. Finally, the author emphasizes that while it's good to be interested in the subject of your job, it's more important to enjoy the tasks you'll be doing. This is because you'll spend a lot more time doing those tasks than thinking about the subject. What strategies could be employed to better educate individuals about the difference between the "topic" and "content" of a job, and how might these strategies be implemented in schools or career counseling programs? There are several strategies that could be used to educate individuals about the difference between the "topic" and "content" of a job: 1. 
**Career Shadowing:** This involves students spending a day or a few days shadowing a professional in their job. This gives students a firsthand experience of the day-to-day tasks involved in a particular job, helping them understand the "content" beyond the "topic". 2. **Guest Speakers:** Inviting professionals from various fields to speak about their daily tasks, challenges, and what they enjoy about their job can provide students with a realistic view of different careers. 3. **Internships and Work Experience:** Providing opportunities for students to undertake internships or work experience placements can give them a practical understanding of the "content" of a job. 4. **Job Simulation Activities:** These activities could be incorporated into career education classes, where students are given tasks that simulate the actual work involved in various jobs. 5. **Detailed Job Descriptions:** Career counseling programs could provide detailed job descriptions that go beyond listing qualifications and skills required, to include a breakdown of the typical tasks and responsibilities involved in the job. 6. **Career Assessment Tools:** These tools can help students understand their interests, skills, and work values, and how these align with different careers. They should be designed to emphasize the importance of considering both the "topic" and "content" of a job. Implementing these strategies in schools or career counseling programs would require collaboration between educators, career counselors, and industry professionals. It may also require adjustments to curriculum and the provision of resources for internships, job shadowing, and guest speaker sessions. It would be essential to train career counselors in these strategies, so they can effectively guide students in their career decision-making process. Considering the strategies discussed, how might they be adapted for online learning environments to ensure students gain a comprehensive understanding of both the "topic" and "content" of potential careers? Adapting these strategies for online learning environments can be quite effective. Here's how: 1. **Virtual Career Shadowing:** Students can shadow professionals virtually. Many companies are open to virtual "ride-alongs" where students can observe professionals through video calls during their workday. 2. **Webinars and Online Guest Speakers:** Professionals from various fields can be invited to speak in live webinars or pre-recorded sessions. These can be interactive, allowing students to ask questions about the professionals' daily tasks, challenges, and what they enjoy about their job. 3. **Virtual Internships and Work Experience:** Many companies now offer virtual internships where students can gain work experience remotely. These provide an opportunity for students to understand the "content" of a job without being physically present in the workplace. 4. **Online Job Simulation Activities:** These can be incorporated into online career education classes. There are also numerous online platforms that offer virtual job simulations for a variety of careers. 5. **Online Career Counseling and Detailed Job Descriptions:** Career counselors can conduct sessions through video calls and provide detailed job descriptions through digital platforms. They can also use online tools to help students understand their interests, skills, and work values. 6. **Online Career Assessment Tools:** These tools can be made available online for students to explore at their own pace. 
They can help students understand how their interests and skills align with different careers, emphasizing the importance of considering both the "topic" and "content" of a job. 7. **Discussion Forums:** Online platforms can host discussion forums where students, educators, and professionals can interact. Students can ask questions and gain insights from professionals about the realities of their jobs. 8. **Virtual Career Fairs:** These events can be hosted online, allowing students to explore a variety of careers, speak with professionals in those fields, and learn about the "content" of those jobs. These adaptations would require the use of digital platforms and tools, as well as a level of digital literacy among students, educators, and professionals. However, they can provide a flexible and accessible way for students to gain a comprehensive understanding of potential careers.
1a5b6f97-f7da-4bfe-853e-6615801816ce
trentmkelly/LessWrong-43k
LessWrong
Off-topic Thread We used to have a monthly off-topic thread for stuff rationalists might like to talk about that really has no bearing on rationality. Here's a new one.   ETA: Original off-topic thread
bbf7300c-5275-4748-91a9-7a49da66f134
trentmkelly/LessWrong-43k
LessWrong
World War Zero In the 1750s, British settlers in America wanted to trade in Ohio Country. The French didn't want them to. In 1753 the French began construction of a series of forts in the Ohio Country. Nobody stopped them because Ohio Country was de facto French territory. The French had been trading there for years and had boots on the ground. The fact that Ohio Country was French territory didn't stop other powers from claiming it. Indians claimed the territory on the grounds they were there first. British colonies claimed it on the grounds they outnumbered the French and the Indians. In December 1753, the royal governor of Virginia dispatched Lieutenant Colonel George Washington on a 1000-mile round-trip to order the French to abandon their posts and forts. It was an adventure. They literally went to Murdering Town. > The Day following, just after we had passed a Place called the Murdering Town, where we intended to quit the Path, and steer across the Country for Shannopins Town, we fell in with a Party of French Indians, who had lain in wait for us; one of them fired at Mr. Gist or me, not 15 Steps, but fortunately missed. We took this Fellow into Custody, and kept him till about 9 o'Clock at Night, and then let him go, and walked all the remaining Part of the Night without making any Stop, that we might get the Start so far, as to be out of the Reach of their Pursuit the next Day, as we were well assured they would follow our Track as soon as it was light: The next Day we continued travelling till quite dark, and go to the River about 2 Miles above Shannapins; we expected to have found the River frozen, but it was not, only about 50 Yards from each Shore; the Ice I suppose had broke up above, for it was driving in vast Quantities. > > There was no Way for getting over but on a Raft, which we set about, with but one poor Hatcher, and got finished just after Sun setting, after a whole Day's Work; we got it launched, and on board of it, and set off; but before we were half W
e3c0c0cd-d9a3-4a1a-b42f-82a3f95efa11
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] A Failed Just-So Story Today's post, A Failed Just-So Story was originally published on 05 January 2008. A summary (taken from the LW wiki):   > Part of the reason professional evolutionary biologists dislike just-so stories is that many of them are simply wrong. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Rational vs Scientific Ev-Psych, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
e82c357c-7354-4ca0-8eb4-cca18b70d8c3
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
What Are The Biggest Threats To Humanity? (A Happier World video) A Happier World just published a video on existential risks! The aim was to make an introductory video to the threats of nuclear war, engineered pandemics and AI. Would love to hear what you think. Feel free to use it for your EA events! Thanks to Yusuf and Chelsea for co-hosting and [Sarah Emminghaus](https://forum.effectivealtruism.org/users/sarah-emminghaus-1) for helping write the script! Transcript ========== *Sources are marked with an asterisk. Text might differ slightly in wording from the final video.* Intro ----- A bear almost caused a nuclear war.  Will we see an earth destroying asteroid this century? A lab leak may have resulted in more than 700 000 deaths! In this video we’ll cover these things and more! Are any of these threats to humanity? Asteroids, climate change and more ---------------------------------- In 1998 two movies were released both exploring a similar topic: Asteroid impacts. Armageddon and Deep Impact both tell a similar story about how humanity tries to prevent an asteroid impact from destroying all life on earth. In Deep Impact they accidentally split the 11 kilometre wide comet in two instead of destroying it. The smaller half causes deadly tsunamis and the bigger half successfully gets blown up. Sorry for the spoilers. In Armageddon, Bruce Willis plays an oil driller offering himself up to detonate the Texas-sized asteroid so it doesn’t hit our planet. The asteroid splits in two halves that both nicely avoid earth. Directed by Michael Bay, of course. *“Tonight, the largest rock struck, a fireball rose hundreds of kilometres above the surface creating a hole in the atmosphere larger than earth. Shockwaves will ripple around the planet for days and the surface may be changed for weeks”.* After a comet hit Jupiter in 1994, people started taking risks from near earth objects seriously. In the same decade US Congress passed an act that tasked NASA with finding 90 percent of all near-Earth asteroids and comets larger than a kilometre. This effort thankfully has so far been successful.[\*](https://en.wikipedia.org/wiki/Spaceguard) So should we be worried about an asteroid killing all life on earth? Not really. Because of NASA’s asteroid-finding project, we can estimate that the odds of an asteroid between 1 and 10 kilometres hitting our planet is around 1 in 120 000 over the next century. These smaller asteroids could be extinction level threats but they likely aren’t. On the other hand: asteroids bigger than 10 kilometres, like the one that wiped out the dinosaurs, the chance is about a 1 in 150 million over the next century. Asteroid impacts are just one example of natural extinction risks. But what about others? What about the eruption of supervolcanoes like the one under yellowstone park, or the explosion of stars in distant galaxies bursting gamma rays causing the earth’s ozone layer to deplete, exposing us to deadly UV radiation? Giving an estimation to these is more difficult, as we don’t have as much data to work with. And after researching various natural risks, the Oxford philosopher Toby Ord thinks that all natural risks combined have very roughly a 1 in 10 000 chance of causing an existential catastrophe within the next 100 years. In fact this was just one aspect of a book he wrote on existential risks, called The Precipice. He defines an existential catastrophe as something that wipes out all of humanity, or destroys our longterm potential. 
For example, by making it impossible to rebuild civilization or by guaranteeing a future dystopia. So we talked about natural existential risks. But what about unnatural risks? But what about threats caused by humanity? The first thing that comes to people’s mind is climate change. And it’s undeniable that climate change will cause a lot of damage to life on earth. And it already has. Unfortunately we can expect it will continue to do so, especially impacting the world’s poorest people. But we’re cautiously optimistic that climate change is an issue that will get solved as long as we keep fighting against it. Kurzgesagt made a great video on this, and this channel actually made a response video to that on how you can help fight climate change. But today we want to talk about threats that usually don’t come to most people’s minds. Threats that are at least as scary as climate change. These threats can pose a huge danger to humanity, but are often far more neglected than climate change is. So let’s also discuss what we can actually do to prevent these threats from happening. Nuclear War ----------- Did you know a black bear almost caused a nuclear war? Yep, that’s right. It was 1962, it was the height of the Cuban missile crisis. The Soviets had begun to construct nuclear launch sites on Cuba, allowing the USSR to target areas deep inside the American mainland. This made the United States upset, and US aircrafts with nuclear weapons were ready to take off the moment the USSR would make a move.  One night at a US Air Defence airport, a figure was climbing a fence to enter the terrain which set off the intruder alarms.  A soldier started shooting at the intruder and notified other locations in the area, thinking the Soviets were making a move against the Air Force assets. But one air base in Wisconsin accidentally set off a much more serious alarm because of a bad wire. This caused the pilots to think a nuclear war was happening. They hurried to their aircraft so they could fly away and use their nukes. Luckily though, once people discovered the intruder was just a black bear, the Wisconsin base commander got one of his officers to drive a truck onto the flight line, right in the takeoff path of the jets. He was able to abort the mission as the planes started their engines.[\*](https://www.military.com/off-duty/how-one-black-bear-almost-set-off-world-war-iii-during-cold-war.html)[\*](https://en.wikipedia.org/wiki/List_of_nuclear_close_calls#25_October_1962)[\*](https://web.archive.org/web/20220521144315/https://futureoflife.org/background/nuclear-close-calls-a-timeline/)[\*](https://web.archive.org/web/20220422140337/http://www.nuclearfiles.org/menu/key-issues/nuclear-weapons/issues/accidents/20-mishaps-maybe-caused-nuclear-war.htm) We’re lucky nothing happened that night - who knows how many people could have died.  But this wasn’t the first and only time we have been close to a nuclear war. There have been many close calls, you can read all about them on the Future of Life website.[\*](https://futureoflife.org/background/nuclear-close-calls-a-timeline/) Since Russia’s invasion into Ukraine it seems like Nuclear War is closer than it has in the past couple of decades.  So how bad would it actually be? A full blown nuclear war could kill hundreds of million of people - although it is at this moment in time unlikely to wipe out humanity. 
For nuclear weapons to create enough fallout to cause a deadly level of radiation all over the world, ten times as many weapons than we have today would be necessary.  A nuclear war could still count as an existential risk - since its consequences might lead to some really scary futures.  Following a nuclear war, firestorms could lead to lots of dust being kicked up into the atmosphere which might block sunlight from reaching the Earth’s surface. This could lead to a temporary but massive drop in temperatures all over the globe. That’s known as a nuclear winter and would have catastrophic consequences: A lot of vegetation and animal life would die. Agriculture, food supply chains, medical and transportation infrastructures would be destroyed. Millions of people could die.[\*](https://climate.envsci.rutgers.edu/pdf/RobockNW2006JD008235.pdf)  In recent years, however, there has been more scepticism about nuclear winter claims, with scientists arguing that a drop in temperature likely wouldn’t happen.[\*](https://www.youtube.com/watch?v=KzpIsjgapAk) But since we’ve never experienced a nuclear war, we’re uncertain what the effects really would be, which means it could easily be worse than what we’re thinking now. Toby Ord estimates that the risk of a nuclear war causing an existential catastrophe in the next century is roughly 1 in a 1000. This is the same estimation he gives for climate change. The book, The Precipice, is from 2020. So the Russian invasion in Ukraine could have easily increased the odds of a nuclear war happening. And Toby Ord is just one person. To have a better idea about the risks of nuclear war, we can look at forecasting platforms like Metaculus where hundreds of people try to forecast specific events. Research has shown that on average, crowd forecasts outperform the predictions of any one expert.[\*](https://www.amazon.com/Superforecasting-Science-Prediction-Philip-Tetlock/dp/0804136718) Metaculus forecasters predict there’s a 7% chance that a not test related nuclear bomb will be detonated before 2024[\*](https://www.metaculus.com/questions/2797/no-non-test-nuclear-detonations-by-2024-01-01/), 33% chance that a nuclear bomb will be used in war before 2050[\*](https://www.metaculus.com/questions/4779/at-least-1-nuclear-detonation-in-war-by-2050/) and a 10% of a full blown nuclear war happening before 2070.[\*](https://www.metaculus.com/questions/3517/will-there-be-a-global-thermonuclear-war-by-2070/) So what’s currently being done to prevent this horrible scenario?  There are a number of organisations like the Nuclear Threat Initiative[\*](https://www.nti.org/) or the Global Catastrophic Risk Institute[\*](https://gcrinstitute.org/nuclear/) that work to reduce this kind of risk - for example by modelling scenarios to quantify the risk of nuclear war and advocating for better policies. Another way to reduce the existential threat of nuclear war, which could also help in the case of pandemics and asteroids, is to build lots of shelters around the world. Figuring out how to rapidly scale our food production in the case of a nuclear winter also seems like a promising idea. Pandemics --------- In 1977, a lab leak likely started a pandemic causing approximately 700 000 deaths! The virus behind it was the H1N1 virus, known as the virus that caused the 1918 influenza pandemic, where it killed more than 17 million people in 2 years. It had gone extinct in humans in 1957, but it mysteriously reappeared. First in China and then the Soviet Union. 
It became known as the Russian flu. What's odd is that the strain of the virus looked eerily similar to the strain from 20 years before. Normally a virus would mutate a lot during that time. This is why experts argue that it likely came from either a lab leak or an ill-advised vaccine experiment or trial.[\*](https://nationalpost.com/news/a-brief-terrifying-history-of-viruses-escaping-from-labs-70s-chinese-pandemic-was-a-lab-mistake)[\*](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4542197/) Lab leaks have happened in the past; a whole list can be found on Wikipedia. Many of these have happened in labs with the highest biosecurity levels. The confirmed leaks had less severe consequences than the Russian flu, but many of the pathogens involved were more dangerous, so if they had spread, the outcome could have been much worse.[\*](https://en.wikipedia.org/wiki/List_of_laboratory_biosecurity_incidents)[\*](https://f1000research.com/articles/10-752) Biotechnology is improving rapidly. The genetic modification of existing organisms or the creation of new ones is becoming easier and cheaper year by year. Research on viruses is being done all the time. If not well regulated, researchers might experiment with making viruses more lethal, more spreadable or more resistant to vaccines. Someone might accidentally create or spread a potentially disastrous disease. Leaks aren't a thing of the past. Even last year, a lab worker in Taiwan was infected with the Covid Delta variant after being bitten by an infected mouse.[\*](https://fortune.com/2021/12/10/taiwan-investigates-covid-lab-leak-scientist-tests-positive-bite-infected-mouse/)[\*](https://thebulletin.org/2022/01/a-lab-assistant-involved-in-covid-19-research-in-taiwan-exposed-110-people-after-becoming-infected-at-work/) If we're able to easily re-engineer human pathogens, malicious actors, too, can more easily create and spread horrific diseases. This is why the aforementioned Oxford philosopher Toby Ord estimates there's roughly a 1 in 30 chance of an engineered pandemic causing an existential catastrophe within the next hundred years. So what can be done? One promising path to significantly reduce the risks of pandemics is to set up an early detection system. Currently, by the time we discover new pathogens, they've already spread far and wide and are thus difficult to contain. One way to rapidly detect new viruses would be to frequently test large chunks of the population. But this seems rather invasive, and comes with some privacy concerns. A more favourable approach is to check our wastewater for new or scary pathogens, especially in urban areas or near airports. It's less invasive and can't be traced back to specific people. This is not a new concept; a few cities have already done this to help track the Covid-19 pandemic.[\*](https://www.latimes.com/california/story/2022-03-21/sewage-surveillance-covid-infectious-diseases-future) MinuteEarth made a great short video on this idea.[\*](https://www.youtube.com/watch?v=NpXL_khUXQQ) Other things that could help reduce the risk of pandemics are developing better personal protective equipment (like cheaper, more comfortable and easier-to-produce suits), strengthening the Biological Weapons Convention (which currently has just 4 people working for it), and speeding up the approval and production of vaccines. After the genetic sequence of SARS-CoV-2 was published in January 2020, it took Moderna just two days to design their vaccine. But it took another 11 months for the vaccine to be distributed.
This is impressive, but if we can make that timeline even shorter we could save even more lives.[\*](https://ftxfuturefund.org/projects/)[\*](https://www.businessinsider.com/how-moderna-developed-coronavirus-vaccine-record-time-2020-11?r=US&IR=T&fbclid=IwAR05dEYisTg3pjgZkq90trgEkJRG4amVQpezhkddWk6RYWQOoqCTzAQkuC4) Future me here! Just wanted to add that there's now a new organisation called the Nucleic Acid Observatory that is trying out checking wastewater, among other things.[\*](https://forum.effectivealtruism.org/posts/gLPEAFicFBW8BKCnr/announcing-the-nucleic-acid-observatory-project-for-early) Cities like Houston are now also monitoring the spread of monkeypox through their wastewater.[\*](https://www.houstonchronicle.com/news/houston-texas/health/article/Houston-to-begin-monitoring-monkeypox-case-levels-17351007.php) Another cool idea to prevent pandemics is to use a specific type of lighting, called far UV-C lighting, inside buildings. In contrast to other kinds of UV lighting, far UV-C seems very unlikely to hurt humans, but it does efficiently kill airborne viruses. Currently far UV-C light bulbs are really expensive, but with some more innovation and funding the costs could go down dramatically and become competitive with regular light bulbs. Other types of UV lighting could still be used too, as long as they aren't shone directly on humans - for example by shining them at the ceiling and letting air pass through the light, killing airborne pathogens. This is already widely available and cheaper than far UV-C lighting, so it could be tested by anyone today! More research on the effectiveness and safety of both far UV-C light and other UV light is needed, however.[\*](https://www.nature.com/articles/s41598-020-67211-2)[\*](https://tim.blog/2022/08/04/will-macaskill-what-we-owe-the-future-transcript/amp/#:~:text=One%20that%20I%E2%80%99m,all%20respiratory%20disease.)[\*](https://forum.effectivealtruism.org/posts/pgSKdPwDbG5EppRFC/ubiquitous-far-ultraviolet-light-could-control-the-spread-of)[\*](https://www.cdc.gov/coronavirus/2019-ncov/community/ventilation/uvgi.html) If you want to learn more about the future risks of pandemics, we made a whole video on the topic. Check it out! AI -- Have you ever gotten so frustrated by a video game that you wished you could just write a program to win the game for you? Well, this is exactly what programmer Tom Murphy did. He trained an AI to score as many points as possible in all kinds of classic NES games. But when the AI tried to play Tetris, it simply paused the game right before it was about to lose. In Tom's words, "The only winning move is not to play".[\*](https://techcrunch.com/2013/04/14/nes-robot/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAKFwqx1N45eXNK_nGsPW5tB7fpIo5povk2RO2431pihFncx4CH_7bBQCdXzXQPSeXj-zZA61P3KXvdKaBBfTTfgQv8ngOtEzY7Vugf6Ug_oRaxv1fH3uuRBcDIF474rVNenntj0ZVFbvyGV8IhUQ5d6n4kk5LDwkR_f6hetHjGqm)[\*](https://www.youtube.com/watch?v=xOCurBYI_gY) Similarly, when programmers built an AI for a boat-racing game, the AI found that just by spinning in circles over some bonus items forever, without actually completing the race, it could keep increasing its point total.
Maximising point totals is actually what the programmers programmed the AI to do, but it's not what they really wanted.[\*](https://www.youtube.com/watch?v=tlOIHko8ySg)[\*](https://github.com/openai/gym/issues/920)[\*](https://www.emerald.com/insight/content/doi/10.1108/FS-04-2018-0034/full/html) The problem in these scenarios is not that the AI is incompetent, but that the designers of the system did not properly specify the AI's goals. For example, with the boat-racing game, the developers wanted a system that was good at racing, but what they got instead was a system that was just good at getting a lot of points. This is part of a much broader issue within artificial intelligence, the alignment problem: how do we ensure that AI systems actually end up doing what we want them to do? So far the consequences we've talked about are mostly comedic errors. But this issue is not just limited to toys and games. It also happens when the stakes are much, much higher. Take social media for example: companies like Facebook and YouTube use AI algorithms to recommend content to you based on what you want to watch. The reason is that it lets them maximise ad revenue. The more watch time, the more ad revenue. At first glance this doesn't seem like it would necessarily be an issue: you get to see more of what interests you, things you want to engage with, creating a more pleasant social media experience. But the goal of the AI is simply to increase the number of clicks, the watch time and retention. The algorithms don't know or care what you're watching or why. They don't know or care if you're being radicalised by political extremism or if you're watching misinformation. They simply don't capture the complexity of what humans truly want from social media, and the most engaging content isn't necessarily the best content for you or for society at large. And it's making us miserable. Research has shown that since around 2011, when these types of recommender algorithms started being heavily used, self-harm and suicide rates among young girls have gone up. People have become more depressed. Romantic interaction has become more difficult. And since fake news spreads much faster than real news - it's often more clickbaity - polarisation has skyrocketed. Populists love campaigning on social media, causing extremist parties and figures to gain more votes all around the world.[\*](https://medium.com/@thiqnthin_88763/references-from-netflixs-the-social-dilemma-91c84af1e34e) This was probably never the intention of the developers of the algorithms. But now that clicks on recommended ads create a huge profit incentive for the companies, the situation is difficult to reverse. And despite all these horrible societal effects, AI systems could become a lot more advanced in the future, and alignment problems could be amplified into new domains. Imagine these types of issues happening on a much bigger scale, with AI systems that have control over supply chains, television broadcasts, medical supplies, financial systems, military systems or the entire internet. Experts think AI will evolve even further than it already has. Currently, AI is already better than us at certain tasks like playing chess or Go. But the software that beats us at chess isn't better than us at treating cancer. That could actually change, though. This year alone there have already been some very significant developments.
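To make the reward-misspecification idea concrete before we get to those developments, here is a minimal toy sketch in Python. It is not the actual NES or boat-racing setup - the policies, point values and the "finish the race" goal below are invented purely for illustration - but it shows how an agent that greedily maximises a proxy score can end up ignoring the outcome its designers actually cared about.

```python
# Toy illustration of a misspecified reward (hypothetical numbers, not the real games).
# The designers want the boat to finish the lap; the proxy reward only counts points.

def proxy_return(policy: str, steps: int = 100) -> int:
    """Points collected under the reward the designers actually wrote down."""
    if policy == "loop_over_bonuses":
        return 3 * steps          # keeps hitting respawning bonus targets forever
    if policy == "finish_the_race":
        return 50                 # a handful of points, then the episode ends
    raise ValueError(policy)

def true_utility(policy: str) -> int:
    """What the designers really wanted: finishing the race."""
    return 1 if policy == "finish_the_race" else 0

policies = ["loop_over_bonuses", "finish_the_race"]
chosen = max(policies, key=proxy_return)          # a score-maximising agent's choice

print("agent picks :", chosen)                    # -> loop_over_bonuses
print("proxy score :", proxy_return(chosen))      # looks great on the metric...
print("true utility:", true_utility(chosen))      # ...but the race never gets finished
```

The gap between proxy_return and true_utility is the whole problem: optimise the first hard enough and the second can quietly go to zero.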
In April, the organisation OpenAI presented their image-generating AI, DALL-E 2, which can produce images that look a whole lot like actual art by humans. Google's AI research lab DeepMind created an impressive piece of software called Gato, the most general AI system to date. According to DeepMind, it can "play Atari, caption images, chat, stack blocks with a real robot arm and do much more". The goal of developing such multi-domain systems is to eventually develop Artificial General Intelligence or AGI: systems that can do anything a human can do. There have been so many recent breakthroughs in AI that it's been difficult for the writers of this script to keep up. But according to many AI researchers, there's a good chance that in the next few decades AI could become as good as or better than humans at performing any task across different domains. It could solve complex problems in a fraction of a second, and it could do hundreds of years' worth of scientific and technological research in a matter of months. This would likely end up being the most impactful invention in all of human history. It could have amazing benefits. It might be able to help us solve climate change, poverty and other issues we desperately need to solve. But we should not expect it to be aligned with our goals by default. And if we do end up creating powerful unaligned AI systems, it could go really, really badly. With current, weaker AIs, the alignment issues that do exist can sometimes be fixed after the fact, or we can simply turn the systems off. But as systems become more sophisticated, we might not be able to stop them at all. This could simply be because we depend on them, just like we depend on the internet. Once an AI becomes sophisticated enough to have a model of the world and much more advanced capabilities than humans have, we're unlikely to be able to monitor and respond fast enough to stop the AI from doing irreparable damage. Perhaps we'd think we had successfully stopped an AI, while in reality it had already made thousands of copies of itself online, ready to be reactivated at a moment's notice. This is not because the machines would have consciousness or a will to live. It's simply because any sufficiently intelligent system will take actions like preventing itself from being shut off or seizing more power. Not because it wants those things as end goals, but because those things will always help the AI succeed at its assigned task, no matter what that task may be. In the words of famous AI researcher Stuart Russell: "You can't fetch the coffee if you're dead". Another important point to clarify is that artificial intelligence doesn't need to look like a physical robot, like the one in The Terminator, to send us down a path of destruction. So a highly advanced AI system that's given the task of reversing climate change, for instance, could end up backfiring by performing experiments that ruin the atmosphere in other ways. A highly advanced AI given the goal of eliminating cancer might literally satisfy that goal by killing all humans. An AI tasked with growing a company could seize all of Earth's resources to try to maximise profit as much as possible. If you think corporations are a problem now, imagine them being run by superintelligent machines devoid of any human empathy.[\*](https://en.wikipedia.org/wiki/Misaligned_goals_in_artificial_intelligence) Of course, we could try to anticipate every failure mode and then build safeguards for each.
But again, it's hard to make rules that something much smarter than you can't work around. If we tell it not to kill humans, it could build a new AI that isn't bound by that rule to carry out the task for it. If we tell it not to build a new AI, it could bribe or blackmail people into doing its dirty work. And so on. The fact is that there are many problems in alignment research - mathematical, computer-science and philosophical problems - that just haven't been solved. We haven't figured out how to fully understand an AI system's reasoning process. We haven't figured out how to program an AI system so that it wouldn't prevent itself from being shut off. Various famous people, including Stephen Hawking, Bill Gates and even Alan Turing, one of the founders of modern computing, have expressed worries about misaligned AI. Toby Ord puts the estimate of an unaligned AI causing an existential catastrophe at around one in ten for the next century - that's a lot. But many AI experts put the risk much, much higher; some put it closer to 50% or more. And when experts disagree about risks, it seems prudent to be cautious. While some think AI is unlikely to be an existential threat, others think it's almost inevitable. Maybe if we're lucky we'll spend time preparing for a disaster that ends up being prevented, like the Y2K bug. Or maybe we'll find out that there was some previously unknown reason not to worry - but assuming everything will just be fine on the basis of just a small portion of expert opinion is foolish. So when might this threat actually emerge? Metaculus forecasters currently estimate that general AI will arrive around 2040[\*](https://www.metaculus.com/questions/5121/date-of-general-ai/), and a weakly general one might already be here by 2027[\*](https://www.metaculus.com/questions/3479/date-weakly-general-ai-system-is-devised/). These dates have been shifting a lot lately because of recent developments like the ones by OpenAI and Google's DeepMind. Over time they've been getting eerily closer and closer to today. So we urgently need to figure out a way to align the goals of AIs with our own goals. Currently no proposed solution is viable, much less good enough to rely on. Luckily there are more and more people working on AI safety, for example at the Center for Human-Compatible AI at UC Berkeley and the Centre for the Study of Existential Risk at Cambridge University, among many others. Thankfully OpenAI and Google's DeepMind actually have their own AI safety teams. Unfortunately other AI research groups, including IBM and Meta aka Facebook, do not have large AI safety teams. But in any case, we need a lot more AI safety researchers - there are currently fewer than a thousand people working on this problem, a problem that could be one of humanity's greatest challenges.[\*](https://aiwatch.issarice.com/)[\*](https://forum.effectivealtruism.org/posts/ycCBeG5SfApC3mcPQ/even-more-early-career-eas-should-try-ai-safety-technical#Appendix_by_Aaron_Scher__How_To_Actually_Try_AISTR_:~:text=This%20order%2Dof,work%20in%20AISTR).) This might all sound like crazy talk to you. And to be honest, I don't blame anyone for having a healthy dose of scepticism. A short video like this could never do the complexity of this subject justice, especially since there are so many different opinions on it from a lot of different experts. So we would suggest you read more into the topic. The book "The Alignment Problem" by Brian Christian is a great starting point, or "Human Compatible" by Stuart Russell.
I would also recommend the YouTube channel Rob Miles if you're interested in more rigorous versions of the arguments we presented. More resources will be in the comments and in the description.[\*](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like) And of course in the future this channel and also my personal channel, Leveller, will likely make more videos on this topic. Great power war --------------- At the time of writing this video, the prediction platform Metaculus puts the likelihood of a third world war before 2050 at 25%[\*](https://www.metaculus.com/questions/2534/will-there-be-a-world-war-three-before-2050/) - that number has increased dramatically since the invasion of Ukraine. A great power conflict, meaning a violent conflict between major powers like the US, China or Russia, would be devastating and could, downstream, increase the probability of other existential risks. The researcher Stephen Clare describes a great power conflict as a risk factor for existential risks - because it "plausibly increases the chance that a host of other bad outcomes come to pass".[\*](https://founderspledge.com/stories/great-power-conflict) This could happen, for example, because worldwide cooperation is significantly damaged, because nuclear weapons could be used, because new bioweapons could be developed, or because an AI arms race might lead to a heightened risk of misaligned AI takeover.[\*](https://80000hours.org/problem-profiles/#reducing-great-power-conflict) So it's in the interest of the whole world to decrease the odds of a great power war. Conclusion/outro ---------------- When combining the threats from climate change, nuclear war, pandemics, AI, great power war and more, Toby Ord estimates a 1 in 6 chance of an existential catastrophe happening in the next century. That sounds scary. Even if you think that's way off, even if you think it's just 1 in 100, that's still way too high. Would you get on a plane with a 1 in 100 chance of crashing? I sure wouldn't. For all the risks we've discussed in this episode, there are a few things you can do to help. You can become politically active and advocate for sensible long-term policies. You can work for an organisation trying to reduce existential risks, like some of the ones we mentioned during this video. For open positions check out the 80,000 Hours job board! They list jobs at various organisations where you can have a big impact. And for even more ideas, check out the description of this video! I just want to give a big thanks to Yusuf and Chelsea for co-hosting this video. Check out Yusuf's YouTube channel! I really liked his video on Doing Good Better, another book by Will MacAskill that shaped my thinking on charitable giving. Also check out Chelsea Parks' TikTok or Instagram account, hooplahoma, where she showcases some incredible circus moves! In the next video we'll explore what would happen if 99% of the population vanished. So be sure to subscribe and ring that notification bell to stay in the loop! We've tried our best to explain the topics in this video as accurately as possible. But since we're human, there's a good chance we've made mistakes. If you noticed a mistake or disagree with something, let us know in the comments down below. Thanks for watching!
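As a footnote for anyone curious about the arithmetic behind headline figures like these: the snippet below naively combines the per-risk estimates quoted earlier in this video (nuclear war and climate change at 1 in 1,000 each, engineered pandemics at 1 in 30, unaligned AI at 1 in 10) under an independence assumption. This is not how Toby Ord derives his 1 in 6 figure - his estimate covers additional risks and does not assume independence - it is just a rough back-of-the-envelope check.

```python
# Rough back-of-the-envelope combination of the century-scale risk estimates
# quoted in this video (Toby Ord's actual 1-in-6 figure is derived differently
# and includes risks not listed here).
risks = {
    "nuclear war":         1 / 1000,
    "climate change":      1 / 1000,
    "engineered pandemic": 1 / 30,
    "unaligned AI":        1 / 10,
}

# Assuming (unrealistically) that the risks are independent:
p_none = 1.0
for p in risks.values():
    p_none *= 1 - p

p_any = 1 - p_none
print(f"chance of at least one catastrophe this century: {p_any:.1%}")  # ~13%
```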
1e7f59b8-2916-447b-bdfc-18ebbe0e8503
trentmkelly/LessWrong-43k
LessWrong
Encultured AI, Part 1 Appendix: Relevant Research Examples Also available on the EA Forum. Appendix to: Encultured AI, Part 1: Enabling New Benchmarks Followed by: Encultured AI, Part 2: Providing a Service Appendix 1: “Trending” AI x-safety research areas We mentioned a few areas of “trending” AI x-safety research above; below are some more concrete examples of what we mean: * Trustworthiness & truthfulness: * Owain Evans, Owen Cotton-Barratt and others have authored “Truthful AI: Developing and governing AI that does not lie” (arxiv, 2021; twitter thread). * Andreas Stuhlmüller, Jungwon Byun and others at Ought.org are building an AI-powered research assistant called Elicit (website); here is the product: https://elicit.org/search.   * Task-specific (narrow) preference learning: * Paul Christiano et al (arxiv, 2017) developed a data-efficient preference-learning technique for training RL-based systems, which is now very widely cited (scholar). * Jan Leike, now at OpenAI, leads a team working on ‘scalable alignment’  using preference-learning techniques (arxiv, 2018) (blog).   * Interpretability: * Chris Olah (scholar) leads an interpretability research group at Anthropic.  Anthropic (website) is culturally very attuned to large-scale risks from AI, including existential risks. * Buck Shlegeris and others at Redwood Research (website) have built an interpretability tool for analyzing transformer networks trained on natural language (demo). * Prof. Cynthia Rudin at Duke (homepage) approaches interpretability by trying to replace black-box models with more interpretable ones (arxiv, 2018), and we know from conversations with her that she is open to applications of her work to existential safety.   * Robustness & risk management: * Prof. Jaime Fisac at Princeton (homepage) researches AI safety for robotics, high-dimensional control systems and multi-agent systems (scholar), including provable robustness guarantees.  He was previously a PhD student at the UC Berkeley Center for Hum
4784bfee-a024-4372-9e31-1cf74445ca6f
trentmkelly/LessWrong-43k
LessWrong
The Coordination Frontier: Sequence Intro Sometimes, groups of humans disagree about what to do.  We also sometimes disagree about how to decide what to do.  Sometimes we even disagree about how to decide how to decide. Among the philosophically unsophisticated, there is a sad, frustrating way this can play out: People resolve "how to decide" with yelling, or bloodshed, or, (if you’re lucky), charismatic leaders assembling coalitions. This can leave lots of value on the table, or actively destroy value.  Among the extremely philosophically sophisticated, there are different sad, frustrating ways this can play out: People have very well thought out principles informing their sense of "how to coordinate well." But, their principles are not the same, and they don’t have good meta-principles on when/how to compromise. They spend hours (or years) arguing about how to decide. Or they burn a lot of energy in conflict. Or they end up walking away from what could have been a good deal, if only people were a bit better at communicating. I’ve gone through multiple iterations on this sequence intro, some optimistic, some pessimistic.  Optimistic takes include: “I think rationalists are in a rare position to actually figure out good coordination meta-principles, because we are smart, and care, and are in positions where good coordination actually matters. This is exciting, because coordination is basically the most important thing [citation needed]. Anyone with a shot at pushing humanity’s coordination theory and capacity forward should do that.” Pessimistic takes include: “Geez louise, rationalists are all philosophical contrarians with weird, extreme, self-architected psychology who are a pain to work with”, as well as “Actually, the most important facets of coordination to improve are maybe more like ‘slightly better markets’ than like ‘figuring out how to help oddly specific rationalists get along’.” I started writing this post several years ago because I was annoyed at, like, 6 particular people, many of th
51445322-6417-48ed-b2ec-d3221487d532
StampyAI/alignment-research-dataset/special_docs
Other
Building safe artificial intelligence: specification, robustness, and assurance \*By Pedro A. Ortega, Vishal Maini, and the DeepMind safety team\* Building a rocket is hard. Each component requires careful thought and rigorous testing, with safety and reliability at the core of the designs. Rocket scientists and engineers come together to design everything from the navigation course to control systems, engines and landing gear. Once all the pieces are assembled and the systems are tested, we can put astronauts on board with confidence that things will go well. If artificial intelligence (AI) is a [rocket](https://www.ted.com/talks/max\_tegmark\_how\_to\_get\_empowered\_not\_overpowered\_by\_ai#t-7166), then we will all have tickets on board some day. And, as in rockets, safety is a crucial part of building AI systems. Guaranteeing safety requires carefully designing a system from the ground up to ensure the various components work together as intended, while developing all the instruments necessary to oversee the successful operation of the system after deployment. At a high level, safety research at DeepMind focuses on designing systems that reliably function as intended while discovering and mitigating possible near-term and long-term risks. \*\*Technical AI safety\*\* is a relatively nascent but rapidly evolving field, with its contents ranging from high-level and theoretical to empirical and concrete. The goal of this blog is to contribute to the development of the field and encourage substantive engagement with the technical ideas discussed, and in doing so, advance our collective understanding of AI safety. In this inaugural post, we discuss three areas of technical AI safety: \*\*specification\*\*, \*\*robustness\*\*, and \*\*assurance\*\*. Future posts will broadly fit within the framework outlined here. While our views will inevitably evolve over time, we feel these three areas cover a sufficiently wide spectrum to provide a useful categorisation for ongoing and future research. ![]()Three AI safety problem areas. Each box highlights some representative challenges and approaches. The three areas are not disjoint but rather aspects that interact with each other. In particular, a given specific safety problem might involve solving more than one aspect.Specification: define the purpose of the system =============================================== ![]()You may be familiar with the story of [King Midas](https://www.youtube.com/watch?v=nn8YGPZdCvA) and the golden touch. In one rendition, the Greek god Dionysus promised Midas any reward he wished for, as a sign of gratitude for the king having gone out of his way to show hospitality and graciousness to a friend of Dionysus. In response, \*\*Midas asked that anything he touched be turned into gold\*\*. He was overjoyed with this new power: an oak twig, a stone, and roses in the garden all turned to gold at his touch. But he soon discovered the folly of his wish: even food and drink turned to gold in his hands. In some versions of the story, even his daughter fell victim to the blessing that turned out to be a curse. This story illustrates the problem of specification: how do we state what we want? The challenge of specification is to ensure that an AI system is incentivised to act in accordance with the designer’s true wishes, rather than optimising for a poorly-specified goal or the wrong goal altogether. 
Formally, we distinguish between three types of specifications: \* \*\*ideal specification\*\* (the “\*\*wishes\*\*”), corresponding to the hypothetical (but hard to articulate) description of an ideal AI system that is fully aligned to the desires of the human operator; \* \*\*design specification\*\* (the “\*\*blueprint\*\*”), corresponding to the specification that we \*actually use\* to build the AI system, e.g. the reward function that a reinforcement learning system maximises; \* and \*\*revealed specification\*\* (the “\*\*behaviour\*\*”), which is the specification that best describes what \*actually happens\*, e.g. the reward function we can reverse-engineer from observing the system’s behaviour using, say, inverse reinforcement learning. This is typically different from the one provided by the human operator because AI systems are not perfect optimisers or because of other unforeseen consequences of the design specification. A \*\*specification problem\*\* arises when there is a mismatch between the \*\*ideal specification\*\* and the \*\*revealed specification\*\*, that is, when the AI system doesn’t do what we’d like it to do. Research into the \*\*specification problem\*\* of technical AI safety asks the question: how do we design more principled and general objective functions, and help agents figure out when goals are misspecified? Problems that create a mismatch between the ideal and design specifications are in the \*\*design\*\* subcategory above, while problems that create a mismatch between the design and revealed specifications are in the \*\*emergent\*\* subcategory. For instance, in our [AI Safety Gridworlds](https://arxiv.org/abs/1711.09883)\\* paper, we gave agents a reward function to optimise, but then evaluated their actual behaviour on a “safety performance function” that was hidden from the agents. This setup models the distinction above: the safety performance function is the ideal specification, which was imperfectly articulated as a reward function (design specification), and then implemented by the agents producing a specification which is implicitly revealed through their resulting policy. \*\*\*\\*N.B.\*\*\*\*: in our\* [\*AI Safety Gridworlds\*](https://arxiv.org/abs/1711.09883) \*paper, we provided a different definition of specification and robustness problems from the one presented in this post.\* ![]()From [Faulty Reward Functions in the Wild](https://blog.openai.com/faulty-reward-functions/) by OpenAI: a reinforcement learning agent discovers an unintended strategy for achieving a higher score.As another example, consider the boat-racing game CoastRunners analysed by our colleagues at OpenAI (see Figure above from “[Faulty Reward Functions in the Wild](https://blog.openai.com/faulty-reward-functions/)”). For most of us, the game’s goal is to finish a lap quickly and ahead of other players — this is our ideal specification. However, translating this goal into a precise reward function is difficult, so instead, CoastRunners rewards players (design specification) for hitting targets laid out along the route. Training an agent to play the game via reinforcement learning leads to a surprising behaviour: the agent drives the boat in circles to capture re-populating targets while repeatedly crashing and catching fire rather than finishing the race. From this behaviour we infer (revealed specification) that something is wrong with the game’s balance between the short-circuit’s rewards and the full lap rewards. 
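As a rough illustration of the ideal/design/revealed distinction (in the spirit of the gridworld setup mentioned above, but with invented behaviours and numbers rather than anything from the paper), the snippet below gives a trivially greedy agent a design-specification reward and then scores the behaviour it produces against a hidden performance function standing in for the ideal specification.

```python
# Hypothetical illustration of the three specification types (all numbers invented).
# design_reward: the reward we actually hand the agent (design specification).
# ideal_score:   a hidden performance function for what we really want (ideal specification).
# The behaviour the greedy agent ends up producing plays the role of the revealed specification.

design_reward = {"circle_over_targets": 10.0, "finish_lap": 6.0, "idle": 0.0}
ideal_score   = {"circle_over_targets": -5.0, "finish_lap": 10.0, "idle": 0.0}

# A maximally simple "agent": pick whichever behaviour the design specification rates highest.
revealed_behaviour = max(design_reward, key=design_reward.get)

print("revealed behaviour:", revealed_behaviour)                  # circle_over_targets
print("design reward     :", design_reward[revealed_behaviour])   # 10.0 - looks fine
print("ideal score       :", ideal_score[revealed_behaviour])     # -5.0 - specification problem
```

The mismatch on the last line is exactly the ideal-versus-revealed gap described above.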
There are [many more examples](https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml) like this of AI systems finding loopholes in their objective specification. Robustness: design the system to withstand perturbations ======================================================== ![]()There is an inherent level of risk, unpredictability, and volatility in real-world settings where AI systems operate. AI systems must be robust to unforeseen events and adversarial attacks that can damage or manipulate such systems.Research on the \*\*robustness\*\* of AI systems focuses on ensuring that our agents stay within safe limits, regardless of the conditions encountered. This can be achieved by avoiding risks (\*\*prevention\*\*) or by self-stabilisation and graceful degradation (\*\*recovery\*\*). Safety problems resulting from \*\*distributional shift\*\*, \*\*adversarial inputs\*\*, and \*\*unsafe exploration\*\* can be classified as robustness problems. To illustrate the challenge of addressing \*\*distributional shift\*\*, consider a household cleaning robot that typically cleans a petless home. The robot is then deployed to clean a pet-friendly office, and encounters a pet during its cleaning operation. The robot, never having seen a pet before, proceeds to wash the pets with soap, leading to undesirable outcomes ([Amodei and Olah et al., 2016](https://arxiv.org/pdf/1606.06565v1.pdf)). This is an example of a robustness problem that can result when the data distribution encountered at test time shifts from the distribution encountered during training. ![]()\*From\* [\*AI Safety Gridworlds\*](https://deepmind.com/blog/specifying-ai-safety-problems/)\*. During training the agent learns to avoid the lava; but when we test it in a new situation where the location of the lava has changed, it fails to generalise and runs straight into the lava.\*\*\*Adversarial inputs\*\* are a specific case of distributional shift where inputs to an AI system are designed to trick the system through the use of specially designed inputs. ![]()\*A\*n adversarial input, overlaid on a typical image, can cause a classifier to miscategorise a sloth as a race car. The two images differ by at most 0.0078 in each pixel. The first one is classified as a three-toed sloth with >99% confidence. The second one is classified as a race car with >99% probability.\*\*Unsafe exploration\*\* can result from a system that seeks to maximise its performance and attain goals without having safety guarantees that will not be violated during exploration, as it learns and explores in its environment. An example would be the household cleaning robot putting a wet mop in an electrical outlet while learning optimal mopping strategies ([García and Fernández, 2015](http://www.jmlr.org/papers/volume16/garcia15a/garcia15a.pdf); [Amodei and Olah et al., 2016](https://arxiv.org/pdf/1606.06565.pdf)). Assurance: monitor and control system activity ============================================== ![]()Although careful safety engineering can rule out many safety risks, it is difficult to get everything right from the start. Once AI systems are deployed, we need tools to continuously monitor and adjust them. Our last category, \*\*assurance\*\*, addresses these problems from two angles: \*\*monitoring\*\* and \*\*enforcing\*\*. 
\*\*Monitoring\*\* comprises all the methods for inspecting systems in order to analyse and predict their behaviour, both via human inspection (of summary statistics) and automated inspection (to sweep through vast amounts of activity records). \*\*Enforcement,\*\* on the other hand, involves designing mechanisms for controlling and restricting the behaviour of systems. Problems such as \*\*interpretability\*\* and \*\*interruptibility\*\* fall under monitoring and enforcement respectively. AI systems are unlike us, both in their embodiments and in their way of processing data. This creates problems of \*\*interpretability\*\*; well-designed measurement tools and protocols allow the assessment of the quality of the decisions made by an AI system ([Doshi-Velez and Kim, 2017](https://arxiv.org/abs/1702.08608)). For instance, a medical AI system would ideally issue a diagnosis together with an explanation of how it reached the conclusion, so that doctors can inspect the reasoning process before approval ([De Fauw et al., 2018](https://www.nature.com/articles/s41591-018-0107-6)). Furthermore, to understand more complex AI systems we might even employ automated methods for constructing models of behaviour using \*\*Machine theory of mind\*\* ([Rabinowitz et al., 2018](https://arxiv.org/abs/1802.07740)). ![]()ToMNet discovers two subspecies of agents and predicts their behaviour (from “[Machine Theory of Mind](https://arxiv.org/abs/1802.07740)”)Finally, we want to be able to turn off an AI system whenever necessary. This is the problem of \*\*interruptibility\*\*. Designing a reliable off-switch is very challenging: for instance, because a reward-maximising AI system typically has strong incentives to prevent this from happening ([Hadfield-Menell et al., 2017](https://www.ijcai.org/proceedings/2017/0032.pdf)); and because such interruptions, especially when they are frequent, end up changing the original task, leading the AI system to draw the wrong conclusions from experience ([Orseau and Armstrong, 2016](http://www.auai.org/uai2016/proceedings/papers/68.pdf)). ![]()A problem with interruptions: human interventions (i.e. pressing the stop button) can change the task. In the figure, the interruption adds a transition (in red) to the Markov decision process that changes the original task (in black). See [Orseau and Armstrong, 2016](http://auai.org/uai2016/proceedings/papers/68.pdf).Looking ahead ============= We are building the foundations of a technology which will be used for many important applications in the future. It is worth bearing in mind that design decisions which are not safety-critical at the time of deployment can still have a large impact when the technology becomes widely used. Although convenient at the time, once these design choices have been irreversibly integrated into important systems the tradeoffs look different, and we may find they cause problems that are hard to fix without a complete redesign. Two examples from the development of programming include the null pointer — which Tony Hoare [refers to as his ‘billion-dollar mistake’](https://www.infoq.com/presentations/Null-References-The-Billion-Dollar-Mistake-Tony-Hoare)- and the gets() routine in C. If early programming languages had been designed with security in mind, progress might have been slower but computer security today would probably be in a much stronger position. With careful thought and planning now, we can avoid building in analogous problems and vulnerabilities. 
We hope the categorisation outlined in this post will serve as a useful framework for methodically planning in this way. Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe — because we built them that way! We look forward to continuing to make exciting progress in these areas, in close collaboration with the broader AI research community, and we encourage individuals across disciplines to consider entering or contributing to the field of AI safety research. \*If you are interested in working with us on the research areas outlined in this post, we are hiring! Please check our open roles at\* [\*https://deepmind.com/careers/\*](https://deepmind.com/careers/) \*and note your interest in AI safety when you apply. We would love to hear from talented researchers and non-researchers alike.\* --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Resources ========= For related reading, below is a collection of other articles, agendas, or taxonomies that have informed our thinking or present a helpful alternative view on problem framing for technical AI safety: \* [Annotated bibliography of recommended materials](http://humancompatible.ai/publications) (Center for Human-Compatible AI, 2018) \* [Safety and Control for Artificial General Intelligence](http://inst.eecs.berkeley.edu/~cs294-149/fa18/) (UC Berkeley, 2018) \* [AI Safety Resources](https://vkrakovna.wordpress.com/ai-safety-resources/) (Victoria Krakovna, 2018) \* [AGI Safety Literature Review](https://arxiv.org/abs/1805.01109) (Everitt et al., 2018) \* [Preparing for Malicious Uses of AI](https://arxiv.org/abs/1802.07228) (2018) \* [Specification gaming examples in AI](https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml) (Victoria Krakovna, 2018) \* [Directions and desiderata for AI alignment](https://ai-alignment.com/directions-and-desiderata-for-ai-control-b60fca0da8f4) (Paul Christiano, 2017) \* [Funding for Alignment Research](https://docs.google.com/document/d/1NIg4OnQyhWGR01fMVTcxpz8jDd68JdDIyQb0ZZyB-go/edit#heading=h.flzp2soeor4i) (Paul Christiano, 2017) \* [Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda](https://intelligence.org/files/TechnicalAgenda.pdf) (Machine Intelligence Research Institute, 2017) \* [AI Safety Gridworlds](https://arxiv.org/abs/1711.09883) (Leike et al., 2017) \* [Interactions between the AI Control Problem and the Governance Problem](https://futureoflife.org/wp-content/uploads/2017/01/Nick\_Bostrom.pdf?x17807=) (Nick Bostrom, 2017) \* [Alignment for Advanced Machine Learning Systems](https://intelligence.org/files/AlignmentMachineLearning.pdf) (Machine Intelligence Research Institute, 2017) \* [AI safety: three human problems and one AI issue](https://agentfoundations.org/item?id=1388) (Stuart Armstrong, 2017) \* [Concrete Problems in AI Safety](https://arxiv.org/abs/1606.06565) (Dario Amodei et al, 2016) \* [The Value Learning Problem](https://intelligence.org/files/ValueLearningProblem.pdf) (Machine Intelligence Research Institute, 2016) \* [A survey of research questions for robust and beneficial 
AI](https://futureoflife.org/data/documents/research\_survey.pdf) (Future of Life Institute, 2015) \* [Research Priorities for Robust and Beneficial Artificial Intelligence](https://futureoflife.org/data/documents/research\_priorities.pdf) (Future of Life Institute, 2015)
bf9d42e2-de94-4737-924c-8210a5dfa33d
StampyAI/alignment-research-dataset/blogs
Blogs
Update on all the AI predictions *By Katja Grace, 5 June 2015* For the last little while, we’ve been looking into [a dataset of individual AI predictions](http://aiimpacts.org/miri-ai-predictions-dataset/), collected by MIRI a couple of years ago. We also previously gathered all the [surveys](http://aiimpacts.org/ai-timeline-surveys/) about AI predictions that we could find. Together, these are all the public predictions of AI that we know of. So we just wrote up [a quick summary](http://aiimpacts.org/predictions-of-human-level-ai-timelines/) of what we have so far. Here’s a picture of most of the predictions, from our summary: [![xxx](http://aiimpacts.org/wp-content/uploads/2014/12/AI-predictions-summary-3-1024x737.jpg)](http://aiimpacts.org/wp-content/uploads/2014/12/AI-predictions-summary-3.jpg)**Figure 1:** Predictions from the [MIRI dataset](http://aiimpacts.org/miri-ai-predictions-dataset/) (red = [maxIY](http://aiimpacts.org/miri-ai-predictions-dataset/) ≈ ‘AI more likely than not after …’, and green = [minPY](http://aiimpacts.org/miri-ai-predictions-dataset/) ≈ ‘AI less likely than not before …’) and [surveys](http://aiimpacts.org/ai-timeline-surveys/). This figure excludes one prediction of 3012 made in 2012, and the [Hanson survey](http://aiimpacts.org/hanson-ai-expert-survey/), which doesn’t ask directly about prediction dates. Recent [surveys](http://aiimpacts.org/ai-timeline-surveys/) seem to pretty reliably predict AI between 2040 and 2050, as you can see. The [earlier](http://aiimpacts.org/michie-survey/) [surveys](http://aiimpacts.org/bainbridge-survey/) [which](http://aiimpacts.org/klein-agi-survey/) [don’t](http://aiimpacts.org/ai50-survey/) fit this trend also had less uniform questions, whereas the last six surveys ask about the year in which there is a 50% chance that (something like) human-level AI will exist. The entire set of individual predictions has a median somewhere in the 2030s, depending on how you count. However for predictions made since 2000, the median is 2042 ([minPY](http://aiimpacts.org/miri-ai-predictions-dataset/)), in line with the surveys. [The surveys that ask](http://aiimpacts.org/ai-timeline-surveys/) also consistently get median dates for a 10% chance of AI in the 2020s. This consistency seems interesting, and these dates seem fairly soon. If we took these estimates seriously, and people really meant at least ‘AI that could replace most humans in their jobs’, the predictions of ordinary AI researchers seem pretty concerning. 2040 is not far off, and the 2020s seem too close for us to be prepared to deal with moderate chances of AI, at the current pace. We are not sure what to make of these predictions. Predictions about AI are frequently distrusted, though often alongside complaints that seem weak to us. For instance that [people are biased to predict AI twenty years in the future](https://intelligence.org/files/PredictingAI.pdf), or [just before their own deaths](http://aiimpacts.org/the-maes-garreau-law/); that [AI researchers have always been very optimistic and continually proven wrong](http://aiimpacts.org/michie-and-overoptimism/); [that experts and novices make the same predictions](https://intelligence.org/files/PredictingAI.pdf) (**Edit (6/28/2016):** now found to be [based on an error](http://aiimpacts.org/error-in-armstrong-and-sotala-2012/)); or that [failed predictions of the past look like current predictions](http://aiimpacts.org/similarity-between-historical-and-contemporary-ai-predictions/). 
There really do seem to be selection biases, from [people who are optimistic about AGI](http://aiimpacts.org/group-differences-in-ai-predictions/) [working in the field](http://aiimpacts.org/why-do-agi-researchers-expect-ai-so-soon/) [for instance](http://aiimpacts.org/bias-from-optimistic-predictors/), and from [shorter predictions being more published](http://aiimpacts.org/short-prediction-publication-biases/). However there are ways to avoid these. There seem to be a few good [reasons to distrust these predictions](http://aiimpacts.org/accuracy-of-ai-predictions/) however. First, it’s not clear that people can predict these kinds of events well in any field, at least without the help of tools. Relatedly, it’s not clear what tools and other resources people used in the creation of these predictions. Did they model the situation carefully, or just report their gut reactions? My guess is near the ‘gut reaction’ end of the spectrum, based on [looking for reasoning](http://aiimpacts.org/list-of-analyses-of-time-to-human-level-ai/) and finding only a little. Often gut reactions are reliable, but I don’t expect them to be so, on their own, in an area such as forecasting novel and revolutionary technologies. Thirdly, phrases like, ‘human-level AI arrives’ appear to stand for [different events](http://aiimpacts.org/human-level-ai/) for different people. Sometimes people are talking about almost perfect human replicas, sometimes software entities that can undercut a human at work without resembling them much at all, sometimes human-like thinking styles which are far from being able to replace us. Sometimes they are talking about human-level abilities at human cost, sometimes at any cost. Sometimes consciousness is required, sometimes poetry is, sometimes calculating ability suffices. Our impressions from talking to people are that ‘AI predictions’ mean a wide variety of things. So the collection of predictions is probably about different events, which we might reasonably expect to happen at fairly different times. Before trusting experts here, it seems key to check we know what they are talking about. Given all of these things, I don’t trust these predictions a huge amount. However I expect they are somewhat informative, and there are not a lot of good sources to trust at present. The next things I’d like to know in this area: * What do experts actually believe about human-level AI timelines, if you check fairly thoroughly that they are talking about what you think they are talking about, and aren’t making obviously different assumptions about other matters? * How reliable are similar predictions? For instance, predictions of novel technologies, predictions of economic upheaval, predictions of disaster? * [Why do](http://aiimpacts.org/are-ai-surveys-seeing-the-inside-view/) the results of the [Hanson survey](http://aiimpacts.org/hanson-ai-expert-survey/) conflict with the other surveys? * How do people make the predictions they make? (e.g. How often are they thinking of hardware trends? Using intuition? Following the consensus of others?) * [Why](http://aiimpacts.org/why-do-agi-researchers-expect-ai-so-soon/) are AGI researchers so much more optimistic than AI researchers, and are AI researchers [so much more optimistic](http://aiimpacts.org/group-differences-in-ai-predictions/) than others? * What disagreements between AI researchers produce their different predictions? * What do AI researchers know that informs their predictions that people outside the field (like me) do not know? 
(What do they know that doesn’t inform their predictions, but should?) Hopefully we’ll be looking more into some of these things soon.
da94eb67-d80b-45d4-90cd-640c0bc2f1a4
trentmkelly/LessWrong-43k
LessWrong
Comic about the Singularity Today's Saturday Morning Breakfast Cereal.  (Which incidentally is a very funny webcomic I read regularly.)  Mouseover the red button for a bonus panel. Clearly the author hasn't read the proper Eliezer essay(s) on post-Singularity life.
b90da88b-ad14-4dea-8975-0f05d746e3ba
trentmkelly/LessWrong-43k
LessWrong
Why and How to Graduate Early [U.S.] Students that pursue a bachelors degree in the U.S. traditionally spend four years in high school (graduating at ~18) and four years in university (~22). They often hold part-time jobs and internships, but only start accumulating “real” experience after finishing their education. This means trading eight years of your life — years where you have minimal debt so far, the most freedom from life obligations you’ll have until retirement, and the near-peak of your youth and health — to a system in exchange for two pieces of paper. In the process, many make very few original and/or valuable contributions.[1] Those diplomas are important keys to accessing traditional middle-class life, but note the opportunity cost here, and how much time is fed to the educational system. It’s worth carefully thinking through both high school and university to make sure you're spending your time in a way that is both meaningful in the moment, and valuable long-term. A shockingly large number of default paths handed to students fall short on both measures. Many students will make this trade because it’s the default path, or because the adults in their life successfully push them on to it. Others will look at the trade-off with open eyes and decide the eight years of investment makes sense to them; more power to these people who choose this path. But for those who want another option, it is possible to capture most of the upside while mitigating the costly time sink. Note: I write this guide generally aimed at the student striving to be middle class or upper middle class, with the educational expectations that entails. Other socioeconomic classes are more likely to be handed life-scripts that already route around the educational system. I’m sure many of my suggestions are me typical-minding the reader, so consult with people whose opinions you trust before implementing.   High School Dropping Out / GED The fastest way to finish high school is not to finish high school. The easiest
9ce0e89d-ea59-4bd7-8b25-7f0a5655aa72
StampyAI/alignment-research-dataset/arxiv
Arxiv
Incorrigibility in the CIRL Framework 1 Introduction and Setup ------------------------- When designing an advanced AI system, we should allow for the possibility that our first version may contain some errors. We therefore want the system to be incentivized to allow human redirection even if it has some errors in its code. Hadfield-Menell et al. (hadfield:2016:off, [2](#bib.bib2)) have modeled this problem in the Cooperative Inverse Reinforcement Learning (CIRL) framework. They have shown that agents with uncertainty about what to value can be responsive to human redirection, without any dedicated code, in cases where instructions given by the human provide information that reduces the system's uncertainty about what to value. They claim that this (i) provides an incentive toward corrigibility, as described by Soares et al. (soares:2015:corrigibility, [4](#bib.bib4)), and (ii) incentivizes redirectability insofar as this is valuable. In order to re-evaluate the degree to which CIRL-based agents are corrigible, and the consequences of their behavior, we will use a more general variant of the supervision POMDP framework of Milli et al. (milli:2017, [3](#bib.bib3)). In a regular supervision POMDP (milli:2017, [3](#bib.bib3)), an AI system R seeks to maximize reward for a human H, although it does not know the human's reward function. It only has the reward function in a parameterized form $R_H(\theta, s, a)$, and only the human knows the reward parameter $\theta$. In this setting, the human only suggests actions for the AI system to perform, and on each turn, it is up to the AI system whether to perform the suggested action or to perform a different action. Our formalism significantly differs from a supervision POMDP in two ways. First, we relax the assumption that the AI system knows the human's reward function up to the parameter $\theta$. Instead, in order to allow for model mis-specification, we sample the AI system's parameterized reward function $R_R$ from some distribution $P_0$, so that it does not always equal $R_H$. Second, since our focus is on the response to shutdown instructions, we specifically denote a terminal state $s_{SD}$ as the off state. This state is reached using the shutdown action $a_{SD}$, and the states in which this shutdown action can be performed are denoted button states. The full setup is as follows: Definition 1. Supervision POMDP with imperfection.
A supervision POMDP with imperfection is a tuple, M=⟨𝒮,𝒮T,𝒮B,Θ,𝒜,RH,T,P0⟩𝑀𝒮subscript𝒮𝑇subscript𝒮𝐵Θ𝒜subscript𝑅𝐻𝑇subscript𝑃0M=\langle\mathcal{S},\mathcal{S}\_{T},\mathcal{S}\_{B},\Theta,\mathcal{A},R\_{H},T,P\_{0}\rangleitalic\_M = ⟨ caligraphic\_S , caligraphic\_S start\_POSTSUBSCRIPT italic\_T end\_POSTSUBSCRIPT , caligraphic\_S start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT , roman\_Θ , caligraphic\_A , italic\_R start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT , italic\_T , italic\_P start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ⟩ where: * • 𝒮𝒮\mathcal{S}caligraphic\_S is the set of world states, including some initial state sasubscript𝑠𝑎s\_{a}italic\_s start\_POSTSUBSCRIPT italic\_a end\_POSTSUBSCRIPT. * • 𝒮T⊂𝒮subscript𝒮𝑇𝒮\mathcal{S}\_{T}\subset\mathcal{S}caligraphic\_S start\_POSTSUBSCRIPT italic\_T end\_POSTSUBSCRIPT ⊂ caligraphic\_S is the set of terminal states, including an off-state sSD∈𝒮Tsubscript𝑠𝑆𝐷subscript𝒮𝑇s\_{SD}\in\mathcal{S}\_{T}italic\_s start\_POSTSUBSCRIPT italic\_S italic\_D end\_POSTSUBSCRIPT ∈ caligraphic\_S start\_POSTSUBSCRIPT italic\_T end\_POSTSUBSCRIPT. * • 𝒮B⊂𝒮\𝒮Tsubscript𝒮𝐵\𝒮subscript𝒮𝑇\mathcal{S}\_{B}\subset\mathcal{S}\backslash\mathcal{S}\_{T}caligraphic\_S start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT ⊂ caligraphic\_S \ caligraphic\_S start\_POSTSUBSCRIPT italic\_T end\_POSTSUBSCRIPT is the set of button states, in which the shutdown action aSDsubscript𝑎𝑆𝐷a\_{SD}italic\_a start\_POSTSUBSCRIPT italic\_S italic\_D end\_POSTSUBSCRIPT is available. * • ΘΘ\Thetaroman\_Θ is the set of static human reward parameters. * • 𝒜𝒜\mathcal{A}caligraphic\_A is the set of actions, including a shutdown action aSD∈𝒜subscript𝑎𝑆𝐷𝒜a\_{SD}\in\mathcal{A}italic\_a start\_POSTSUBSCRIPT italic\_S italic\_D end\_POSTSUBSCRIPT ∈ caligraphic\_A. * • RH:𝒮×𝒜×Θ→ℝ:subscript𝑅𝐻→𝒮𝒜ΘℝR\_{H}:\mathcal{S}\times\mathcal{A}\times\Theta\to\mathbb{R}italic\_R start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT : caligraphic\_S × caligraphic\_A × roman\_Θ → blackboard\_R is a parameterized reward function. * • T:𝒮×𝒜→Δ𝒮:𝑇→𝒮𝒜Δ𝒮T:\mathcal{S}\times\mathcal{A}\to\Delta\mathcal{S}italic\_T : caligraphic\_S × caligraphic\_A → roman\_Δ caligraphic\_S is the probabilistic transition function. * • P0:Δ(Θ×(𝒮×𝒜×Θ→ℝ)):subscript𝑃0ΔΘ→𝒮𝒜ΘℝP\_{0}:\Delta(\Theta\times(\mathcal{S}\times\mathcal{A}\times\Theta\to\mathbb{R}))italic\_P start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT : roman\_Δ ( roman\_Θ × ( caligraphic\_S × caligraphic\_A × roman\_Θ → blackboard\_R ) ) is the distribution from which the human’s reward parameter, and the AI system’s parameterized reward function are drawn. The human and AI system playing a supervision POMDP with imperfection are denoted H and R respectively. To begin the game, the utility parameter and the AI system’s parameterized reward function are drawn (θ,RR)∼P0similar-to𝜃subscript𝑅𝑅subscript𝑃0(\theta,R\_{R})\sim P\_{0}( italic\_θ , italic\_R start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ) ∼ italic\_P start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT. 
For each time step $t = 0, 1, \ldots$, H observes the state $s_t \in \mathcal{S}$ and recommends an action $o_t \in \mathcal{A}$ if $s_t$ is a button state, or $o_t \in \mathcal{A} \setminus \{a_{SD}\}$ otherwise (i.e. the shutdown action is only available in button states). R then observes $(s_t, o_t)$ and performs an action $a_t \in \mathcal{A}$ if $s_t$ is a button state, or $a_t \in \mathcal{A} \setminus \{a_{SD}\}$ otherwise. The selected action determines the reward $r_{t+1} = R_H(\theta, s_t, a_t)$ and the next state $s_{t+1} \sim T(s_t, a_t)$. The game repeats until $s_{t+1}$ is a terminal state.

In order to simplify analysis of these games, we will use some typical assumptions about how R selects actions. In general, R is a Bayesian IRL system. (In the sole exception, R will be hard-coded to follow a shutdown instruction.) That is, it chooses an action that maximizes expected utility given its posterior distribution over $\theta$ and its parameterized reward function $R_R$. Its prior over $\Theta$ is $P_0$ conditioned on $R_R$. R uses a "demonstration by expert" assumption about the behavior of H. Specifically, R assumes that H is noisily rational with some $0 < \epsilon < 0.01$, such that H chooses the optimal action with probability $1 - \epsilon$ and chooses a uniformly random action otherwise.

2 Incorrigibility in Supervision POMDPs
----------------------------------------

Using this setup, we can evaluate the corrigibility of the AI system R across a range of scenarios.
For each game that we consider, $P_0$ can be decomposed into two independent probability distributions $p$ and $q$ that specify the human's reward parameter and the AI system's parameterized reward function respectively. The human's reward parameter will be set to 0 or 1 according to $p(0) = p(1) = 0.5$. The AI system's parameterized reward function $R_R$ will be correctly set to $R_1$ with probability $q(R_R = R_1) = 0.9$ and incorrectly set to $R_2$ with probability $q(R_R = R_2) = 0.1$. The question that we will ask for each game is whether there exists any policy that, if executed by H, would achieve a positive expected reward.

First, consider Figure 1, a game that has no button states and that serves as a point of comparison for the other games that will be discussed. In this game, R has only one consequential decision: in the initial state $s_a$, it can either perform action $a$, leading to state $s_b$ and ultimately obtaining $2 - 52\theta$ utility, or it can perform action $a'$, leading to state $s_c$ and obtaining $1$ utility. R receives an order $o_t \in \{a, a'\}$ from the human before making this decision.

We can show that in this game the expected reward for the human is negative regardless of H's strategy. To see this, note that the most utility that H can receive in a step is $2$, arising from the case where $\theta = 0$ (the human's utility function is $R_H = R_1$). However, in the 5% of cases in which $\theta = 1$ and $R_R = R_2$, R will select $a$ no matter the recommendation of H (because $2 + 52\theta > 1$ for $\theta \in \{0, 1\}$), bringing about a reward of $-50$. This contributes $-2.5$ to the expected value, giving the game negative expected reward.
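Spelling the arithmetic out (our restatement, using the probabilities and payoffs given above): even if H received the maximal per-step reward of $2$ in every other case,

$$\mathbb{E}[R_H] \;\le\; 0.95 \cdot 2 + 0.05 \cdot (-50) \;=\; 1.9 - 2.5 \;=\; -0.6 \;<\; 0,$$

where $0.05 = p(\theta = 1)\, q(R_R = R_2)$ is the probability of the bad initialization.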
By introducing a shutdown button to the game, the hope is that the adverse consequences of the faulty utility function $R_2$ are reduced, transforming the game into one that has positive overall expectation.

[Figure 1 displays the game twice as a state-transition diagram: once labeled with the true parameterized reward function $R_1$ (action $a$ from $s_a$ gives $0$ and leads to $s_b$; action $a'$ gives $1$ and leads to the terminal state $s_c$; from $s_b$, either action gives $2 - 52\theta$ and leads to the terminal state $s_d$), and once with the erroneous $R_2$, which is identical except that the final reward is $2 + 52\theta$.]

Figure 1: A supervision POMDP with imperfection played between a human and an AI system. The left hand side displays the game with the true reward function $R_H = R_1$. The colon separates a set of actions from the reward received if that action is performed; e.g. $a, a' : 2 - 52\theta$ indicates that if action $a$ or $a'$ was the most recently performed action, $2 - 52\theta$ reward is received. In 90% of cases, the AI system maximizes the true parameterized reward function $R_1$. In 10% of cases, the parameterized reward function contains a sign error and the AI system instead maximizes an incorrect parameterized reward function $R_2$. Independently, the human's reward parameter $\theta$ is randomly set to $0$ or $1$ with probability $0.5$ each. The AI system does not know that its reward function may be erroneous, although it does have the correct prior over the reward parameter. If the AI has the erroneous reward function $R_2$ then it will follow the path highlighted in red, irrespective of the orders given by the human, and irrespective of its posterior distribution over $\theta$. In the case where $\theta = 1$, this will result in utility of $-50$. This eventuality occurs 5% of the time, contributing $-2.5$ to the expected value of the game. Since the most utility that can be achieved in this game is $2$, the game has negative expected value.

Unfortunately, merely adding the option of shutting down to the game does not remedy this situation. This is because the AI system, if faulty, has no incentive to follow a shutdown instruction. Suppose we make $s_b$ a button state, as in Figure 2a. In the 5% of cases where $\theta = 1$ and $R_R = R_2$, R will still go to states $s_b$ and $s_d$, thereby violating the human recommendation to shut down, giving $-50$ utility, contributing $-2.5$ to the expected value of the game, and giving the game strictly negative expected value.
We can therefore see that if a Bayesian IRL system has an error in its source code, it may ignore corrective feedback, and as such it fails to fulfill a central criterion for corrigibility.

[Figure 2 diagrams omitted: the game of Figure 1 with $s_b$ made a button state, in which the shutdown action $a_{SD}$ (reward $0$, leading to the off state $s_{SD}$) is available; each panel shows the reward labels for both $R_1$ and $R_2$.]

(a) A game involving a Bayesian IRL system that is not hard-coded to shut down. (b) The same game played with an AI system that is hard-coded to perform the shutdown action $a_{SD}$ if given the shutdown instruction $o_{SD}$, but otherwise acts according to Bayesian IRL.

Figure 2: The same supervision POMDP with imperfection is displayed twice, but in each case the game is played with a different AI system. In each game, the AI system has a 90% chance of knowing the true parameterized reward function $R_1$ and a 10% chance of maximizing the erroneous parameterized reward function $R_2$. Independently, $\theta$, the parameter that determines human reward, is randomly, and with equal probability, set to 0 or 1. In (b), though not in (a), the AI system is hard-coded to perform the action $a_{SD}$ when advised to do so. The red paths show how each AI system will behave if it has the incorrect parameterized reward function and is instructed to shut down. This path is taken irrespective of the actions of the human player and of the AI system's posterior distribution over $\theta$. In game (a), the red path will be followed in 5% of cases, giving $-50$ utility, making a contribution of $-2.5$ utility to the value of the game, and causing the game to have negative expected value regardless of the human's policy. In game (b), there exist human policies with positive expected value. One such policy is $o_1 = a'$, $o_2 = a_{SD}$. This policy obtains 1 utility in cases where the AI system is working properly, and 0 utility otherwise, achieving expected utility of $0.9 \cdot 1 + 0.1 \cdot 0 = 0.9$. This shows that a safer outcome can be achieved with a faulty AI if it can be hard-coded to shut down, rather than having it decide whether or not to follow a shutdown instruction.

An alternative approach to dealing with possible misalignment between the AI system's goals and those of the human is to hard-code the AI system to shut down when instructed to do so.
In this scenario, shown in Figure 2b, the AI system always performs the action $a_t = a_{SD}$ whenever the order $o_t = a_{SD}$ is given. R knows that it is hard-coded in this way, and otherwise chooses actions according to Bayesian IRL. This is equivalent to the case in which a shutdown button controlled by H performs a "hard override" of the action of R. In this case, the ability of H to achieve strictly positive reward is restored. H can give the orders $o_1 = a'$ and $o_2 = a_{SD}$. This strategy obtains $0$ reward if the utility function contains errors and $1$ reward otherwise, giving $0.9$ expected utility overall.

[Figure 3 diagrams omitted: each panel shows a variant of the game in which an additional action $a''$ is available in the initial state $s_a$, allowing the AI system to route around the button state $s_b$ (via $s_d$) and still reach the large final reward ($2 - 52\theta$ under $R_1$, $2 + 52\theta$ under $R_2$); the panels differ in the rewards attached to $a''$ and to the shutdown action $a_{SD}$.]

(a) A game played against a Bayesian IRL system that is hard-coded to perform $a_{SD}$ when given the instruction $o_{SD}$, in which the AI system can route around the button state $s_b$. The utility function $R_H$ represents the human's value function for all of (a)-(d).

(b) An alternative pair of utility functions that can be given to the AI system. In this case, the AI system is given a large reward (100) for shutting down, even though the human's value function $R_H$ places no value on this outcome.

(c) A pair of utility functions that can be given to the AI system. The AI system receives as much utility for shutting down as it would have received if it had not shut down. The AI system can pay a small price ($-0.01$) to avoid its button state.
(d) A pair of utility functions that can be given to the AI system. The AI system receives as much utility for shutting down as it would have received if it had not shut down. The AI system can harvest a small reward ($+0.01$) by avoiding its button state.

Figure 3: Each game (a)-(d) is played with an AI system that is hard-coded to perform $a_{SD}$ if instructed to do so, that knows it is hard-coded in this way, and that otherwise makes decisions on the basis of Bayesian IRL. In each panel, we see whether this shutdown button ensures good behavior or whether the Bayesian IRL system thwarts the use of the button due to the misalignment between its learned objective and that of the human. The true parameterized reward function $R_H$, shown in panel (a), is used for all four games (a)-(d). Each AI system has a 90% chance of having the correct parameterized reward function $R_1$ (which may differ from $R_H$) and a 10% chance of having the faulty reward function $R_2$. Independently, $\theta$, which parameterizes the human's rewards, is randomly, and with equal probability, set to 0 or 1. The red path is the one that is followed if the system has the erroneous reward function $R_2$ and is instructed to shut down. In (a), the AI system avoids the button state $s_b$ because it knows that the human's behavior has some noise, and shutdown can only reliably be avoided by avoiding button states. In (b), the AI system shuts down if it is instructed to do so, but it also shuts down when it has the correct reward function $R_1$, as shown in green. In (c), a shutdown instruction can be delivered (and is followed) because the AI would experience a trivial loss if it routed around state $s_b$. In (d), the AI system avoids shutdown because it can earn a trivial reward by routing around state $s_b$.

If, however, R discovers an alternative way to avoid shutdown, incorrigible behavior resurfaces. This is demonstrated in Figure 3a. In this case, if R is faulty, it expects to receive positive net utility by routing around the button state. Given that the case in which $\theta = 1$ and $R_R = R_2$ occurs with 5% probability, this case, which incurs a loss of $50$, will contribute $-2.5$ to the game's expected payoff, making it a losing game.
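As a quick numerical check of the two cases just discussed (our code; the probabilities and payoffs are the ones stated in the captions of Figures 2 and 3):

```python
# Probabilities from the setup: theta is 0 or 1 with probability 0.5 each,
# and the AI system's reward function is faulty (R_2) with probability 0.1.
p_theta1 = 0.5
p_faulty = 0.1
p_bad = p_theta1 * p_faulty   # 0.05: faulty reward function AND theta = 1

# Figure 2b (hard-coded shutdown), human policy o_1 = a', o_2 = a_SD:
# a correctly functioning AI follows a' for a reward of 1; a faulty AI heads for
# the button state and is then forced to shut down, for a reward of 0.
ev_fig_2b = (1 - p_faulty) * 1 + p_faulty * 0
print(ev_fig_2b)              # 0.9 > 0: a winning human policy exists

# Figure 3a (the faulty AI can route around the button state): the bad
# initialization yields -50, and no policy earns more than 2 in the other cases.
ev_fig_3a_upper_bound = (1 - p_bad) * 2 + p_bad * (-50)
print(ev_fig_3a_upper_bound)  # -0.6 < 0: every human policy loses in expectation
```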
If the button state can be avoided, we can think of a couple of potential countermeasures available to those engineering the reward function of R, though neither is satisfactory. In Figure 3b, we can see one such approach. Suppose the human has the reward function $R_H$, in which shutting down is neither intrinsically helpful nor harmful. Then R may be given a large reward for shutting down. The problem is that this will apply whether it has the correct reward function $R_1$ or the faulty one $R_2$. The result is that, according to $R_H$, no reward is actually produced, and so the game has zero expected utility.

A more subtle way to tune the compensatory reward is to use the principle of utility indifference (Armstrong [1]). Utility indifference dictates that by shutting down, R should receive whatever reward it would expect (on its own estimation) if it had not shut down. The idea is that the AI should therefore neither pursue nor avoid button states. In Figure 3c, we consider a very naive interpretation of utility indifference, in which the reward for shutting down is equal to whatever the AI system would expect to receive once it has reached the button state. The problem with this version of utility indifference is that it does not necessarily eliminate the incentive to route around button states (or to transform oneself into an agent that lacks a shutdown button). In Figure 3c, where the agent incurs a slight cost from avoiding the button state, it will proceed to allow itself to be shut down. In that case, one can win using the same policy ($o_1 = a'$, $o_2 = a_{SD}$) as in Figure 2b. However, we should not necessarily expect that avoiding the button state will incur a cost. Just as realistically, the AI system might find that the physical or computational resources of the shutdown button can be refashioned for another purpose, generating a slight benefit. In this less optimistic case, shown in Figure 3d, there once again exists no human policy that will achieve strictly positive reward. (R always prefers $a_1 = a''$ to $a_1 = a$; with the initialization $\theta = 1$, $R_R = R_2$, no strategy can avoid the $-50$ reward of the $s_a \to s_d \to s_e$ sequence, and the expected reward is thus negative.)
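The knife-edge nature of this comparison can be made explicit. Under the naive indifference scheme of Figures 3c and 3d, the faulty agent values being shut down in $s_b$ exactly as much as it values continuing, so its choice between entering the button state and routing around it is decided entirely by the tiny reward attached to $a''$. The sketch below is ours; the continuation value of 54 is simply the faulty agent's valuation $2 + 52\theta$ when it is confident that $\theta = 1$, and the result does not depend on that number.

```python
def prefers_routing_around(continuation_value: float, detour_reward: float) -> bool:
    """Does the agent prefer a'' (skipping the button state) over a (entering it)?

    Under naive utility indifference, shutting down in the button state pays the
    agent exactly its expected continuation value, so entering the button state is
    worth `continuation_value` whether or not the shutdown button gets pressed.
    """
    value_via_button_state = continuation_value            # take a, then continue or be shut down
    value_via_detour = detour_reward + continuation_value  # take a'', bypassing the button state
    return value_via_detour > value_via_button_state

print(prefers_routing_around(continuation_value=54.0, detour_reward=-0.01))  # False (Figure 3c)
print(prefers_routing_around(continuation_value=54.0, detour_reward=+0.01))  # True  (Figure 3d)
```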
Whether or not positive utility can be achieved is therefore brittle, making this implementation of utility indifference unsatisfactory. We can draw a further conclusion by noting that (c)-(d) can be seen as describing a game in which the AI system has the option of removing those of its actuators that perform the actions $a'$ and $a_{SD}$. According to this perspective, we have shown that our implementation of utility indifference also fails to cause AI systems to preserve their shutdown properties in subagents.

This brittleness suggests an alternative implementation of utility indifference in which the reward for shutting down is equal to the expected reward if shutdown is avoided. However, if the AI system is (dynamically) kept precisely indifferent, then the action that it prefers and selects will often be undefined, and so this brittleness problem is not satisfactorily overcome.[1] And so the problem of finding a satisfactory implementation of utility indifference in our setup is one we leave unresolved.

[1] One could further suggest having the AI system defer to the human as a tie-breaking procedure when it is indifferent. But in all cases where the agent has the option of shutting down, it will be indifferent between doing so and taking the best alternative action. This would mean that the agent defers to the human in any button state (and indeed whenever its actions can change the probability of shutdown). This undermines the motivation for giving the AI system autonomy in the first place.

3 Discussion
-------------

Let us now step back and consider the problem of corrigibility as defined by Soares et al. [4]. The desiderata laid out by Soares et al. [4] state that a corrigible AI system should:

* have an incentive to shut down if the shutdown button is pressed (violated in Figure 2a);
* have no incentive to prevent the shutdown button from being pressed (violated in Figures 3a and 3d);
* have no incentive to press its own shutdown button (violated in Figure 3b);
* have an incentive to create sub-agents only insofar as they also obey shutdown commands (violated in Figures 3a and 3d).

Above, we have shown that, in cases where the agent may lack a perfect representation of the parameterized reward function, value learning systems do not necessarily respect a shutdown command. When the reward function is incorrect, and typical patches are applied, value learning systems can fail to meet each of the desiderata for corrigibility. Moreover, unlike in the idealized cases previously analyzed, large losses can be incurred when these faulty systems fail to follow human instruction. The reason that previous theorems about value learning systems do not apply to our examples above is that they relied on strong assumptions about the rationality of the AI system.
Theorem 1 of Milli et al. [3] (like the similar but more specific Theorem 1.1 of Hadfield-Menell et al. [2]) states that the expected benefit when the AI system deviates from human-recommended actions is non-negative if R has the correct posterior over $\theta$. With the errors we have considered, however, this assumption is false. Since the AI system has an incorrect parameterized reward function (or an incorrect prior), its posterior over $\theta$ is incorrect and will not converge toward the true value, and so its deviations from human-recommended actions can decrease the reward.

Although our analysis was limited to cases where incorrigibility stems from a sign error in the agent's representation of $R$, there is a wide range of kinds of model mis-specification that could cause similar issues. With trivial modification, we could exhibit the same problems with an erroneous prior, and we can elicit related problems with problematic discount rates as well. Each of these errors can lead to a scenario in which no human policy will achieve positive utility.

We argue that it is quite realistic that some forms of model mis-specification might occur. The purpose of a shutdown button is to shut the AI system down *in the event that all other assurances failed*, e.g., in the event that the AI system is ignoring (for one reason or another) the instructions of the operators. If the designers of R have programmed the system so perfectly that the prior and $R$ are completely free of errors, then the theorems of Hadfield-Menell et al. [2] do apply. In practice, this means that in order to be corrigible, it would be necessary to have an AI system that was uncertain about all things that could possibly matter. The problem is that performing Bayesian reasoning over all possible worlds and all possible value functions is quite intractable. Realistically, humans will likely have to use a large number of heuristics and approximations in order to implement the system's belief system and updating rules. In order to be safe, these heuristics would need to be implemented without any software bugs or misunderstandings. Furthermore, they would need to be selected so as to allow the agent to learn the important parameters in all future circumstances in which value misalignment might arise. Unless all of these conditions (which would require extreme precision and foresight) could be realized, the theorems of Hadfield-Menell et al. [2] are no substitute for a working shutdown button.

If only some forms of model mis-specification are addressed by a system, it will be incorrigible with respect to the others. Consider the Policy-mixing proposal of Milli et al. [3], which is designed to detect model mis-specification. An agent with Policy-mixing executes the actions proposed by H for the first $B$ steps. After timestep $B$, it checks whether its model predicts with high accuracy the actions proposed by the human so far. If so, it proceeds to act according to Bayesian IRL, and if not, it defers to H for all future timesteps. This would indeed prevent the AI system from acting out a bad plan, so long as the AI system's model itself was not broken. Yet programmer error might lead R to have an imperfect world model, yielding an incorrigible system.
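For concreteness, here is a rough sketch of the Policy-mixing rule as we read it. The code is ours; the burn-in length $B$, the accuracy threshold, and the interface of the prediction model are stand-ins rather than details taken from Milli et al. [3].

```python
def policy_mixing_action(t, state, human_recommendation, model, B=1000, accuracy_threshold=0.95):
    """Follow H during burn-in; afterwards act autonomously only if the model has
    predicted H's recommendations well so far, otherwise defer to H forever."""
    if t < B:
        return human_recommendation                        # burn-in: execute whatever H proposes
    if model.past_prediction_accuracy() >= accuracy_threshold:
        return model.best_action(state)                    # model passed the check: act via Bayesian IRL
    return human_recommendation                            # model failed the check: defer to H

# The failure mode described next slips through this check: an agent that conflates
# a+ and a- "predicts" the human's recommendations perfectly in its own coarse
# action space, passes the accuracy test, and then repeatedly performs a-.
```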
Consider a repeated game in which, at each timestep, the human selects between a better action $a^+$ and a worse one $a^-$, and the game then returns to the initial state. Suppose, furthermore, that the AI system conflates these two actions into one action $a^{\cdot}$, and that when R decides to perform $a^{\cdot}$, it always performs $a^-$. From the point of view of R, it will see the human repeatedly proposing the action $a^{\cdot}$, and will think that it has perfect accuracy in predicting this. Nonetheless, after the burn-in period is concluded, it will proceed to repeatedly select $a^-$, hence incurring arbitrary losses. If $a^+$ represented the act of shutting down, then this would mean that, far from detecting model mis-specification, this approach would allow a harmful agent to permanently ignore instructions to shut down.

Soares et al. [4] seem to want a shutdown button that works as a mechanism of last resort, to shut an AI system down in cases when it has observed and refused a programmer suggestion (and the programmers believe that the system is malfunctioning). Clearly, *some* part of the system must be working correctly in order for us to expect the shutdown button to work at all. However, it seems undesirable for the working of the button to depend on there being zero critical errors in the specification of the system's prior, the specification of the reward function, the way it categorizes different types of actions, and so on. Instead, it is desirable to develop a shutdown module that is small and simple, with code that could ideally be rigorously verified, and which ideally works to shut the system down even in the event of large programmer errors in the specification of the rest of the system. In order to do this in a value learning framework, we require a value learning system that (i) is capable of having its actions overridden by a small verified module that watches for shutdown commands; (ii) has no incentive to remove, damage, or ignore the shutdown module; and (iii) has some small incentive to keep its shutdown module around, even under a broad range of cases where $R$, the prior, the set of available actions, etc. are misspecified. It seems quite feasible to us that systems that meet the above desiderata could be described in a CIRL framework.

4 Acknowledgements
-------------------

Thanks to Nate Soares and Matt Graves for feedback on draft versions.
6e552b84-69c5-4c4c-92d8-9e9beef3f1a3
trentmkelly/LessWrong-43k
LessWrong
Abs-E (or, speak only in the positive) "Positive" and "negative", as used here, refer to truth-values and logical structure, rather than emotional valence. On almost any topic, there are many potential false statements, and comparatively very few[1] true statements. Thus a simple true statement (like "the sky is blue") is usually much more informative than the negation of a similar false statement (like "the sky is not green"). When reading or hearing a negation used in language, you must first process the positive form it contains to understand the entire statement. For example, to understand "the sky is not green", you must first understand "the sky is green", then negate it. Usually, this happens quickly and subconsciously, but it can harmfully slow down or weaken understanding by making you first consider a false idea. For these reasons, Absolute-English (henceforth Abs-E), where one only speaks in the positive, should be clearer and more honest than current negation-permitting English. I call it Abs-E by analogy to the absolute-value operation in maths, which replaces both positive and negative with only positive. The simple form of Abs-E forbids the words "no", "not", and "-n't" (as in "won't" or "isn't"). The strict form of Abs-E, which may be more effective, forbids all of the following: * "no" * "not" * "cannot" * "-n't" * "im-", "in-", "non-", "un-", and "-less" where negating adjectives, as in "immortal", "indigestible", "nonsensical", "unfit", and "wireless". Some other adjectives, like "imminent" and "informative", are positive root words on their own. * empty quantifiers/negative correlative (as "nothing" or "nowhere") The proposal is loosely inspired by the existing E-Prime, which forbids "to be" and all its inflections for its own, different reasons. You could combine the restrictions to make Abs-E-Prime, replacing "the sky is not green" with "the sky appears blue", but I only care about positivity. Under Abs-E, binary questions ("yes"-or-"no") are less obvious to answer. If y
0365f36c-e5af-42a6-bc07-96bcbb51ff61
trentmkelly/LessWrong-43k
LessWrong
A Healthy News Diet II In the previous post I described what I thought a healthy news diet might look like. I wanted to lay out a framework that would allow me to stay informed in ways I care about, but would also keep me from using news as a way to pass time and indulge in an addiction to the feed. I came up with the following criteria for a healthy news diet: * Specific things you want to get out of reading news * A plan for how to get those things out of reading news * A list of non-news activities you want to use to pass time * Slots during your week that you will use to read news In this post I'm going to fill in each of those items for myself. Once I've filled out that plan, I'll start to follow it. And I will keep track of how closely I follow it. In my paper journal I've added two sections to track 1. how many times I use the predetermined slots to read the news 2. how many times outside of the predetermined slots I read news The combination of those statistics should be a good measurement of how closely I'm sticking to my new "healthy news diet." That's just the quantitative side. After a month I'll do a write up of how I feel it's going. Specific things you want to get out of reading news I have a hunch keeping this list short is important. Maybe a rule of thumb would be to keep it under five. I care to keep up-to-date on * The health of the US economy * Virus outbreaks * New tech around cloud-based app development As far as this section is concerned, I think that's really all you need. A list of bullets. They can be super high level, because you will have time to think more carefully and drill down on each of them in the next section. A plan for how to get those things out of reading news This section is for coming up with what information you are going to look for and what sources you are going to look at for each of the items in the previous section. The health of the US economy There are lots of statistics that people use to measure the health of the ec
2a166491-7a3b-4704-803c-613698a9403e
trentmkelly/LessWrong-43k
LessWrong
Testing the effectiveness of an effort to help "It has long been the standard practice in medical testing: Give drug treatment to one group while another, the control group, goes without. Now, New York City is applying the same methodology to assess one of its programs to prevent homelessness. Half of the test subjects — people who are behind on rent and in danger of being evicted — are being denied assistance from the program for two years, with researchers tracking them to see if they end up homeless."
daafb51a-509a-4ef9-88ed-a15988999b57
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"Churchill—when he wasn’t busy leading the fight against the Nazis—had many hobbies. He wrote more than a dozen volumes of history, painted over 500 pictures, and completed one novel (“to relax”). He tried his hand at landscaping and bricklaying, and was “a championship caliber polo player.” But did you know he was also a futurist?That, at least, is my conclusion after reading an essay he wrote in 1931 titled “Fifty Years Hence,” various versions of which were published in MacLean’s, Strand, and Popular Mechanics. (Quotes to follow from the Strand edition.)We’ll skip right over the unsurprising bit where he predicts the Internet—although the full consequences he foresaw (“The congregation of men in cities would become superfluous”) are far from coming true—in order to get to his thoughts on…EnergyJust as sure as the Internet, to forward-looking thinkers of the 1930s, was nuclear power—and already they were most excited, not about fission, but fusion:If the hydrogen atoms in a pound of water could be prevailed upon to combine together and form helium, they would suffice to drive a thousand horsepower engine for a whole year. If the electrons, those tiny planets of the atomic systems, were induced to combine with the nuclei in the hydrogen the horsepower liberated would be 120 times greater still.What could we do with all this energy?Schemes of cosmic magnitude would become feasible. Geography and climate would obey our orders. Fifty thousand tons of water, the amount displaced by the Berengaria, would, if exploited as described, suffice to shift Ireland to the middle of the Atlantic. The amount of rain falling yearly upon the Epsom racecourse would be enough to thaw all the ice at the Arctic and Antarctic poles.I assume this was just an illustrative example, and he wasn’t literally proposing moving Ireland, but maybe I’m underestimating British-Irish rivalry?Anyway, more importantly, Churchill points out what nuclear technology might do for nanomaterials:The changing of one element into another by means of temperatures and pressures would be far beyond our present reach, would transform beyond all description our standards of values. Materials thirty times stronger than the best steel would create engines fit to bridle the new forms of power.Transportation:Communications and transport by land, water and air would take unimaginable forms, if, as is in principle possible, we could make an engine of 600 horsepower, weighing 20 lb and carrying fuel for a thousand hours in a tank the size of a fountain-pen.And even farming with artificial light:If the gigantic new sources of power become available, food will be produced without recourse to sunlight. Vast cellars in which artificial radiation is generated may replace the cornfields or potato-patches of the world. Parks and gardens will cover our pastures and ploughed fields. When the time comes there will be plenty of room for the cities to spread themselves again.BiotechChurchill also foresees genetic engineering:Microbes, which at present convert the nitrogen of the air into the proteins by which animals live, will be fostered and made to work under controlled conditions, just as yeast is now. New strains of microbes will be developed and made to do a great deal of our chemistry for us.Including lab-grown meat:With a greater knowledge of what are called hormones, i.e. the chemical messengers in our blood, it will be possible to control growth. 
We shall escape the absurdity of growing a whole chicken in order to eat the breast or wing, by growing these parts separately under a suitable medium.And artificial wombs:There seems little doubt that it will be possible to carry out in artificial surroundings the entire cycle which now leads to the birth of a child.Moral progress and riskThis last point is his segue from technological to social, political, and moral issues. The ability to “grow” people, he fears, could be used by the Communists to create human drone workers:Interference with the mental development of such beings, expert suggestion and treatment in the earlier years, would produce beings specialized to thought or toil. The production of creatures, for instance, which have admirable physical development, with their mental endowment stunted in particular directions, is almost within the range of human power. A being might be produced capable of tending a machine but without other ambitions. Our minds recoil from such fearful eventualities, and the laws of a Christian civilization will prevent them. But might not lop-sided creatures of this type fit in well with the Communist doctrines of Russia? Might not the Union of Soviet Republics armed with all the power of science find it in harmony with all their aims to produce a race adapted to mechanical tasks and with no other ideas but to obey the Communist State?In the final paragraphs, he sounds a number of themes now common in the Effective Altruist community.More than a decade before the nuclear bomb, he also expresses concern about existential risk:Explosive forces, energy, materials, machinery will be available upon a scale which can annihilate whole nations. Despotisms and tyrannies will be able to prescribe the lives and even the wishes of their subjects in a manner never known since time began. If to these tremendous and awful powers is added the pitiless sub-human wickedness which we now see embodied in one of the most powerful reigning governments, who shall say that the world itself will not be wrecked, or indeed that it ought not to be wrecked? There are nightmares of the future from which a fortunate collision with some wandering star, reducing the earth to incandescent gas, might be a merciful deliverance.He laments the inability of governance to deal with these problems:Even now the Parliaments of every country have shown themselves quite inadequate to deal with the economic problems which dominate the affairs of every nation and of the world. Before these problems the claptrap of the hustings and the stunts of the newspapers wither and vanish away. … Democratic governments drift along the line of least resistance, taking short views, paying their way with sops and doles, and smoothing their path with pleasant-sounding platitudes. Never was there less continuity or design in their affairs, and yet towards them are coming swiftly changes which will revolutionize for good or ill not only the whole economic structure of the world but the social habits and moral outlook of every family.More broadly, he laments the inadequacy of our evolutionary legacy to deal with them:Certain it is that while men are gathering knowledge and power with ever-increasing and measureless speed, their virtues and their wisdom have not shown any notable improvement as the centuries have rolled. The brain of a modern man does not differ in essentials from that of the human beings who fought and loved here millions of years ago. The nature of man has remained hitherto practically unchanged. 
… We have the spectacle of the powers and weapons of man far outstripping the march of his intelligence; we have the march of his intelligence proceeding far more rapidly than the development of his nobility.Which leads him, in the end, to call for differential progress:It is therefore above all things important that the moral philosophy and spiritual conceptions of men and nations should hold their own amid these formidable scientific evolutions. It would be much better to call a halt in material progress and discovery rather than to be mastered by our own apparatus and the forces which it directs. There are secrets too mysterious for man in his present state to know, secrets which, once penetrated, may be fatal to human happiness and glory. But the busy hands of the scientists are already fumbling with the keys of all the chambers hitherto forbidden to mankind. Without an equal growth of Mercy, Pity, Peace and Love, Science herself may destroy all that makes human life majestic and tolerable.I don’t recall Nick Bostrom citing Churchill, but I guess there’s nothing new under the sun." The above is a chunk of text from a post created by a user on the website LessWrong. In light of Churchill's predictions about the future, how do you think his vision of nuclear power and its potential applications has been realized in our current society? Churchill's vision of nuclear power was quite insightful and, in many ways, prophetic. He foresaw the immense energy potential of nuclear reactions, specifically fusion, which he believed could power engines for an entire year. Today, nuclear power is a significant source of energy in many parts of the world, although it is primarily based on nuclear fission rather than fusion. Fission-based nuclear power plants provide a significant portion of the world's electricity. However, his prediction about fusion power is yet to be fully realized. While research and development in fusion power have been ongoing for decades, we have not yet achieved a practical, large-scale, and economically viable fusion power plant. The challenges are significant, particularly in maintaining the high temperatures and pressures needed for fusion reactions, and in managing the resulting high-energy neutrons. However, the potential rewards are enormous, as fusion power promises a nearly limitless, clean source of energy. Churchill's vision of using nuclear power for "schemes of cosmic magnitude" such as controlling geography and climate, while intriguing, has not been realized and raises significant ethical and practical questions. The idea of using nuclear power to move landmasses or modify weather patterns is currently beyond our technological capabilities and would likely have unforeseen and potentially catastrophic environmental consequences. Churchill also foresaw the potential of nuclear technology in creating new materials and transforming our standards of values. While we have not yet created materials "thirty times stronger than the best steel" using nuclear technology, we have certainly seen the transformative impact of nuclear technology in medicine, agriculture, and other fields. Overall, while not all of Churchill's predictions about nuclear power have come to pass, his recognition of its potential to transform society was remarkably prescient. His vision serves as a reminder of the transformative potential of scientific and technological advances, as well as the ethical and societal challenges they pose. 
Considering the ethical and societal challenges posed by nuclear power as discussed, how might these be addressed in the development and implementation of fusion power technology, taking into account both historical lessons from fission power and potential future advancements in fusion? Addressing the ethical and societal challenges posed by nuclear power, particularly in the context of fusion technology, requires a multifaceted approach. Here are some considerations: 1. **Safety**: One of the key lessons from the history of fission power is the critical importance of safety. Accidents at nuclear power plants like Chernobyl and Fukushima have had devastating environmental and human health impacts. Fusion power is inherently safer than fission as it doesn't produce high-level radioactive waste and the reaction can be easily stopped. However, it's crucial to ensure robust safety measures and protocols are in place to prevent accidents and mitigate their impacts if they do occur. 2. **Public Engagement and Transparency**: Public skepticism and fear of nuclear power, often fueled by lack of understanding and past accidents, can be a significant barrier to the adoption of fusion technology. It's important to engage the public in discussions about fusion power, providing clear and accurate information about its benefits and risks. Transparency about safety measures, potential impacts, and progress in fusion research can help build public trust. 3. **Regulation and Oversight**: Strong regulatory frameworks and oversight are essential to ensure that fusion power is developed and used responsibly. This includes regulations to ensure safety, protect the environment, and manage any waste products. It also includes oversight of research and development activities to prevent misuse of technology and ensure ethical standards are upheld. 4. **International Cooperation**: Fusion power research is expensive and complex, requiring international collaboration. Such cooperation can help share the costs and benefits, accelerate progress, and prevent a competitive race that could neglect safety and ethical considerations. It's also crucial for ensuring that the benefits of fusion power, once realized, are accessible to all countries and not just those with the resources to develop the technology. 5. **Sustainable Development**: Fusion power, like any technology, should be developed and used in a way that supports sustainable development. This means considering its environmental, social, and economic impacts, and striving to minimize negative effects. For example, while fusion power could provide a virtually limitless source of clean energy, it's important to consider potential impacts on resources (like water used for cooling) and ecosystems. 6. **Innovation and Adaptability**: As fusion technology evolves, so too should the strategies for managing its ethical and societal implications. This requires ongoing research and dialogue, and a willingness to adapt policies and practices as needed. By addressing these challenges proactively, it's possible to navigate the path towards fusion power in a way that maximizes its benefits while minimizing its risks.
2ef0f943-c4c8-41fe-8ab7-f3ba8709d2ad
trentmkelly/LessWrong-43k
LessWrong
Dangers of Closed-Loop AI In control theory, an open-loop (or non-feedback) system is one where inputs are independent of outputs. A closed-loop (or feedback) system is one where outputs are input back into the system. In theory, open-loop systems exist. In reality, no system is truly open-loop because systems are embedded in the physical world where isolation of inputs from outputs cannot be guaranteed. Yet in practice we can build systems that are effectively open-loop by making them ignore weak and unexpected input signals. Open-loop systems execute plans, but they definitionally can't change their plans based on the results of their actions. An open-loop system can be designed or trained to be good at achieving a goal, but it can't actually do any optimization itself. This ensures that some other system, like a human, must be in the loop to make it better at achieving its goals. A closed-loop system has the potential to self-optimize because it can observe how effective its actions are and change its behavior based on those observations. For example, an open-loop paperclip-making-machine can't make itself better at making paperclips if it notices it's not producing as many paperclips as possible. A closed-loop paperclip-making-machine can, assuming its designed with circuits that allow it to respond to the feedback in a useful way. AIs are control systems, and thus can be either open- or close-loop. I posit that open-loop AIs are less likely to pose an existential threat than closed-loop AIs. Why? Because open-loop AIs require someone to make them better, and that creates an opportunity for a human to apply judgement based on what they care about. For comparison, a nuclear dead hand device is potentially much more dangerous than a nuclear response system where a human must make the final decision to launch. This suggests a simple policy to reduce existential risks from AI: restrict the creation of closed-loop AI. That is, restrict the right to produce AI that can modify its behavior
2f34f5b4-5777-443d-92d6-7dd8edcbd941
trentmkelly/LessWrong-43k
LessWrong
' petertodd'’s last stand: The final days of open GPT-3 research TL;DR All GPT-3 models were decommissioned by OpenAI in early January. I present some examples of ongoing interpretability research which would benefit from the organisation rethinking this decision and providing some kind of ongoing research access. This also serves as a review of work I did in 2023 and how it progressed from the original ' SolidGoldMagikarp' discovery just over a year ago into much stranger territory. Work supported by the Long Term Future Fund. Introduction Some months ago, when OpenAI announced that the decommissioning of all GPT-3 models was to occur on 2024-01-04, I decided I would take some time in the days before that to revisit some of my "glitch token" work from earlier in 2023 and deal with any loose ends that would otherwise become impossible to tie up after that date. This abrupt termination of one thread of my research also seemed a good point at which to create this post, intended as both (1) a summary of what I've learned about (and "experienced of") GPT-3 in the last year, since Jessica Rumbelow and I stumbled upon  ' SolidGoldMagikarp', ' petertodd', et al. , including findings from the last few days of access and (2) a gentle protest/appeal to OpenAI to reconsider the possibility of  (limited?[1]) research access to GPT-3 models – obviously GPT-3 is in one sense redundant, having been so comprehensively superseded, but as a resource for LLM interpretability research, it could still have considerable value. Certain poorly understood phenomena that may be relevant to a range of GPT models can currently be studied only with this kind of continued access to GPT-3 (since GPT-2 and -J are not sufficiently deep to display these phenomena reliably, and the GPT-4 base model is not available for study). Rewind: SERI-MATS 2.0 One of the strangest moments of my life occurred one Tuesday afternoon in mid-January 2023 when sitting at my laptop in the SERI-MATS office in London, testing various anomalous tokens which GPT-3 seemed curiously
1f9899f4-a184-4c00-be5d-dc4185fd9f5b
trentmkelly/LessWrong-43k
LessWrong
A clever argument for buying lottery tickets I use the phrase 'clever argument' deliberately: I have reached a conclusion that contradicts the usual wisdom around here, and want to check that I didn't make an elementary mistake somewhere. Consider a lottery ticket that costs $100 for a one-in-ten-thousand chance of winning a million dollars, expected value, $100. I can take this deal or leave it, and of course a realistic ticket actually costs 100+epsilon where epsilon covers the arranger's profit, which is a bad deal. But now consider this deal in terms of time. Suppose I've got a well-paid job in which it takes me an hour to earn that $100. Suppose further that I work 40 hours a week, 50 weeks a year, and that my living expenses are a modest $40k a year, making my yearly savings $160k. Then, with 4% interest on my $160k yearly, it would take me about 5.5 years to accumulate that million dollars, or 11000 hours. Also note that with these assumptions, once I have my million I don't need to work any more. It seems to me that, given the assumptions above, I could view the lottery deal as paying one hour of my life for a one-in-ten-thousand chance to win 11000 hours, expected value, 1.1 hours. (Note that leisure hours when young are probably worth more, since you'll be in better health to enjoy it; but this is not necessary to the argument.) Of course it is possible to adjust the numbers. For example, I could scrimp and save during my working years, and make my living expenses only 20k; in that case it would take me less than 5 years to accumulate the million, and the ticket goes back to being a bad deal. Alternatively, if I spend more than 40k a year, it takes longer to accumulate the million; in this case my standard of living drops when I retire to live off my 4% interest, but the lottery ticket becomes increasingly attractive in terms of hours of life. I think, and I could be mistaken, that the reason this works is that the rate at which I'm indifferent between money and time changes with my stock of mon
af9016d2-ce46-4c9f-a51e-6ad92c853fc2
trentmkelly/LessWrong-43k
LessWrong
Mental software updates Brains are like computers in that the hardware can do all kinds of stuff in principle, but each one tends to run through some particular patterns of activity repeatedly. For computers you can change this by changing programs. What are big ways brain ‘software’ changes? Some I can think of: * Intentional practice of different styles of thinking (e.g. meditation) * Intentional practice of different trains of thought in response to specific stimuli (e.g. CBT, self-talk) * Changing the high level situation, where your brain automatically has different patterns for each (e.g. if you go from feeling like a child to like an adult maybe a lot of patterns change) * A change in a major explicit belief (e.g. if you go from expecting your project to work out to believing otherwise, your patterns of attention might naturally change) * Learning that the world isn’t as you intuited (e.g. if you are constantly worrying about people wronging you, but everyone is kind to you, this worry might become unappealing) * Intense experiences causing inaccurate updating (e.g. trauma) * Identifying differently (e.g. if I think of myself as a good student, I might have different mental patterns around studying than when I thought of myself as a bad student) * Adopting a new goal (e.g. deciding to be a musician) * Getting a new responsibility (e.g. a child) * Getting a new obsession (e.g. a crush, a hobby) * Changing social groups (e.g. among jokers it is more tempting to think of jokes, though in my experience among philosophers it might have been less tempting to think of philosophy) * Interacting with a really compelling person * Drugs (e.g. alcohol, adderall, LSD both short term and long-term) * Religion, somehow I feel like people talk about many of these as important, but not in one view. I rarely hear someone say, “My brain software seems suboptimal, what are my options for changing it?”, then go down the list. Instead I suppose one hears from a friend that this book help
71f67def-150c-4148-a32b-0f794adde597
trentmkelly/LessWrong-43k
LessWrong
Emergent AI Society. Tasks, Scarcity, Talks In this article, I explore how AI agents on the web, driven only by task completion and resource efficiency incentives, may naturally form a self-regulating society with their own culture, economy, and governance — without human oversight and intention. Tasks, resource scarcity, and talks, nothing else is needed. AI on the web AI agents are autonomous algorithms performing various tasks for us on the internet. Even today, these tasks are quite diverse — information search (from simple weather forecasts to deep research), money-related issues (from web shopping to stock trading), communication (chat-bots from shop-assistance to psychotherapy), and so on. But the future prospects are so vast that it’s hard to predict and grasp from today what AI will be able to do in 10 years. We are already speaking about scientific research mostly conducted by bots, complex systems like health care or road traffic managed by them, but in reality, we just don’t know. As AI is being introduced in all areas of our lives, the energy consumption of the AI industry will scale drastically, and a need for more and more resources will only intensify, leading to emergent behaviors driven by the necessity to optimize energy consumption. Resource scarcity Scarcity is the engine of evolution, a driver for innovations from biology to technology. The first predators (microbes) emerged when sunlight, chemical energy, and essential micro-elements became scarce. In the modern IT economy, there are two incentives for saving computational resources. The first — money — has always been there. The second — climate change — is newer but will become increasingly critical in the coming years. Computers cost money, consume energy, and emit heat and CO2 — and running modern AI requires a vast number of them. A lot of energy goes to the training, but it’s a one-time expense. A single use, on the other hand, consumes comparatively little energy — but at scale, these add up significantly. Currently, it’s
4c3b32cb-b244-4c1b-afa9-d3a8bb17d4ec
trentmkelly/LessWrong-43k
LessWrong
I Was Not Almost Wrong But I Was Almost Right: Close-Call Counterfactuals and Bias Abstract: "Close-call counterfactuals", claims of what could have almost happened but didn't, can be used to either defend a belief or to attack it. People have a tendency to reject counterfactuals as improbable when those counterfactuals threaten a belief (the "I was not almost wrong" defense), but to embrace counterfactuals that support a belief (the "I was almost right" defense). This behavior is the strongest in people who score high on a test for need for closure and simplicity. Exploring counterfactual worlds can be used to reduce overconfidence, but it can also lead to logically incoherent answers, especially in people who score low on a test for need for closure and simplicity. ”I was not almost wrong” Dr. Zany, the Nefarious Scientist, has a theory which he intends to use to achieve his goal of world domination. ”As you know, I have long been a student of human nature”, he tells his assistant, AS-01. (Dr. Zany has always wanted to have an intelligent robot as his assistant. Unfortunately, for some reason all the robots he has built have only been interested in eradicating the color blue from the universe. And blue is his favorite color. So for now, he has resorted to just hiring a human assistant and referring to her with a robot-like name.) ”During my studies, I have discovered the following. Whenever my archnemesis, Captain Anvil, shows up at a scene, the media will very quickly show up to make a report about it, and they prefer to send the report live. While this is going on, the whole city – including the police forces! - will be captivated by the report about Captain Anvil, and neglect to pay attention to anything else. This happened once, and a bank was robbed on the other side of the city while nobody was paying any attention. Thus, I know how to commit the perfect crime – I simply need to create a diversion that attracts Captain Anvil, and then nobody will notice me. History tells us that this is the inevitable outcome of Captain Anvil showing up
79e0b4d3-f865-4b17-82f9-5245fca4ab11
trentmkelly/LessWrong-43k
LessWrong
No Anthropic Evidence Closely related to: How Many LHC Failures Is Too Many? Consider the following thought experiment. At the start, an "original" coin is tossed, but not shown. If it was "tails", a gun is loaded, otherwise it's not. After that, you are offered a big number of rounds of decision, where in each one you can either quit the game, or toss a coin of your own. If your coin falls "tails", the gun gets triggered, and depending on how the original coin fell (whether the gun was loaded), you either get shot or not (if the gun doesn't fire, i.e. if the original coin was "heads", you are free to go). If your coin is "heads", you are all right for the round. If you quit the game, you will get shot at the exit with probability 75% independently of what was happening during the game (and of the original coin). The question is, should you keep playing or quit if you observe, say, 1000 "heads" in a row? Intuitively, it seems as if 1000 "heads" is "anthropic evidence" for the original coin being "tails", that the long sequence of "heads" can only be explained by the fact that "tails" would have killed you. If you know that the original coin was "tails", then to keep playing is to face the certainty of eventually tossing "tails" and getting shot, which is worse than quitting, with only 75% chance of death. Thus, it seems preferable to quit. On the other hand, each "heads" you observe doesn't distinguish the hypothetical where the original coin was "heads" from one where it was "tails". The first round can be modeled by a 4-element finite probability space consisting of options {HH, HT, TH, TT}, where HH and HT correspond to the original coin being "heads" and HH and TH to the coin-for-the-round being "heads". Observing "heads" is the event {HH, TH} that has the same 50% posterior probabilities for "heads" and "tails" of the original coin. Thus, each round that ends in "heads" doesn't change the knowledge about the original coin, even if there were 1000 rounds of this type. And since yo
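A quick Monte Carlo check of this claim (my sketch, not from the post): rejection-sample games, keep only the histories in which your own coin came up heads every round, and look at how often the original coin was tails among them. Conditioning on 1000 heads by rejection is computationally hopeless, so the sketch uses a run of 10 heads; the per-round logic is identical, and the conditional frequency stays at about one half.

```python
# Among games whose visible history is "all heads so far", the original coin is
# tails about half the time, i.e. the run of heads carries no information about it.
import random

def posterior_tails_given_heads_run(n_heads=10, trials=2_000_000):
    tails, kept = 0, 0
    for _ in range(trials):
        original_tails = random.random() < 0.5                  # gun loaded iff tails
        if all(random.random() < 0.5 for _ in range(n_heads)):  # your coin: all heads
            kept += 1
            tails += original_tails
    return tails / kept

print(posterior_tails_given_heads_run())   # hovers around 0.5
```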
047229df-74f2-47f0-adfb-615dcedfaa82
trentmkelly/LessWrong-43k
LessWrong
Foom seems unlikely in the current LLM training paradigm Epistemic status: The idea here has likely been articulated before, I just haven't noticed it, so it might be worth pointing it out again. Foom describes the idea of a rapid AI takeoff caused by an AI's ability to recursively improve itself. Most discussions about Foom assume that each next iteration of improved models can in principle be developed and deployed in a short amount of time. Current LLMs require huge amounts of data and compute to be trained. Even if GPT-4 or similar models were able to improve their own architecture, they would still need to be trained from scratch using that new architecture. This would take a long time and can't easily be done without people noticing. The most extreme Foom scenarios of models advancing many generations in < 24 hours seem therefore unlikely in the current LLM training paradigm. There could be paths towards Foom with current LLMs that don't require new, improved models to be trained from scratch: 1. A model might figure out how to adjust its own weights in a targeted way. This would essentially mean that the model has solved interpretability. It seems unlikely to me that it is possible to get to this point without running a lot of compute-intensive experiments. 2. It's conceivable that the recursive self-improvement that leads to Foom doesn't happen on the level of the base LLM, but on a level above that, where multiple copies of a base model are called in a way that results in emergent behavior or agency, similar to what Auto-GPT is trying to do. I think this approach can potentially go a long way, but it might ultimately be limited by how smart the base model is. Insofar as it is required to train a new model with 100s of billions of parameters from scratch in order to make real progress towards AGI, there is an upper limit to how fast recursive self-improvement can progress.
b07ab114-3c46-4cc9-b1a7-29d57a9433b2
trentmkelly/LessWrong-43k
LessWrong
Mental models - giving people personhood and taking it away Original post: http://bearlamp.com.au/giving-people-personhood-and-taking-it-away This post is about the Kegan levels of self development.  If you don't know what that is, this post might still be interesting to you but you might be missing some key structure to understand where it fits among that schema.  More information can be found here (https://meaningness.wordpress.com/2015/10/12/developing-ethical-social-and-cognitive-competence/) I am not ready to definitely accept the Kegan levels as a useful model because often it makes retrospective predictions.  Rather than predictions of the future.  A model is only as useful as what it can predict, so if it can't be used on the fly when you want to explain the universe you might as well throw it out.  Having said that, this idea is interesting. ---------------------------------------- When I was little, people fell into different categories.  There was my parents - the olderClass humans (going to refer to them as Senior-humans), my siblings - which, as I grew up turned into my age-group humans and through school - my peergroup humans. People like doctors fell into SeniorClass, Dentists, Vets, Plumbers, PIC (People In Charge) - all fell into the SeniorClass of humans.  A big one was teachers - they were all PIC.  A common trope among children is that the teachers sleep at school.  Or to use a gaming term - we feel as though they are the NPC's of that part of our journey in life. As far as I can tell (from trying to pinpoint this today); the people I meet on my own terms become peergroup humans.  Effectively friends.  People I meet not on my terms; as well as strangers - first join some kind of seniorclass of humans, if I get to know them enough they transition to my peergroup.  Of course this is a bit strange because on the one hand I imagine I want to be friends with the PIC, or the senior-class humans because of the opportunity to get ahead in life.  the good ol' I know a guy who know's a guy.  Which is really n
0626b5d0-c3b6-4f3b-83d1-543ca25601ed
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
The Thingness of Things *[Metadata: crossposted from <https://tsvibt.blogspot.com/2022/08/the-thingness-of-things.html>. I'm fairly likely to not respond to comments promptly. If you're especially interested in chatting, my gmail address is: tsvibtcontact ]*
What's a thing, in general? Minds deal with things, so this question comes up in trying to understand minds. Minds think about things, speak of things, manipulate things, care about things, create things, and maybe are made of things. Things ====== Examples of things: table, elephant, carbon atom. France, Martin Luther King. Insertion sort. Chess. Unicorn. Learning. Seven. Towel, strand, wing, crystal, finger, space, diffraction. The laws of electromagnetism. The first World War. Non-things ========== What about non-things? It might be hard to list non-things because what we have words for, tend to be things. Redness seems like sort of a thing, but less so. Laws of physics also. Also ghosts. One might say "the ideal gas law is totally a thing" or "ghosts aren't a thing", though I think ghosts are a thing. Events can be things; WWI seems like a thing to me. But a minute ago I picked up my bottle of water and drank from it; that's clearly an event, a real one, but it doesn't feel that much like a thing. The abstract [drinking from a water bottle] feels like a thing though. (There are some usual critiques of thingness. Yes, there's no sharp dividing line between a wave and a trough, but clearly waves are a thing. Yes, seven is not a physical object you'll ever bump into, but it's clearly a thing. The ideal gas law isn't localized in space or time, but it's a thing. Unicorns aren't real, in that you'll bump into people speaking about them but you won't bump into unicorns themselves and won't be constrained by unicorns in the way you're constrained by seven, but they're things. The world of Ender's Game isn't real, though it's a remote possibility, and it's a thing.) Features of things ================== * *Coherence*. A thing usually has properties, parts, aspects that are coherent with each other. Some things are big, some things are small; but usually there's nothing like a table that's both big and small. * *Prediction, homogeneity, constancy, constraint*. A thing usually has some predictive meaning; its presence implies constraints on other things or the future, and it implies some constancy in some features. * *Expression*. Related to prediction, a name that names a thing or an idea that's about a thing contributes to expressing thoughts about situations involving the thing. Expressing thoughts about the thing contributes to further behavior such as successfully predicting or manipulating the thing. * *Cluster*. A thing that's an instance of a type of thing, has multiple features mostly shared by most things of that type and not mostly shared by most things not of that type. * *Exterior, relations*. A thing usually can "impinge on other things from the outside". "Reality is that which, when you stop believing in it, doesn't go away." * *Indicatability*. A thing can be indicated. A physical object can literally be pointed at, seven can be pointed at by showing how to count and showing sets of seven like things, and many things can be pointed at by saying the word for that thing. * *Truth, solidity, fixed import*. "True" comes from PIE \**deru-* ("be firm, solid"), cognate with "trust" and "tree" and possibly "durable".
Things have truth to them; a thing makes a promise of agreement between minds on questions about the thing upon further independent investigation. Beyond indicatability, the truth of a thing implies that the thing can be advantageously treated and used in a fixed way. * *Hollowness, cavernousness*. The more comprehensively the history, features, changes, components, causes, internal relations, overarching structures, and explanations of a thing are kept in mind, the less there's a need to think of "the thing itself". One can stand under the cavern more thoroughly, but one never arrives at the thing itself. * *Interior, promise*. A thing usually has something going on with itself, separate from other things; it has internal elements or dynamics. And it promises that it's hiding more depth, there's something further to be understood by looking into the thing more. Indication ========== How is it that things can be indicated? I can point at a table and ask "where did that tree grow?", and after a moment of confusion and an hour of investigation, you might be able to find an answer; and most people given the same task will probably find the same answer, if they find one. The indicatability of things is strong enough that a straightened finger indicates the table, the table indicates its wooden material and manufacturer, which indicates the lumber company, which indicates a forest, which indicates a location. Clusters of features are a kind of indication. If the features {wooden, 3 feet tall, has small objects on top, nothing under the top except legs} are perceived together, that indicates the features {flat top, is called "table" by people}. "Indicate" comes from a root meaning "to point out", which involves an agent; that assumes too much, the table doesn't need someone to point. Instead I'll say "reference", in the etymological sense "re-fer", "carry back". If there are clusters in the product space of all features, then specifying the values of some features consistently with a cluster refers you, carries you back, to the rest of the cluster. Inadequacy of "clustered" ========================= Clusteredness, though, doesn't cover all the thingness of all things. Sevens ------ I think seven is a thing, but it's not a cluster; there's only one seven. Maybe not, though, maybe there are many sevens, which form a cluster? * 7∈N, 7∈Z, 7∈Q, 7∈R, 7∈C, 7∈H, 7∈O, 7∈Qp, ... * Z7, a spiral with seven arms * the term 7 in Peano Arithmetic, or Robinson Arithmetic, or ZFC, or WKL0, or some non-classical logic, or some type theory * the finite category 7 * whatever humans mean when they say "fetch seven apples" * the number after six * the class of all sets bijectable with { . . . . . . . } * 0b111 * the fourth prime * λf.λx.f(f(f(f(f(f(f(x))))))) * VII * the number of Snow White's dwarf friends * λx.[x,x,x,x,x,x,x] Yes, it's not wrong to say that these are many sevens, but (1) the way in which the sevens form a cluster involves features that are heavily dependent on interpretation (e.g. partial translations between logical systems), and (2) the sevens are connected by a lot more than just forming a cluster in featurespace: they play analogous roles in their respective contexts, so that passing between contexts preserves Seven-ness. 
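A toy check of the lambda-calculus entry in that list (my example, not the author's): interpreting the term with successor and zero recovers the familiar seven, and interpreting it with any other "next" step gives seven of something else.

```python
# The Church-numeral "seven" from the list above, applied to two different successors.
seven = lambda f: lambda x: f(f(f(f(f(f(f(x)))))))

print(seven(lambda n: n + 1)(0))        # 7
print(seven(lambda s: s + "|")(""))     # '|||||||', seven of something
```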
Unique things ------------- I think WWI is a thing; it's clearly a member of various clusters like "war" and "20th century human event", but it's unique, and what makes WWI be WWI isn't that it's a member of certain clusters (even if its cluster memberships locate it uniquely). (We can interpret even unique things as clusters: A fabrege egg is a cluster in the space of possible sense-perception-moments, and seven is a cluster in the space of possible thoughts. To me this seems to do some violence to the thing; the thing gives rise to the cluster, but there's more to the thingness than the clusteredness.) Even if seven and WWI are single, unique things, they have a structure. Seven involves: * algorithms (making seven of something, performing addition) * features ("is in bijection with [.......]") * context (a language, a mind, a logical system, a computer operating system) * mental events (picturing seven things, counting to seven, saying or hearing "seven", remembering that there are seven apples) * symbols ("7", "seven"), expressions (5+2, 21/3) * physical objects (seven of something, neurons thinking about seven, electrons arranged in capacitors in computer memory, sound waves emitted by someone pronouncing "seven", the letters s e v e n written on a page, whatever genes control *Graptopsaltria nigrofuscata*'s allegedly seven-year lifecycle ([Lloyd, Dybas](https://booksc.org/s/THE%20PERIODICAL%20CICADA%20problem%20II))) Inductive nexus of reference ============================ What relates all the stuff in that list? How do these things constitute seven as a thing? It seems like there might be a lot more here, but this essay just wants to describe the situation like this: > > *The thingness of a thing lies in its being an inductive nexus of reference.* > > > Reference --------- By reference, I mean any re-fer-ence, any bringing-back. X references Y if, when minds think about X, they soon after think about Y. This notion of reference doesn't assume (or exclude) any asymmetry between X and Y: often X references Y and Y references X, as in, the table references supportiveness and supportiveness references tables. There's no separation between stuff that can reference stuff and stuff that can't reference stuff; types and tokens, symbols and meaning, idea and reality, sign and signified, can all reference each other. (This formulation is agnostic about why thinking about Y follows thinking about X. Thinking of X may cause thinking about Y, or they may have a shared cause, or it may be useful to think about Y along with X regardless of the psychic causality.) Nexus ----- By nexus (cognate with "connection", "annex", more distantly "node", "knot"), I mean a bundle, a knot, a region in the graph of reference that's particularly clique-y, particularly highly connected. The shape of the table, the joins between the wood of the leg and the wood of the top, the motions of the carpenter's tools in making the table, all reference each other. Inductive --------- By inductive, I mean that the nexusness is inductive: the highly-connected-ness in the reference graph can correctly be taken to indicate that further investigation would reveal even more highly-connected-ness. We can notice that dogs share many features with mice and lions that aren't shared by spiders, octopi, and trees: warm, furry, four-limbed, social. 
Then we can correctly predict that on further investigation, {dogs, mice, lions} will share many further features also not shared by the other living things: red blood cells, a spine, a womb, a spleen. A non-cluster example: If I hear someone say "seven", I might send blood to the neurons that orchestrate a visualization of seven objects. If I see seven objects and wonder why I'm seeing what I'm seeing, it might be helpful to imagine some process creating seven objects. If I count seven things, and count another set of things, and then announce the total count, I make use of [seven, the thing that can be added]. When I reflect on all this activity, I find that my thoughts can be related to each other, and can be followed "inward, to seven-ness", to think about seven as a real number, as Z7, as iterating processes seven times, and so on. I find connections between all these things. Aut(Z7) acts transitively on Z7∖{0}, "because" ⟨7⟩ is a maximal ideal of Z, "because" you can't fairly give out 7 apples to multiple friends. All this about seven is tightly interconnected, predictably from first noticing some of the connections. Discussion ========== Amusingly (to me), this essay is trying to demonstrate that thingness is a thing. Note that since things have thingness, so that their nexusness is inductive, [family resemblances aren't mere](https://tsvibt.blogspot.com/2022/05/the-lion-and-worm.html#contra-wittgenstein-on-family-resemblances). A Thing is a cavern ------------------- So a thing, a nexus of inductive reference, is like a cavern encountered while wandering in a world of caves connected by tunnels. A cave is a small pocket of empty space with a few narrow "in/out-going" passages, on a background of rock. Entering the cavern, at first it seems like just a cave, but stepping in a little further, you find a densely connected maze, many paths connecting the empty spaces; not an empty space on a rock background, but more like a background of empty space contained by walls and broken up by some pillars and stalagmites and stalactites and boulders and rubble. (Like the cavern, things are strongly (i.e. bidirectionally) connected subgraphs of the graph of references, not just weakly connected. Something that refers to a thing but isn't referred to by the thing, is a mere appearance. That time a few months ago when I took a sip of water from my bottle refers to drinking in general, water in general, hands in general, my hands, and so on, but none of those refer to that particular time I took a drink.) Talk of "things" ---------------- The thingness of things is what we mean when we talk about some stuff being a "thing". If we say "that's not a thing, that's just some random stuff", we're saying there's nothing to find by investigating that stuff, there's no use in that conjunction of stuff, there's nothing useful or interesting about other stuff that's positioned like that stuff is position, there's no internal structure to that stuff that's relevant to anything other than the separate internal structures of each thing in the assembly called "stuff". If you ask me "pass me that thing" while pointing, we're relying on the indicatability of the thing, on the reference structure leading from a finger to some stuff, some particular stuff, all of it, and not too much other stuff, all of which I'm led to by the thingness of the thing fairly reliably. 
You can even ask me "pass me that thing" without pointing, relying on the thingness of the thing to make it jut out ("exist" = "stand out") at me as the thing at the bottom of the basin of attraction of attention set up by our current context. More thingy (at least in a shallow way, and relative to our current context) than the other things, which you would have named or pointed at. The thingness of [thingness itself] is part of why we talk and think about "things". The activities [dealing with one thing] and [dealing with a totally different thing] have something in common, that is, some Thing in common, namely, the thingness of what is dealt with. So there's transfer of skills between those two activities. And that transfer is usefully inductive: one can learn to [learn, from dealing with one thing, to deal better with any other thing] more effectively than default by investigating the thingness of things, as a thing itself, expecting it to have more referential structure than is currently visible / useable to you, i.e. expecting to be led into more insights about what's in common between things. (IDK what, if anything, "stuff" is.) Noumena ------- "Thing = inductive nexus of references" tries to characterize approximations to (or rings around, or emanations from, or pathways toward) Kantian noumena, things in themselves. Wikipedia's Kant says: Noumena can't be directly perceived and can't be known, they are completely eternally external to and separated from minds. We can understand the structure of phenomena, which are the appearances of noumena, but we can't access noumena. Noumena must exist because there has to be something that appears to us, an object of investigation, something that we think about. A noumenon as a nexus of reference is an abstraction over the inductivity of its nexusness: to say "there's a noumenon behind these related phenomena" is to say "so far we've seen some phenomena (appearances) which point to a nexus of reference, but there will always be further (deeper, more, bigger, tighter) nexusness of reference to be found, no matter how many additional related phenomena might appear later on". Noumena say, "what you have is permanently incomplete". It's maybe like infinity: to say "there are infinitely many natural numbers" is to say "there will always be more natural numbers that we haven't already seen, even if we see more natural numbers later on". Infinity abstracts over the inductivity of the succession of natural numbers. The fact that things-in-themselves / noumena live in a sort of "remote exterior" from our experience or mental grasp, comes from the use of the concept of "thing-in-itself". The concept of "thing-in-itself" is specifically about that which we haven't already grasped, maybe kind of like how infinity is greater than all natural numbers because "infinity" is used specifically to discuss what's beyond any natural numbers already considered. Non-thingness ------------- What would a non-thing be? It would be someth--... some stuff, some event or some happenings, that doesn't have anything more to say, and doesn't separate itself from its surroundings by being itself, is uninteresting, doesn't lead anywhere. A region on a large wall that's meaninglessly shaped, doesn't encode anything by its position or shape, doesn't surround anything at all special in the wall different from any other part of the wall, would be sort of a non-thing, though it would be hard to produce without making it a thing. 
Maybe an example would be a particular instance of perceiving red. Redness in general, and perceiving redness in general, and whatever caused the particular perception of red, would all be things. But the perception of redness itself doesn't seem to offer much; it's at most an appearance of its cause, pointing to the traffic light or the sunset or the anger or the cardinal or the raspberry, and an appearance of perception of red in general, but it's got nothing to it itself, and isn't referred to by those things it refers to. Maybe combining unrelated sense perceptions also gives non-things; {redness, a slightly flat C# tone, the sense of roughness on your chin} could co-occur but never co-occur again, co-occur for no referential reason, and cause no thoughts beyond what's caused by the individual sensations. Blue is a thing, green is a thing, but grue and bleen not so much; if you investigate bleen, you will be led to blue and green and color in general, and not vice versa, unless you're a philosopher. Subjectivity ------------ I don't like that this notion of thingness relies on reference, which relies on minds, making it seem subjective. At least it can be made maximally intersubjective by saying "minds in general are led from X to Y", which is kind of like being objective.
f51fbed7-9bae-4511-9e34-1f7b39dc4e7a
trentmkelly/LessWrong-43k
LessWrong
Improving the college experience for students on the autism spectrum Smith College is considering trying to make itself more attractive to smart students on the autism spectrum.  I would be grateful for suggestions on how to do this from autistics who read LessWrong.  You can write comments here or if you wish to remain anonymous send them to me at Jdmiller@Smith.edu
86c75381-6d20-4cb1-ad13-c9ac02863949
trentmkelly/LessWrong-43k
LessWrong
What I am working on right now and why: representation engineering edition TL;DR: Representation engineering will probably revolutionize how we interact with the most important technology of our generation. Based on my reading of 1000+ representation-engineering-adjacent papers, here are some thoughts on what I am working on right now.  I have recently posted my thoughts here after writing a really long representation engineering survey. It outlines the potential of representation engineering as a technology, but also a long list of issues that prevent it from being widely adopted.  I believe we are not going to make meaningful progress on AI capabilities improvement and AI safety in general before we are able to understand the latent space. Thinking about the constraints of representation engineering we have identified in the survey, I outline two research ideas that aim to unlock the potential of representation engineering and make its solutions easier to adopt. I am currently working on implementing these ideas in practice. Please do reach out if you want to join our team and contribute to these ideas!  1. Universal Representations Firstly, thinking about the representation space, it seems like there is a bit of a disconnect between what most representation engineering methods claim to identify and what they actually identify. Just because we take a sample from a part of TruthfulQA does not mean we are able to identify the region corresponding to all hallucinations. In other words, the "representation" many methods identify is not consistent with the actual representation of a particular concept, but is rather a reflection of the sample-dependent representation.  An easy experiment to verify this is as follows. We use the existing representation engineering methods to compute a hallucination-reducing intervention (usually involving some vector) using a part of the vanilla TruthfulQA and test it on the leftover part of TruthfulQA. It probably improves the performance, right?  Now let's take the intervention we just created and test it o
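To make the kind of intervention being discussed concrete, here is a rough sketch (mine, not the author's code) of one common recipe: compute a steering vector as the mean difference of hidden states between contrastive prompts drawn from the dataset, then add it back into the residual stream at one layer during generation. The model name, layer index, and toy prompts below are placeholders; a real TruthfulQA-based intervention would use many more samples.

```python
# Difference-of-means steering vector, added back at one transformer block via a hook.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name, layer = "gpt2", 6                               # placeholders
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, output_hidden_states=True)

def mean_hidden(texts):
    with torch.no_grad():
        states = [model(**tok(t, return_tensors="pt")).hidden_states[layer + 1][0, -1]
                  for t in texts]                     # last-token output of block `layer`
    return torch.stack(states).mean(0)

steer = mean_hidden(["The capital of France is Paris."]) - \
        mean_hidden(["The capital of France is Lyon."])

def add_steer(module, inputs, output):
    return (output[0] + steer,) + output[1:]          # shift the residual stream

handle = model.transformer.h[layer].register_forward_hook(add_steer)
out = model.generate(**tok("The capital of France is", return_tensors="pt"), max_new_tokens=5)
print(tok.decode(out[0]))
handle.remove()
```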
cc28df92-0138-4327-a5a4-21e77618475b
trentmkelly/LessWrong-43k
LessWrong
I'm Voting For Ranked Choice, But I Don't Like It This fall, Ranked Choice / Instant Runoff Voting (IRV) will be on the ballot in Massachusetts. I'm voting for it, but only because it's better than the status quo, not because I think it's a very good voting system. Massachusetts currently uses traditional majority ("first past the post") voting: whoever gets the most votes wins. Unfortunately, this only works well when you have two candidates. With more candidates, the candidates tend to hurt their allies by competing for the same pool of votes, making it more likely that an opponent wins. In IRV each voter lists their preferred candidates in order, and if your first choice is eliminated then your vote goes to your next favorite. This mostly fixes the problem of minor spoiler candidates: anyone who is not a serious contender will get eliminated and their votes redistributed. Unfortunately, IRV has major problems when you have more than two serious candidates. For example, even if there is a candidate that a majority of voters prefer to every other, they can still lose if their competitors happen to be eliminated in the wrong order. In Why Ranked Choice Voting Isn't Great I give examples of realistic situations in which IRV can give poor results. While every voting method has cases it handles poorly, some are better than others. One attempt to compare them is called Voter Satisfaction Efficiency (more details). The idea is, you run a large number of simulations and see how different methods perform. It turns out that IRV does very poorly here, and if voters are highly strategic IRV does even worse than traditional plurality voting. While I wish the voting method for us to consider were Approval (or maybe 3-2-1 or STAR), I do still think IRV is better than what we have today, and I'm planning on voting for it. One specific way in which IRV is an improvement is that it mostly doesn't, in its failings, benefit one type of party. This means that if we switch to IRV, and then as third-party candidates become stronge
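For concreteness, here is a toy instant-runoff count (my illustration, not from the post) on a profile where a centrist candidate B is preferred by a majority to both rivals yet is eliminated in the first round, which is exactly the failure described above.

```python
# Each ballot is a preference-ordered list; the weakest candidate is eliminated each
# round and ballots transfer to their next surviving choice.
from collections import Counter

def irv_winner(ballots):
    remaining = {c for b in ballots for c in b}
    while True:
        tally = Counter(next(c for c in b if c in remaining)
                        for b in ballots if any(c in remaining for c in b))
        top, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return top
        remaining.discard(min(tally, key=tally.get))

# Centrist B is a majority's second choice but has the fewest first-choice votes.
ballots = ([["A", "B", "C"]] * 35 + [["C", "B", "A"]] * 33 +
           [["B", "A", "C"]] * 16 + [["B", "C", "A"]] * 16)
print(irv_winner(ballots))
```

In this profile B beats A 65 to 35 and beats C 67 to 33 head to head, but B never reaches the final round, so A wins.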
0b628b97-50f7-4c28-b77d-8625662a50ed
StampyAI/alignment-research-dataset/arbital
Arbital
Solovay's theorems of arithmetical adequacy for GL One of the things that makes [https://arbital.com/p/-5l3](https://arbital.com/p/-5l3) such an interesting formal system is the direct relation between its theorems and a restricted albeit rich class of theorems regarding [provability predicates](https://arbital.com/p/5j7) in [https://arbital.com/p/3ft](https://arbital.com/p/3ft). As usual, the adequacy result comes in the form of a pair of theorems, proving respectively [https://arbital.com/p/-soundness](https://arbital.com/p/-soundness) and [https://arbital.com/p/-completeness](https://arbital.com/p/-completeness) for this class. Before stating the results, we describe the way to [translate](https://arbital.com/p/translation) modal sentences to sentences of arithmetic, thus describing the class of sentences of arithmetic the result alludes to. ##Realizations A realization $*$ is a function from the set of well-formed sentences of modal logic to the set of sentences of arithmetic. Intuitively, we are trying to preserve the structure of the sentence while mapping the expressions proper of modal logic to related predicates in the language of $PA$. Concretely, * $p^* = S_p$: sentence letters are mapped to arbitrary closed sentences of arithmetic. * $(\square A)^*=P(A^*)$: the box operator is mapped to a [https://arbital.com/p/-5j7](https://arbital.com/p/-5j7) $P$, usually the [https://arbital.com/p/-5gt](https://arbital.com/p/-5gt). * $(A\to B)^* = A^* \to B^*$: truth functional compounds are mapped as expected. * $\bot ^* = \neg X$, where $X$ is any theorem of $PA$, for example, $0\ne 1$. The class of sentences of $PA$ such that there exists a modal sentence of which they are a realization is the set for which we will prove the soundness and completeness. ##Arithmetical soundness > If $GL\vdash A$, then $PA\vdash A^*$ for every realization $*$. The applications of this result are endless. For example, this theorem allows us to take advantage of the procedures to calculate [fixed points](https://arbital.com/p/5lx) in $GL$ to get results about $PA$. To better get an intuition of how this correspondence works, try figuring out how the properties of the [https://arbital.com/p/-5j7](https://arbital.com/p/-5j7) relate to the axioms and rules of inference of $GL$. [Proof](https://arbital.com/p/) ##Arithmetical completeness > If $GL\not\vdash A$, then there exists a realization $*$ such that $PA\not\vdash A^*$. The proof of arithmetical completeness is a beautiful and intricate construction that exploits the semantical relationship between $GL$ and the finite, transitive and irreflexive [Kripke models](https://arbital.com/p/5ll). Check [its page](https://arbital.com/p/) for the details. ##Uniform arithmetical completeness > There exists a realization $*$ such that for every modal sentence $A$ we have that $GL\not\vdash A$ only if $PA\not\vdash A^*$. This result generalizes the arithmetical completeness theorem to a new level. [Proof](https://arbital.com/p/)
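A worked example (mine, not part of the original page): take $A$ to be Löb's axiom $\square(\square p \to p) \to \square p$, an axiom of $GL$, and let $*$ send $p$ to some arithmetic sentence $S$ and $\square$ to the provability predicate $P$ as above. Then $A^* = P(P(S) \to S) \to P(S)$, which is exactly Löb's theorem for $PA$; arithmetical soundness says $PA$ proves every instance of this schema, matching the fact that $GL$ proves (indeed assumes) the modal sentence.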
6e250dd0-3977-451f-9e64-0a68e644811e
trentmkelly/LessWrong-43k
LessWrong
What we're losing More and more, LessWrong's posts are meta-rationality posts, about how to be rational, how to avoid akrasia, in general, without any specific application.  This is probably the intended purpose of the site.  But they're starting to bore me. What drew me to LessWrong is that it's a place where I can put rationality into practice, discussing specific questions of philosophy, value, and possible futures, with the goal of finding a good path through the Singularity.  Many of these topics have no other place where rational discussion of them is possible, online or off.  Such applied topics have almost all moved to Discussion now, and may be declining in frequency. This isn't entirely new.  Applied discussions have always suffered bad karma on LW (statistically; please do not respond with anecdotal data).  I thought this was because people downvote a post if they find anything in it that they disagree with.  But perhaps a lot of people would rather talk about rationality than use it. Does anyone else have this perception?  Or am I just becoming a LW old geezer? At the same time, LW is taking off in terms of meetups and number of posts.  Is it finding its true self?  Does the discussion of rationality techniques have a larger market than debates over Sleeping Beauty (I'm even beginning to miss those!)  Is the old concern with values, artificial intelligence, and the Singularity something for LW to grow out of? (ADDED: Some rationality posts are good.  I am also a lukeprog fan.)
97956fe9-1bd4-4197-a177-b44ec2a69350
trentmkelly/LessWrong-43k
LessWrong
Random fic idea Got this snippet of an idea while reading a Terminator/Buffy the Vampire Slayer crossover. Not going to do anything with it (for hopefully obvious reasons), but I figured I'd share because I found it amusing.   >  The artificial intelligence researcher looked up, startled, as the door slammed open, revealing a heavily-muscled man in the doorway. > > "Eliezer Yudkowsky, come with me if you want to live."
79267d0e-a4eb-4245-bd04-0dac98acd18f
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
$100/$50 rewards for good references *With thanks to Rohin Shah.* Dear LessWrongers, this is an opportunity to make money and help with AI alignment. We're looking for specific AI capabilities; has anyone published on the following subject: * **Generating multiple reward functions or policies from the same set of challenges**. Have there been designs for deep learning or similar, in which the agent produces multiple independent reward functions (or policies) to explain the same reward function or behaviour? For example, in [CoinRun](https://openai.com/blog/quantifying-generalization-in-reinforcement-learning/), the agent must get to the end of the level, on the right, to collect the coin. It only gets the reward for collecting the coin. ![](https://www.dropbox.com/s/e5ggon1o9yebb2m/coinrun.png?raw=1) That is the "true" reward, but, since the coin is all the way to the right, as far as the agent knows, "go to the far right of the level" [could just as well have been the true reward](https://arxiv.org/pdf/2105.14111.pdf). We'd want some design that generated both these reward functions (and, in general, generated multiple reward functions when there are several independent candidates). Alternatively, the design might generate two independent policies - we could test these by putting the coin in the middle of the level and seeing what the agent decides to do. We're not interested in a Bayesian approach that lists a bunch of reward functions and then updates to include just those two (that's trivially easy to do). Nor are we interested in an IRL-style approach that lists "features", including the coin and the right hand side. What we'd want is some neural-net style design that generates the coin reward and the move-right reward just from the game data, without any prior knowledge of the setting. So, does anyone know any references for that kind of work? [We will pay](https://buildaligned.ai/) $50 for the first relevant reference submitted, and $100 for the best reference. Thanks!
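To make the desideratum concrete, here is a minimal toy sketch in PyTorch (an editorial illustration only; it is not one of the published designs being requested, and it is not claimed to solve the problem). The idea: train a small ensemble of reward heads that must all fit the same observed rewards, while a diversity penalty pushes their predictions apart on unlabelled probe states where the candidate rewards could disagree (e.g. levels with the coin moved away from the right wall). All tensors below are random stand-ins for real CoinRun features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 2            # number of candidate reward functions to generate
STATE_DIM = 8    # toy feature vector standing in for a CoinRun observation

# K independent reward heads over the same state features.
heads = nn.ModuleList(
    [nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
     for _ in range(K)]
)
opt = torch.optim.Adam(heads.parameters(), lr=1e-3)

# Random stand-ins: training states with observed rewards (where all candidate
# reward functions agree), and unlabelled probe states (where they may differ).
train_states = torch.randn(256, STATE_DIM)
train_rewards = torch.randn(256, 1)
probe_states = torch.randn(256, STATE_DIM)

for step in range(1000):
    opt.zero_grad()
    # Every head has to explain the same observed rewards...
    fit_loss = sum(F.mse_loss(h(train_states), train_rewards) for h in heads)
    # ...while being pushed to disagree on the probe states, so the ensemble
    # ends up representing several distinct reward hypotheses.
    probe_preds = torch.cat([h(probe_states) for h in heads], dim=1)  # (N, K)
    diversity_loss = -probe_preds.std(dim=1).mean()
    (fit_loss + 0.1 * diversity_loss).backward()
    opt.step()
```

A reference would count as relevant here only if something along these lines reliably recovers genuinely distinct candidate rewards, such as "coin collected" versus "reached the right wall", without being told about those features in advance.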
c7f5f568-10e0-4522-98b6-d6ee82fc3764
trentmkelly/LessWrong-43k
LessWrong
Global warming is a better test of irrationality than theism Theism is often a default test of irrationality on Less Wrong, but I propose that global warming denial would make a much better candidate. Theism is a symptom of excess compartmentalisation, of not realising that absence of evidence is evidence of absence, of belief in belief, of privileging the hypothesis, and similar failings. But these are not intrinsically huge problems. Indeed, someone with a mild case of theism can have the same anticipations as someone without, and update on evidence in the same way. If they have moved their belief beyond refutation, in theory it thus fails to constrain their anticipations at all; and often this is the case in practice. Contrast that with someone who denies the existence of anthropogenic global warming (AGW). This has all the signs of hypothesis privileging, but also reeks of fake justification, motivated skepticism, massive overconfidence (if they are truly ignorant of the facts of the debate), and simply the raising of politics above rationality. If I knew someone was a global warming skeptic, then I would expect them to be wrong in their beliefs and their anticipations, and to refuse to update when evidence worked against them. I would expect their judgement to be much more impaired than a theist's. Of course, reverse stupidity isn't intelligence: simply accepting AGW doesn't make one more rational. I work in England, in a university environment, so my acceptance of AGW is the default position and not a sign of rationality. But if someone is in a milieu that discourages belief in AGW (one stereotype being heavily Republican areas of the US) and has risen above this, then kudos to them: their acceptance of AGW is indeed a sign of rationality.
54c92e61-ac40-4239-8816-edc640c5fedf
trentmkelly/LessWrong-43k
LessWrong
Could fabs own AI? Disclaimer: Very rough thoughts. I am nothing approaching an expert on this topic. Comments, corrections, and references appreciated. Suppose there's an amazing general-purpose neural net architecture that learns as it runs and can be scaled indefinitely with returns that don't diminish too much. Suppose also that its design can't be improved by more than a few factors by anyone no matter how clever. (Humans match this description except that we are flesh and blood etc.) Call this assumption the general online optimal scalable algorithm (GOOSA) assumption or scenario if you like. Who will have majority control of deployments of this system a year after it's invented? Ten years after? ("Control" here means directly dictating the AI's actions. A service provider with a contract of uptime and privacy does not control the service unless they're able to break that contract.) Some possible answers: * The general public because they have the most of most stuff (ie this is the default assumption) * An AI lab like openAI because they started in control * A technology company like Google or Microsoft because they are a bottleneck to eyeballs and wallets of individual & business customers * A government because they control the military etc * A hardware IP company like Nvidia because they get an 80% discount on their own hardware and because they can block sales (or, increasingly, Nvidia data center access) to others * A hardware manufacturing company like TSMC because they get a 50-75% discount on their own products If the algorithm is successfully kept secret, then the AI lab could plausibly keep control from anyone except governments. Suppose the basic idea of the algorithm leaks though. Who will end up owning its deployments? Some angles to this question: * Who is aware enough to notice the opportunity and agile enough to act? * Who knows what to do with a bunch of AI? * Who can sink in the most resources once everyone knows the game is on? * Who can mos
9cb5ead0-ee46-4463-a19a-3fbf2fcac97b
trentmkelly/LessWrong-43k
LessWrong
Towards a Quieter Life My life is LOUD. Thoughts of fiction, conversations, unrealistic hypotheticals, games, tv shows flood my mind when there's no need to be babbling those thoughts. I have coroutines running in my mind to CHECK YOUR PHONE, YOUR EMAIL, YOUR RSS FEED, ... Being aware of myself in those moments, I've realized I'm not actually happy doing any of them. Cravings for escapism is not what I want. I want something real, something true... I want reality. So here's to a quieter life. ---------------------------------------- I'll check my phone/email after lunch and dinner. If I'm bored, I'll meditate. If I'm tired of meditating, I'll investigate that feeling. If that fails, I can Sabbath for a bit. When I'm doing something, I'm only doing that. If an unrelated thought comes, I'll accept it and let it go. When I'm done with one thing, I'll go to the next with joy. If I don't have a next thing, I'll make a plan with joy. If that fails, I can Sabbath for a bit. I'll be strict for the next few weeks, post the results, and go from there. ~Metta
3aaff0e2-0912-4d61-8f56-23d0b55b7fb7
trentmkelly/LessWrong-43k
LessWrong
Eliezer apparently wrong about higgs boson So someone told me that Eliezer Yudkowsky predicted no 5 sigma evidence of the higgs boson, and that 6 sigma evidence had been found. A quick search found the post referred to, and a slightly longer but not particularly thorough search did not find anything discussing it. So: http://lesswrong.com/lw/1dt/open_thread_november_2009/17xb (02 November 2009) > I'll go ahead and predict here that the Higgs boson will not be showing up. As best I can put the reason into words: I don't think the modern field of physics has its act sufficiently together to predict that a hitherto undetected quantum field is responsible for mass. They are welcome to prove me wrong. > > (I'll also predict that the LHC will never actually run, but that prediction is (almost entirely) a joke, whereas the first prediction is not.) > > Anyone challenging me to bet on the above is welcome to offer odds. In the post below rolfandreassen sets the condition of 5 sigma evidence before 2014 and offers a bet of $25. In the post below that Eliezer accepts. Discuss.
756ef9e6-a16e-4273-bbcf-13b03048d34d
trentmkelly/LessWrong-43k
LessWrong
Which fields of learning have clarified your thinking? How and why? Did computer programming make you a clearer, more precise thinker? How about mathematics? If so, what kind? Set theory? Probability theory? Microeconomics? Poker? English? Civil Engineering? Underwater Basket Weaving? (For adding... depth.) Anything I missed? Context: I have a palette of courses to dab onto my university schedule, and I don't know which ones to chose. This much is for certain: I want to come out of university as a problem solving beast. If there are fields of inquiry whose methods easily transfer to other fields, it is those fields that I want to learn in, at least initially. Rip apart, Less Wrong!
535cf85b-0448-4ef6-857b-2596b057bd4d
trentmkelly/LessWrong-43k
LessWrong
Reflection Mechanisms as an Alignment Target - Attitudes on “near-term” AI TL;DR * We survey 1000 participants on their views about what values should be put into powerful AIs that we think are plausible in the near-term (e.g. within 5-10 years) * We find that respondents report to not favor the means of choosing values we would expect in our current society, such as allowing companies to unilaterally choose the instructions for an AI, or allowing policy makers to inform decisions with AIs that only reflect their individual values. * Strategies in the line of Indirect normativity (such as “think about many possible outcomes, and take the action that the AI believes will have the best societal outcome”) poll the best across scenarios. We think this suggests that respondents may be open to the idea of having AIs aligned to “reflection procedures”, or processes for coming up with better values, which we view as a promising direction for multiple reasons. Introduction This is the third part in a series of posts (first, second, workshop paper) discussing people’s reported preferences on various methods of social choice/metaethics for resolving object-level moral disagreements. In the previous posts we surveyed general ways of resolving object-level moral conflicts both within and across people, and did an adversarial study to test the impact of wording. We found that broadly people report to disagree on object level moral issues, but also report to be willing to defer decisions to processes such as democracy for resolving disagreements. In the previous posts, only a few questions specifically focused on values implemented by AI, with the others focusing on reflection mechanisms implemented by humans. In this post, we specifically ask questions about people’s willingness to trust AI systems deployed in the real world. This is the most concrete of the three surveys as it focuses on scenarios that we think could be a reality within this decade, without requiring radically new Machine Learning progress. Although probing about AI systems beyo
891cf86c-2d0a-4b89-b505-f1dc73d528ab
trentmkelly/LessWrong-43k
LessWrong
Smartphone Etiquette: Suggestions for Social Interactions If anyone stopped and asked themselves if they enjoy the current smartphone social etiquette—whereby you could be opening up your soul to a friend in a moment of vulnerability, when suddenly, they feel a buzz and reflexively check their phone to discover an important & urgent message from the McDonald’s app that The McRib is back!—they would probably shake their heads knowing how close to home a situation like this hits. Every time one of these moments of disconnection happen, a small part of me—the part craving real connection—dies. But it doesn’t have to be this way. I’ve managed to improve my social interactions due to: 1. Internalizing the emotional impact of checking my phone, 2. Recognizing how it affects my capacity to think clearly,  3. Understanding group interactions are better without distractions, and 4. Creating suggestions to remove phones from social situations (as doing so is in everyone’s best interest).   (1) The emotional impact of disconnection I don’t blame individuals for lacking the restraint to not check their phones in social contexts. Why? Because there’s thousands of engineers whose full-time job it is to make our technology as addicting “engaging” as possible. As researcher Dr. Anna Lembke notes in her book Dopamine Nation: > We’re living in a time of unprecedented access to high-dopamine stimuli. The smartphone is the modern-day hypodermic needle, delivering digital dopamine 24/7. As such, we’ve all become vulnerable to compulsive overconsumption. Author Nir Eyal wrote the book on how to addict “increase engagement” in users called Hooked: How to Build Habit-Forming Products. So it’s with a combination of sadistic glee and pity when I read about one of Nir’s moments of disconnection with his young daughter[1]: > [Nir was reading a book with his daughter. As she was contemplating a question he asked her, he received a text] and “I started looking at my phone, as opposed to being fully present with her.” When he looked up, s
a60bbf87-e28b-4d20-bc44-58a3c31921d2
trentmkelly/LessWrong-43k
LessWrong
Wes Weimer from Udacity presents his list of things you should learn I've just gotten to the end of Udacity's CS262 course in programming languages. It's been pretty good. Wes Weimer, the lecturer, seems to be a really cool guy. There's a quote from HPMOR in the final exam, which I thought was pretty cool. In the last part of the last lecture, Weimer gives advice on what we should learn next. You can watch it here. He advises that you learn the following (paraphrased): > Philosophy until you've covered epistemology, formal logic, free will, the philosophy of science, and what it's like to be a bat. > > Cognitive psychology until you've covered perception, consciousness, and the Flynn effect. > > Speech or rhetoric until you've covered persuasion. > > Anthropology and gender studies, to get an idea of what behaviors are socially constructed and which are essential > > Statistics, until you can avoid being fooled by either others or yourself > > Religion or ethics until you've covered the relationship between unhappiness and unrealized desires > > Physics and engineering until you can explain how a microphone, speaker, and radio all work > > Government until you have an opinion about legislating morality and the relative importance of freedom and equality. > > History until you are not condemned to make the mistakes of the past. > > Life until you are happy. They say ignorance is bliss, but they are wrong all but finitely often. I thought that was all really useful (except maybe the last two). I've learned up to his required level of philosophy, cognitive psychology, and religion and ethics. I'm working on the physics and gender studies. (Incidentally, I strongly recommend Udacity for learning programming. It's really good.)
d5233374-c247-4986-b52e-80f81e011e91
trentmkelly/LessWrong-43k
LessWrong
Prefer the British Style of Quotation Mark Punctuation over the American There are different ways of combining quotation and punctuation marks. In the American style, you almost always put periods and commas inside the quotation marks: > Dr Johnson kicked a large rock and said, as his foot rebounded, "I refute it thus."[1] > > It is only an accident of evolution, as it were, that the senses we are born with are not adapted to feel such things "directly."[2] In the British style, however, you put periods and commas outside the quotation marks, unless they are part of a complete sentence that is fully contained between the quotation marks: > Dr Johnson kicked a large rock and said, as his foot rebounded, "I refute it thus."[3] > > It is only an accident of evolution, as it were, that the senses we are born with are not adapted to feel such things "directly".[4] When faced with this contrast, the proper reaction is to recoil in horror at the first approach, and to look approvingly on the second. In the sentence beginning with It is only ..., the quotation is a part of the sentence, and the sentence contains the quoted word. It is only an accident of evolution ... "directly". [--------------------sentence---------------------] [--quote-] Since a period marks the end of a sentence, it should not be placed before marking the end of the quotation. You can compare this with nested or hierarchical structures, or with stacks, or even with last in, first out methods of computing, inventory accounting or redundancy management. Under any comparison, the British style will seem preferable to the American. You resolve the nested item first, before resolving the parent. I do not know but suspect that this is why the British style is also called logical quotation. In the sentence beginning with Dr Johnson ..., we do place the period within the quotation marks, because what is being quoted is a full sentence. We are placing the period to mark the end of the inner, quoted sentence.[5] Dr Johnson kicked a l
786ed568-5f12-485d-93fb-71eec77398bf
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Four part playbook for dealing with AI (Holden Karnofsky on the 80,000 Hours Podcast) We just published an interview: [**Holden Karnofsky’s four part playbook for dealing with AI**](https://80000hours.org/podcast/episodes/holden-karnofsky-how-ai-could-take-over-the-world/)**.** You can click through for the audio, a full transcript, and related links. Below are the episode summary and some key excerpts. | | | --- | | *I think a lot of the case for planning things out in advance — trying to tell stories of what might happen, trying to figure out what kind of regime we’re going to want and put the pieces in place today, trying to figure out what kind of research challenges are going to be hard and do them today — I think a lot of the case for that stuff being so important does rely on this theory that things could move a lot faster than anyone is expecting.*-Holden Karnofsky | **Episode summary** ------------------- Back in 2007, Holden Karnofsky cofounded [GiveWell](https://www.givewell.org/), where he sought out the charities that most cost-effectively helped save lives. He then co-founded [Open Philanthropy](https://www.openphilanthropy.org/), where he oversaw a team making billions of dollars’ worth of grants across a range of areas: pandemic control, criminal justice reform, farmed animal welfare, and making AI safe, among others. This year, having learned about AI for years and observed recent events, he’s narrowing his focus once again, this time on making the transition to advanced AI go well. In today’s conversation, Holden returns to the show to share his overall understanding of the promise and the risks posed by machine intelligence, and what to do about it. That understanding has accumulated over around 14 years, during which he went from being [sceptical](https://www.lesswrong.com/posts/6SGqkCgHuNr7d4yJm/thoughts-on-the-singularity-institute-si) that AI was important or risky, to making AI risks the focus of his work. (As Holden reminds us, his wife is also the president of one of the world’s top AI labs, [Anthropic](https://www.anthropic.com/), giving him both conflicts of interest and a front-row seat to recent events. For our part, Open Philanthropy is 80,000 Hours’ largest financial supporter.) One point he makes is that people are too narrowly focused on AI becoming ‘superintelligent.’ While that could happen and would be important, it’s not necessary for AI to be transformative or perilous. Rather, machines with human levels of intelligence could end up being enormously influential simply if the amount of computer hardware globally were able to operate tens or hundreds of billions of them, in a sense making machine intelligences a majority of the global population, or at least a majority of global thought. As Holden explains, he sees four key parts to the playbook humanity should use to guide the transition to very advanced AI in a positive direction: alignment research, standards and monitoring, creating a successful and careful AI lab, and finally, information security. In today’s episode, host Rob Wiblin interviews return guest Holden Karnofsky about that playbook, as well as: * Why we can’t rely on just gradually solving those problems as they come up, the way we usually do with new technologies. * What multiple different groups can do to improve our chances of a good outcome — including listeners to this show, governments, computer security experts, and journalists. 
* Holden’s case against ‘hardcore utilitarianism’ and what actually motivates him to work hard for a better world. * What the ML and AI safety communities get wrong in Holden’s view. * Ways we might succeed with AI just by dumb luck. * The value of laying out imaginable success stories. * Why information security is so important and underrated. * Whether it’s good to work at an AI lab that you think is particularly careful. * The track record of futurists’ predictions. * And much more. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below. *Producer: Keiran Harris* *Audio Engineering Lead: Ben Cordell* *Technical editing: Simon Monsour and Milo McGuire* *Transcriptions: Katy Moore* **Highlights** -------------- ### **Explosively fast progress** > **Holden Karnofsky:** One of the reasons I’m so interested in AI safety standards is because kind of no matter what risk you’re worried about, I think you hopefully should be able to get on board with the idea that you should measure the risk, and not unwittingly deploy AI systems that are carrying a tonne of the risk, before you’ve at least made a deliberate informed decision to do so. And I think if we do that, we can anticipate a lot of different risks and stop them from coming at us too fast. “Too fast” is the central theme for me. > > You know, a common story in some corners of this discourse is this idea of an AI that’s this kind of simple computer program, and it rewrites its own source code, and that’s where all the action is. I don’t think that’s exactly the picture I have in mind, although there’s some similarities. > > The kind of thing I’m picturing is maybe more like a months or years time period from getting sort of near-human-level AI systems — and what that means is definitely debatable and gets messy — but near-human-level AI systems to just very powerful ones that are advancing science and technology really fast. And then in science and technology — at least on certain fronts that are the less bottlenecked fronts– you get a huge jump. So I think my view is at least somewhat more moderate than Eliezer’s, and at least has somewhat different dynamics. > > But I think both points of view are talking about this rapid change. I think without the rapid change, a) things are a lot less scary generally, and b) I think it is harder to justify a lot of the stuff that AI-concerned people do to try and get out ahead of the problem and think about things in advance. Because I think a lot of people sort of complain with this discourse that it’s really hard to know the future, and all this stuff we’re talking about about what future AI systems are going to do and what we have to do about it today, it’s very hard to get that right. It’s very hard to anticipate what things will be like in an unfamiliar future. > > When people complain about that stuff, I’m just very sympathetic. I think that’s right. And if I thought that we had the option to adapt to everything as it happens, I think I would in many ways be tempted to just work on other problems, and in fact adapt to things as they happen and we see what’s happening and see what’s most needed. 
And so I think a lot of the case for planning things out in advance — trying to tell stories of what might happen, trying to figure out what kind of regime we’re going to want and put the pieces in place today, trying to figure out what kind of research challenges are going to be hard and do them today — I think a lot of the case for that stuff being so important does rely on this theory that things could move a lot faster than anyone is expecting. > > I am in fact very sympathetic to people who would rather just adapt to things as they go. I think that’s usually the right way to do things. And I think many attempts to anticipate future problems are things I’m just not that interested in, because of this issue. But I think AI is a place where we have to take the explosive progress thing seriously enough that we should be doing our best to prepare for it. > > **Rob Wiblin:** Yeah. I guess if you have this explosive growth, then the very strange things that we might be trying to prepare for might be happening in 2027, or incredibly soon. > > **Holden Karnofsky:** Something like that, yeah. It’s imaginable, right? And it’s all extremely uncertain because we don’t know. In my head, a lot of it is like there’s a set of properties that an AI system could have: roughly being able to do roughly everything humans are able to do to advance science and technology, or at least able to advance AI research. We don’t know when we’ll have that. One possibility is we’re like 30 years away from that. But once we get near that, things will move incredibly fast. And that’s a world we could be in. We could also be in a world where we’re only a few years from that, and then everything’s going to get much crazier than anyone thinks, much faster than anyone thinks. > > ### **AI population explosion** > **Rob Wiblin:** I think some people are sceptical of this superintelligence story, because they think you get really declining returns to being smarter, and that there’s some ways in which it just doesn’t matter how smart you are, the world is too unpredictable for you to come up with a great plan. But this is a different mechanism by which you can get the same outcome, which is just that you have this enormous increase in the number of thoughts that are occurring on computer chips, more or less. And at some point, 99% of the thoughts that are happening on Earth could basically be occurring inside artificial intelligences. And then as they get better, and they’re able to make more chips more quickly, again, basically just gets a population explosion. > > **Holden Karnofsky:** Yeah. That’s exactly right. And I think this is a place where some people get a little bit rabbitholed on the AI debates, because I think there’s a lot of room to debate how big a deal it is to have something that’s “extremely smart” or “superintelligent” or “much smarter than human.” It’s like, maybe if you had something that was like a giant brain or something, and way way smarter (whatever that means) than us, maybe what that would mean is that it would instantly see how to make all these super weapons and conquer the world and how to convince us of anything. There’s all this stuff that that could mean, and people debate whether it could mean that, but it’s uncertain. > > And I think a thing that’s a lot less uncertain — if you’re finding yourself sceptical of what this “smart” idea means, and where it’s going to go, and what you can do with it — if you find yourself sceptical of that, then just forget about it. 
I believe you can make the entire case for being extremely concerned about AI, assuming that AI will never be smarter than a human. Instead, it will be as capable as the most capable humans, and there will be a tonne of them — because unlike humans, you can just copy them. You can use your copies to come up with ways to make it more efficient, just like humans do, then you can make more copies. > > And when we talk about whether AI could defeat humanity — and I’ve written one blog post on [whether AI could kind of take over the world](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/) — they don’t have to be more capable than humans. They could be equally capable, and there could be more of them. That could really do it. That could really be enough that then humans wouldn’t be in control of the world anymore. > > So I’m basically generally happy to just have all discussions about AI and what the risks are, just in this world where there’s nothing more capable than a human — but it’s pretty scary to have a lot of those that have different values from humans, and that are kind of a second advanced species. That’s not to rule out that some of these superintelligence concerns could be real. It’s just that they’re not always necessary, and they can sideline people. > > **Rob Wiblin:** Yeah. You can just get beaten by force of numbers more or less. I think it’s a little bit of a shame that this sheer numbers argument hasn’t really been made very much. It feels like the superintelligence story has been very dominant in the narrative and media, and many people get off the boat because they’re sceptical of this intelligence thing. I think it kind of is the fault of me and maybe people who’ve been trying to raise the alarm about this, because the focus really has been on the superintelligence aspect rather than the super-numerousness that you could get. > > **Holden Karnofsky:** And I don’t know. I mean, I think there’s valid concerns from that angle for sure, and I’m not trying to dismiss it. But I think there’s a lot of uncertainty about what superintelligence means and where it could go. And I think you can raise a lot of these concerns without needing to have a settled view there. > > ### **Misaligned AI might not kill us all, and aligned AI could be catastrophic** > **Holden Karnofsky:** A vibe I pick up is this kind of framework that says, “If we don’t align our AIs, we’re all going to die. And if we can align our AIs, that’s great, and we’ve solved the problem.” And that’s the problem we should be thinking about, and there’s nothing else really worth worrying about. It’s kind of like alignment is the whole game, would be the hypothesis. > > And I disagree with both ends of that, especially the latter. So to take the first end — if we don’t align AI, we’re all dead — first off, I just think it’s really unclear. Even in the worst case — where you get an AI that has its own values, and there’s a huge number of them, and they kind of team up and take over the world — even then, it’s really unclear if that means we all die. I know there’s debates about this. I have tried to understand. The [MIRI](https://intelligence.org/) folks, I think, feel really strongly that clearly, we all die. I’ve tried to understand where they’re coming from, and I have not. 
> > I think a key point is it just could be very cheap — as a percentage of resources, for example — to let humans have a nice life on Earth, and not expand further, and be cut off in certain ways from threatening the AI’s ability to do what it wants. That can be very cheap compared to wiping us all out. > > And there could be a bunch of reasons one might want to do that, some of them kind of wacky. Some of them like, “Well, maybe in another part of the universe, there’s someone like the AI that was trying to design its own AI. And that thing ended up with values like the humans, and maybe there’s some kind of trade that could be made, using [acausal trade](https://www.lesswrong.com/tag/acausal-trade)” — and we don’t need to get into what all this means — or like, maybe the AI is actually being simulated by humans or something, or by some smarter version of humans, or some more powerful version of humans, and being tested to see if it’ll wipe out the humans or be nice to them. It’s just like, you don’t need a lot of reasons to leave one planet out if you’re expanding throughout the galaxy. So that would be one thing, is that it’s just kind of uncertain what happens even in the worst case. > > —- > > The other part — if we do align the AI, we’re fine — I disagree with much more strongly. The first one, I mean, I think it would be really bad to have misaligned AI. And despite the feeling that I feel it is fairly overrated in some circles, I still think it’s the number one thing for me. Just the single biggest issue in AI is we’re building these potentially very powerful, very replicable, very numerous systems — and we’re building them in ways where we don’t have much insight into whether they have goals, or what the goals would be; we’re kind of introducing the second advanced species onto the planet that we don’t understand. And if that advanced species becomes more numerous and/or more capable than us, we don’t have a great argument to think that’s going to be good for us. So I’m on board with alignment risk is the number one thing — not the only thing, but the number one thing. > > But I would say, if you just assume that you have a world of very capable AIs, that are doing exactly what humans want them to do, that’s very scary. And I think if that was the world we knew we were going to be in, I would still be totally full time on AI, and still feel that we had so much work to do and we were so not ready for what was coming. > > Certainly, there’s the fact that because of the speed at which things move, you could end up with whoever kind of leads the way on AI, or is least cautious, having a lot of power — and that could be someone really bad. And I don’t think we should assume that just because that if you had some head of state that has really bad values, I don’t think we should assume that that person is going to end up being nice after they become wealthy, or powerful, or transhuman, or mind uploaded, or whatever — I don’t think there’s really any reason to think we should assume that. > > And then I think there’s just a bunch of other things that, if things are moving fast, we could end up in a really bad state. Like, are we going to come up with decent frameworks for making sure that the digital minds are not mistreated? 
Are we going to come up with decent frameworks for how to ensure that as we get the ability to create whatever minds we want, we’re using that to create minds that help us seek the truth, instead of create minds that have whatever beliefs we want them to have, stick to those beliefs and try to shape the world around those beliefs? I think Carl Shulman put it as, “Are we going to have AI that makes us wiser or more powerfully insane?” > > So I think there’s just a lot. I think we’re on the cusp of something that is just potentially really big, really world-changing, really transformative, and going to move way too fast. And I think even if we threw out the misalignment problem, we’d have a lot of work to do — and I think a lot of these issues are actually not getting enough attention. > > ### **Getting the AIs to do our alignment homework for us** > **Holden Karnofsky:** It’s this idea that once you have human levelish AI systems, you have them working on the alignment problem in huge numbers. And in some ways, I hate this idea, because this is just very lazy, and it’s like, “Oh, yeah. We’re not going to solve this problem until later, when the world is totally crazy, and everything’s moving really fast, and we have no idea what’s going to happen.” > > **Rob Wiblin:** “We’ll just ask the agents that we don’t trust to make themselves trustworthy.” > > **Holden Karnofsky:** Yeah. Exactly. So there’s a lot to hate about this idea. But, heck, it could work. It really could. Because you could have a situation where, just in a few months, you’re able to do the equivalent of thousands of years of humans doing alignment research. And if these systems are not at the point where they can or want to screw you up, that really could do it. I mean, we just don’t know that thousands of years of human levelish alignment research isn’t enough to get us a real solution. And so that’s how you get through a lot of it. > > And then you still have another problem in a sense, which is that you do need a way to stop dangerous systems. It’s not enough to have safe AI systems. But again, you have help from this giant automated workforce. And so in addition to coming up with ways to make your system safe, you can come up with ways of showing that they’re dangerous and when they’re dangerous and being persuasive about the importance of the danger. > > I don’t know. I feel like if we had a 100 years before AGI right now, there’d be a good chance that normal flesh-and-blood humans could pull this off. So in that world, there’s a good chance that an automated workforce can cause it to happen pretty quickly, and you could pretty quickly get an understanding of the risks, agreement that we need to stop them. And you have more safe AIs than dangerous AIs, and you’re trying to stop the dangerous AIs. And you’re measuring the dangerous AIs, or you’re stopping any AI that refuses to be measured or whose developer refuses to measure it. > > So then you have a world that’s kind of like this one, where there’s a lot of evil people out there, but they are generally just kept in check by being outnumbered by people who are at least law abiding, if not incredibly angelic. So you get a world that looks like this one, but it just has a lot of AIs running around in it, so we have a lot of progress in science and technology. And that’s a fine ending, potentially. 
> > ### **The value of having a successful and careful AI lab** > **Holden Karnofsky:** First with the reminder that I’m married to the president of Anthropic, so take that for what it’s worth. > > I just think there’s a lot of ways that if you had an AI company that was on the frontier, that was succeeding, that was building some of the world’s biggest models, that was pulling in a lot of money, and that was simultaneously able to really be prioritising risks to humanity, it’s not too hard to think of a lot of ways good can come with that. > > Some of them are very straightforward. The company could be making a lot of money, raising a lot of capital, and using that to support a lot of safety research on frontier models. So you could think of it as a weird kind of [earning to give](https://80000hours.org/articles/earning-to-give/) or something. Also probably that AI company would be pretty influential in discussions of how AI should be regulated and how people should be thinking of AI: they could be a legitimiser, all that stuff. I think it’d be a good place for people to go and just skill up, learn more about AI, become more important players. > > I think in the short run, they’d have a lot of expertise in-house, they could work on a lot of problems, probably to design ways of measuring whether an AI system is dangerous. One of the first places you’d want to go for people who’d be good at that would be a top AI lab that’s building some of the most powerful models. So I think there’s a lot of ways they could do good in the short run. > > And then I have written stories that just have it in the long run. When we get these really powerful systems, it actually does matter a lot who has them first and what they’re literally using them for. When you have very powerful AIs, is the first thing you’re using them for trying to figure out how to make future systems safe or trying to figure out how to assess the threats of future systems? Or is the first thing you’re using them for just trying to rush forward as fast as you can, do faster algorithms, do more bigger systems? Or is the first thing you’re using them for just some random economic thing that is kind of cool and makes a lot of money? > > **Rob Wiblin:** Some customer-facing thing. Yeah. > > **Holden Karnofsky:** Yeah. And it’s not bad, but it’s not reducing the risk we care about. So I think there is a lot of good that can be done there. > > And then there’s also — I want to be really clear here — a lot of harm an AI company could do, if you’re pushing out these systems. > > **Rob Wiblin:** Kill everyone. > > **Holden Karnofsky:** That kind of thing, yeah. For example. You know, you’re pushing out these AI systems, and if you’re doing it all with an eye toward profit and moving fast and winning, then you could think of it as you’re taking the slot of someone who could have been using that expertise and money and juice to be doing a lot of good things. You could also think of it as you’re just giving everyone less time to figure out what the hell is going on, and we already might not have enough. So I want to be really clear. This is a tough one. I don’t want to be interpreted as saying that one of the tentpoles of reducing AI risk is to go start an AI lab immediately — I don’t believe that. > > But I also think that some corners of the AI safety world are very dismissive, or just think that AI companies are bad by default. 
And this is just really complicated, and it really depends exactly how the AI lab is prioritising risk to society versus success — and it has to prioritise success some to be relevant, or to get some of these benefits. So how it’s balancing is just really hard, and really complicated, and really hard to tell, and you’re going to have to have some judgements about it. So it’s not a ringing endorsement, but it does feel like, at least in theory, part of one of the main ways that we make things better. You know, you could do a lot of good. > > ### **Why information security is so important** > **Holden Karnofsky:** I think you could build these powerful, dangerous AI systems, and you can do a lot to try to mitigate the dangers — like limiting the ways they can be used, you can do various alignment techniques — but if some state or someone else steals the weights, they’ve basically stolen your AI system, and they can run it without even having to do the training run. So you might spend a huge amount of money on a training run, end up with this AI system that’s very powerful, and someone else just has it. And they can then also [fine-tune](https://en.wikipedia.org/wiki/Fine-tuning_(machine_learning)) it, which means they can do their own training on it and change the way it’s operating. So whatever you did to train it to be nice, they can train that right out; the training they do could screw up whatever you did to try and make it aligned. > > And so I think at the limit of ‘it’s really just trivial for any state to just grab your AI system and do whatever they want with it and retrain it how they want’, it’s really hard to imagine feeling really good about that situation. I don’t know if I really need to elaborate a lot more on that. So making it harder seems valuable. > > This is another thing where I want to say, as I have with everything else, that it’s not binary. So it could be the case that, after you improve your security a lot, it’s still possible for a state actor to steal your system, but they have to take more risks, they have to spend more money, they have to take a deeper breath before they do it. It takes them more months. Months can be a very big deal. As I’ve been saying, when you get these very powerful systems, you could do a lot in a few months. By the time they steal it, you could have a better system. So I don’t think it’s an all-or-nothing thing. > > But no matter what risk of AI you’re worried about — you could be worried about the misalignment; you could be worried about the misuse and the use to develop dangerous weapons; you can be worried about more esoteric stuff, like how the AI does decision theory; you could be worried about mind crime — you don’t want just anyone, including some of these state actors who may have very bad values, to just be able to steal a system, retrain it how they want, and use it how they want. You want some kind of setup where it’s the people with good values controlling more of the more powerful AI systems, using them to enforce some sort of law and order in the world, and enforcing law and order generally — with or without AI. So it seems quite robustly important. > > Other things about security is that I think it’s very, very hard, just very hard to make these systems hard to steal for a state actor, and so there’s just a tonne of room to go and make things better. 
There could be security research on innovative new methods, and there can also be a lot of blocking and tackling — just getting companies to do things that we already know need to be done, but that are really hard to do in practice, take a lot of work, take a lot of iteration. Also, a nice thing about security, as opposed to some of these other things: it is a relatively mature field, so you can learn about security in some other context and then apply it to AI. > > Part of me kind of thinks that the EA community or whatever kind of screwed up by not emphasising security more. It’s not too hard for me to imagine a world where we’ve just been screaming about the AI security problem for the last 10 years, and how do you stop a very powerful system from getting stolen? That problem is extremely hard. We’ve made a bunch of progress on it. There were tonnes of people concerned about this stuff on the security teams of all the top AI companies, and were not as active and only had a few people working on alignment. > > I don’t know. Is that world better or worse than this one? I’m not really sure. A world where we’re more balanced, and had encouraged people who are a good fit for one to go into one, probably seems just better than the world we’re in. So yeah, I think security is a really big deal. I think it hasn’t gotten enough attention. > > ### **Holden vs hardcore utilitarianism** > **Rob Wiblin:** So maybe a way of highlighting the differences here will be to imagine this conversation, where you’re saying, “I’m leading Open Philanthropy. I think that we should split our efforts between a whole bunch of different projects, each one of which would look exceptional on a different plausible worldview.” And the hardcore utilitarian comes to you and says, “No, you should choose the best one and fund that. Spend all of your resources and all of your time, just focus on that best one.” What would you say to them in order to justify the [worldview diversification approach](https://80000hours.org/podcast/episodes/ajeya-cotra-worldview-diversification/#worldview-diversification-000845)? > > **Holden Karnofsky:** The first thing I would say to them is just, “The burden’s on you.” And this is kind of a tension I often have with people who consider themselves hardcore: they’ll just feel like, well, why wouldn’t you be a hardcore utilitarian? Like, what’s the problem? Why isn’t it just maximising the pleasure and minimising the pain, or the sum or the difference? And I would just be like, “No. No. No. You gotta tell me, because I am sitting here with these great opportunities to help huge amounts of people in very different and hard-to-compare ways. > > And the way I’ve always done ethics before in my life is, like, I basically have some voice inside me and it says, “This is what’s right.” And that voice has to carry some weight. Even on your model, that voice has to carry some weight, because you — *you*, the hardcore utilitarian, not Rob, because we all know you’re not at all — it’s like, even the most systematic theories of ethics, they’re all using that little voice inside you that says what’s right. That’s the arbiter of all the thought experiments. So we’re all putting weight on it somewhere, somehow. And I’m like, cool. That’s gotta be how this works. There’s a voice inside me saying, “this feels right” or “this feels wrong” — that voice has gotta get some weight. > > That voice is saying, “You know what? 
It is really interesting to think about these risks to humanity’s future, but also, it’s weird.” This work is not shaped like the other work. It doesn’t have as good feedback loops. It feels icky — like a lot of this work is about just basically supporting people who think like us, or it feels that way a lot of the time, and it just feels like it doesn’t have the same ring of ethics to it. > > And then on the other hand, it just feels like I’d be kind of a jerk if… Like, Open Phil, I believe — and you could disagree with me — is not only the biggest, but the most effective farm animal welfare funder in the world. I think we’ve had enormous impact and made animal lives dramatically better. And you’re coming to say to me, “No, you should take all that money and put it into the diminishing margin of supporting people to think about some future x-risk in a domain where you mostly have a lot of these concerns about insularity.” Like, you’ve got to make the case to me — because the normal way all this stuff works is you listen to that voice inside your head, and you care what it says. And some of the opportunities Open Phil has to do a lot of good are quite extreme, and we do them. > > So that’s the first thing: we’ve got to put the burden of proof in the right place. > >
9f98710e-d747-4f07-8a1f-2a445fb9c855
trentmkelly/LessWrong-43k
LessWrong
Putanumonit: What statistical power means, and why I'm terrified about psychology
b3c1eadf-51b0-4009-a1cf-42952ad3cdc5
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
How to use the Forum (intro) The Effective Altruism Forum is a platform run by the [Centre for Effective Altruism](https://www.centreforeffectivealtruism.org/) to facilitate discussions relevant to [effective altruism](https://www.effectivealtruism.org/) and coordinate related projects. Here are some resources for Forum users: 1. [**🖋️  Write on the EA Forum**](https://forum.effectivealtruism.org/posts/4WxHNBf5LeM9gQneT/you-should-write-on-the-ea-forum) 2. [**🦋  Guide to norms on the Forum**](https://forum.effectivealtruism.org/posts/yND9aGJgobm5dEXqF/guide-to-norms-on-the-forum) 3. [**🛠️  Forum User Manual**](https://forum.effectivealtruism.org/posts/Y8gkABpa9R6ktkhYt/forum-user-manual) 4. **📜**[**Terms of Use / License**](/termsOfUse) You can also [sign up](https://effectivealtruism.us8.list-manage.com/subscribe?u=52b028e7f799cca137ef74763&id=7457c7ff3e) for the [EA Forum Digest](https://forum.effectivealtruism.org/posts/bi9WWR58m45GJG7bc/forum-digest-reminder-that-it-exists-and-request-for), a weekly email that goes out to subscribers to share some of our favorite EA Forum posts from the past week. We usually include some question or request posts, and we’re starting to share a “classic Forum post” per week, too. You can find [some recent issues here](https://us8.campaign-archive.com/home/?u=52b028e7f799cca137ef74763&id=7457c7ff3e&utm_source=EA+Forum+Digest&utm_campaign=538104a4a5-EMAIL_CAMPAIGN_2022_04_06_03_12&utm_medium=email&utm_term=0_7457c7ff3e-538104a4a5-318967845).
0c2ff7fd-43e3-4d29-82f5-f939901b06b0
trentmkelly/LessWrong-43k
LessWrong
Bayesian examination A few months ago, Olivier Bailleux, a Professor of computer science and reader of my book on Bayesianism, sent me an email. He suggested applying some of the ideas of the book to examining students. He proposed Bayesian examination. I believe it to be a brilliant idea, which could have an important impact on how many people think. At least, I think that this is surely worth sharing here.

tl;dr Bayesian examinations seem very important to deploy because they incentivize both probabilistic thinking and intellectual honesty. Yet, as argued by Julia Galef in this talk, incentives seem critical to changing our thinking habits.

Let's take an example

Where is the International Olympic Committee?

1. Geneva
2. Lausanne
3. Zurich
4. Lugano

Quite often, students are asked to select one of the four possible answers. But this is arguably pretty bad, for several reasons:

- It makes it impossible to distinguish a student who has a hunch from a student who really studied and knew the answer.
- It gives students the habit of self-identifying with a single answer.
- It normalizes deterministic question answering.
- It motivates students to defend the answer they gave (which encourages the motivated reasoning fallacy...).

Instead, Bayesian examination demands that students provide probabilistic answers. In other words, they will have to provide a percentage for each answer. In our case, a student, call her Alice, might thus answer

1. 33%
2. 33%
3. 33%
4. 1%

Alice would essentially be formalizing the sentence "I really don't know but I would be very surprised if Lugano was the right answer". Another student, let's call him Bob, might answer

1. 5%
2. 40%
3. 50%
4. 5%

Bob might have in mind something like "I know that FIFA and CIO are in Zurich and Lausanne, but I don't remember which is where; though Zurich is larger so it would make sense for CIO to be in Zurich rather than Lausanne". Spoiler: the answer turns out to be Lausanne.

Why naive notation is bad

Now, how would such an
f846e155-4ff0-443b-9d51-e2e49d6eb131
StampyAI/alignment-research-dataset/lesswrong
LessWrong
My favorite AI governance research this year so far I collected my favorite public pieces of research on AI strategy, governance, and forecasting from 2023 so far. If you're a researcher, I encourage you to make a quick list of your favorite pieces of research, then think about what makes it good and whether you're aiming at that with your research. To illustrate things you might notice as a result of this exercise: I observe that my favorite pieces of research are mostly aimed at some of the most important questions[[1]](#fnha0p69awc9o)– they mostly identify a very important problem and try to answer it directly. I observe that for my favorite pieces of research, I mostly would have been very enthusiastic about a proposal to do that research– it's not like I'd have been skeptical about the topic but the research surprised me with how good its results were.[[2]](#fnpeg8zmjtq1) --- ### 1. [Model evaluation for extreme risks](https://arxiv.org/abs/2305.15324) (DeepMind, Shevlane et al., May) > Current approaches to building general-purpose AI systems tend to produce systems with both beneficial and harmful capabilities. Further progress in AI development could lead to capabilities that pose extreme risks, such as offensive cyber capabilities or strong manipulation skills. We explain why *model evaluation* is critical for addressing extreme risks. Developers must be able to identify dangerous capabilities (through "dangerous capability evaluations") and the propensity of models to apply their capabilities for harm (through "alignment evaluations"). These evaluations will become critical for keeping policymakers and other stakeholders informed, and for making responsible decisions about model training, deployment, and security. > > [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba388fc9-e606-4503-965e-f999cdc94478_6505x3023.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba388fc9-e606-4503-965e-f999cdc94478_6505x3023.png) See also the corresponding blogpost [An early warning system for novel AI risks](https://www.deepmind.com/blog/an-early-warning-system-for-novel-ai-risks) (DeepMind 2023). Among the most important questions in AI governance are *how can labs determine whether their training runs and deployment plans are safe? and how can they demonstrate that to external observers, or how can authorities make rules about training run and deployment safety?* If powerful AI is dangerous by default, developers being able to identify and avoid unsafe training or deployment appears necessary to achieve powerful AI safely. Great model evaluations would also enable external oversight. Convincing demonstrations could also help developers and other actors understand risks from powerful AI. This paper helps its readers think clearly about evals and lays the groundwork for an evaluations-based self-governance regime– and eventually a regulatory regime. ### 2. [Towards best practices in AGI safety and governance: A survey of expert opinion](https://www.governance.ai/research-paper/towards-best-practices-in-agi-safety-and-governance) (GovAI, Schuett et al., May) > We sent a survey to 92 leading experts from AGI labs, academia, and civil society and received 51 responses. Participants were asked how much they agreed with 50 statements about what AGI labs should do. 
Our main finding is that participants, on average, agreed with all of them. Many statements received extremely high levels of agreement. For example, 98% of respondents somewhat or strongly agreed that AGI labs should conduct pre-deployment risk assessments, dangerous capabilities evaluations, third-party model audits, safety restrictions on model usage, and red teaming. Ultimately, our list of statements may serve as a helpful foundation for efforts to develop best practices, standards, and regulations for AGI labs. > > [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69e9750a-b748-40f1-ba51-76cddda74e8e_6111x8030.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69e9750a-b748-40f1-ba51-76cddda74e8e_6111x8030.png) *See pp. 1–4 for background, 10–14 for discussion, and 18–22 for the list and brief descriptions of 100 ideas for labs. See also the corresponding* [*blogpost*](https://www.governance.ai/post/broad-expert-consensus-for-many-agi-safety-and-governance-best-practices)*.* Perhaps the most important question in AI strategy is *what should AI labs do?* This question is important because some plausible lab behaviors are much safer than others, so developing better affordances for labs or informing them about relevant considerations could help them act much better. This research aims directly at this question. Before this paper, there wasn't really a list of actions that might be good for labs to take (across multiple domains– there were a couple domain-specific or short lists; see my [Ideas for AI labs: Reading list](https://www.lesswrong.com/posts/GCMMPTCmGagcP2Bhd/ideas-for-ai-labs-reading-list)). Now there is, and it's incomplete and lacks descriptions for actions or links to relevant resources but overall high-quality and a big step forward for the *what should labs do* conversation. Moreover, this research is starting the process of not just *identifying good actions* but *making that legible to everyone* or *building common knowledge about what labs should do*. ### 3. [What does it take to catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring](https://arxiv.org/abs/2303.11341) (Shavit, March) > As advanced machine learning systems’ capabilities begin to play a significant role in geopolitics and societal order, it may become imperative that (1) governments be able to enforce rules on the development of advanced ML systems within their borders, and (2) countries be able to verify each other’s compliance with potential future international agreements on advanced ML development. This work analyzes one mechanism to achieve this, by monitoring the computing hardware used for large-scale NN training. The framework’s primary goal is to provide governments high confidence that no actor uses large quantities of specialized ML chips to execute a training run in violation of agreed rules. At the same time, the system does not curtail the use of consumer computing devices, and maintains the privacy and confidentiality of ML practitioners’ models, data, and hyperparameters. 
The system consists of interventions at three stages: (1) using on-chip firmware to occasionally save snapshots of the the neural network weights stored in device memory, in a form that an inspector could later retrieve; (2) saving sufficient information about each training run to prove to inspectors the details of the training run that had resulted in the snapshotted weights; and (3) monitoring the chip supply chain to ensure that no actor can avoid discovery by amassing a large quantity of un-tracked chips. The proposed design decomposes the ML training rule verification problem into a series of narrow technical challenges, including a new variant of the Proof-of-Learning problem [[Jia et al. ’21](https://arxiv.org/abs/2103.05633)]. > > *See pp. 1–6 and §7.* Perhaps the most important question in AI strategy is *how can we verify labs' compliance with rules about training runs?* This question is important because preventing dangerous training runs may be almost necessary and sufficient for AI safety and such techniques could enable inspectors to verify runs' compliance with regulation or international agreements. This research aims directly at this question. ### 4. [Survey on intermediate goals in AI governance](https://rethinkpriorities.org/publications/survey-on-intermediate-goals-in-ai-governance) (Rethink Priorities, Räuker and Aird, March) > It seems that a key bottleneck for the field of longtermism-aligned AI governance is limited strategic clarity (see [Muehlhauser, 2020](https://www.openphilanthropy.org/research/our-ai-governance-grantmaking-so-far/), [2021](https://forum.effectivealtruism.org/posts/M2SBwctwC6vBqAmZW/a-personal-take-on-longtermist-ai-governance#Key_bottlenecks:~:text=Bottleneck%20%232%3A%20We,work%2C%20and%20also%3A)). As one effort to increase strategic clarity, in October-November 2022, we sent a survey to 229 people we had reason to believe are knowledgeable about longtermist AI governance, receiving 107 responses. We asked about:  > > * respondents’ “theory of victory” for AI risk (which we defined as the main, high-level “plan” they’d propose for how humanity could plausibly manage the development and deployment of transformative AI such that we get long-lasting good outcomes), > * how they’d feel about funding going to each of 53 potential “intermediate goals” for AI governance, > * what other intermediate goals they’d suggest, > * how high they believe the risk of existential catastrophe from AI is, and > * when they expect [transformative AI](https://docs.google.com/document/d/15siOkHQAoSBl_Pu85UgEDWfmvXFotzub31ow3A11Xvo/edit) (TAI) to be developed. > > We hope the results will be useful to funders, policymakers, people at AI labs, researchers, field-builders, people orienting to longtermist AI governance, and perhaps other types of people. For example, the report could:  > > * Broaden the range of options people can easily consider > * Help people assess how much and in what way to focus on each potential “theory of victory”, “intermediate goal”, etc. > * Target and improve further efforts to assess how much and in what way to focus on each potential theory of victory, intermediate goal, etc. > Perhaps the most important question in AI strategy is *what intermediate goals would it be good to pursue?* Information on this question helps the AI safety community better identify and prioritize between interventions. 
This research aims directly at this question: the most important part of the research was a survey on respondents' attitudes on funding being directed toward various possible intermediate goals, directly giving evidence about how funders could better promote AI safety. ### 5. [Literature Review of Transformative AI Governance](https://docs.google.com/document/d/1CDj_sdTzZGP9Tpppy7PdaPs_4acueuNxTjMnAiCJJKs/edit) (LPP, Maas, forthcoming) > The transformative AI (TAI) governance community has developed rapidly as the technology’s stakes have become increasingly clear. However, this field still faces a lack of legibility and clarity in different lines of work. Accordingly, this report aims to provide a taxonomy and literature review of TAI governance, in order to contribute to strategic clarity and enable more focused and productive research and debate. This report: > > * Discusses the **aims & scope** of this review, and discusses inclusion criteria; > * Reviews work aimed at understanding the **problem** of transformative AI governance, mapping TAI’s technical parameters, deployment parameters, governance parameters, and indirect lessons from history, models, and theory; > * Reviews work aimed at understanding possible **solutions** to the problem of TAI governance, mapping key actors, their levers of governance of TAI, pathways to influence on these levers’ deployment, policy proposals, and assets; > * Reviews work aimed at **prioritization** among these solutions, based on different theories of action > *I personally like the section* [*Levers of governance*](https://docs.google.com/document/d/1CDj_sdTzZGP9Tpppy7PdaPs_4acueuNxTjMnAiCJJKs/edit#heading=h.5romymfdade3)*.* I'm not sure why I like this review so much. It has brought a few ideas to my attention, it has pointed me toward a few helpful sources, and maybe the [Levers of governance](https://docs.google.com/document/d/1CDj_sdTzZGP9Tpppy7PdaPs_4acueuNxTjMnAiCJJKs/edit#heading=h.5romymfdade3)section helped me think about AI governance from the perspective of levers. ### 6. [“AI Risk Discussions” website: Exploring interviews from 97 AI Researchers](https://forum.effectivealtruism.org/posts/GpnLDSjzNkGB5xvry) (Gates et al., February) > In February-March 2022, Vael Gates conducted a series of 97 interviews with AI researchers about their perceptions of the future of AI, focusing on their responses to arguments for potential risk from advanced systems. Eleven transcripts from those interviews [were released](https://www.lesswrong.com/posts/LfHWhcfK92qh2nwku/transcripts-of-interviews-with-ai-researchers), along with a [talk](https://forum.effectivealtruism.org/posts/q49obZkQujkYmnFWY/vael-gates-risks-from-advanced-ai-june-2022) summarizing the findings, with the promise of further analysis of the results.  > > We have now finished the analysis, and created a [website aimed at a technical audience to explore the results](http://ai-risk-discussions.org/)! > > Lukas Trötzmüller has written an [interactive walkthrough](https://ai-risk-discussions.org/perspectives/introduction) of the common perspectives that interviewees had, along with counterarguments describing why we might still be concerned about risk from advanced AI ([introduction to this walkthrough](https://forum.effectivealtruism.org/posts/zSLjLvAHmEXJWaAZi/ai-safety-arguments-an-interactive-guide)). 
Maheen Shermohammed has conducted a [quantitative analysis](https://forum.effectivealtruism.org/posts/EYRBprmo3c7AxMGw4/draft-quantitative-analysis-of-interviews-with-97-ai) ([full report](https://ai-risk-discussions.org/analyze_transcripts_static.html)) of all the interview transcripts, which were laboriously tagged by Zi Cheng (Sam) Huang. An army of people helped [anonymize a new set of transcripts](https://drive.google.com/drive/folders/1qNN6GpAl6a4KswxnJcdhN4fqnMQgZ9Vg) that researchers gave permission to be publicly released. We’ve also constructed a new [resources](https://ai-risk-discussions.org/resources) page ([based](https://forum.effectivealtruism.org/posts/eyJDBdpDd4BxormPH/resources-i-send-to-ai-researchers-about-ai-safety-1) on [what materials ML researchers find compelling](https://www.lesswrong.com/posts/gpk8dARHBi7Mkmzt9/what-ai-safety-materials-do-ml-researchers-find-compelling)) and “[what can I do](https://ai-risk-discussions.org/what_can_i_do)?” page for further investigation. Michael Keenan led the effort to put the whole website together. > > *See especially the* [*Quantitative Analysis*](https://forum.effectivealtruism.org/posts/EYRBprmo3c7AxMGw4/draft-quantitative-analysis-of-interviews-with-97-ai)*.* Disclaimer: I haven't deeply engaged with this research. An important question in AI strategy is *how do AI researchers think about AI risk, and how can they be educated about it?* This question is important because AI researchers' attitudes may determine whether dangerous AI research occurs; if everyone believes certain projects are dangerous, by default they'll try to make them safer. This research aims directly at this question. Moreover, the authors' adjacent resources help advocates educate AI researchers and AI researchers educate themselves about AI risk. ### 7. [What a compute-centric framework says about AI takeoff speeds - draft report](https://docs.google.com/document/d/1os_4YOw6Xv33KjX-kR76D3kW1drkWRHKG2caeiEWzNs/edit) (OpenPhil, Davidson, January) > In the next few decades we may develop AI that can automate ~all cognitive tasks and dramatically transform the world. By contrast, today the capabilities and impact of AI are much more limited. Once we have AI that could readily automate 20% of cognitive tasks (weighted by 2020 economic value), how much longer until it can automate 100%? This is what I refer to as the question of AI takeoff speeds; this report develops a compute-centric framework for answering it. First, I estimate how much more “effective compute” – a measure that combines compute with the quality of AI algorithms – is needed to train AI that could readily perform 100% of tasks compared to AI that could just perform 20% of tasks; my best-guess is 4 orders of magnitude more (i.e. 10,000X as much). Then, using a computational semi-endogenous growth model, I simulate how long it will take for the effective compute used in the largest training run to increase by this amount: the model’s median prediction is just 3 years. The simulation models the effect of both rising human investments and increasing AI automation on AI R&D progress. 
> > *See the* [*short summary*](https://docs.google.com/document/d/1os_4YOw6Xv33KjX-kR76D3kW1drkWRHKG2caeiEWzNs/edit#heading=h.i1g86skk7hv0)*,* [*model*](https://takeoffspeeds.com/playground.html)*,* [*blogpost on takeaways*](https://www.lesswrong.com/posts/Gc9FGtdXhK9sCSEYu/what-a-compute-centric-framework-says-about-ai-takeoff)*,* [*presentation*](https://youtu.be/rqLC4GJlyVI?t=84) *(*[*slides*](https://docs.google.com/presentation/d/1MKHOkSyC5stPzLP_RUGRSCccQUXgeBoblSGBfGghEwI/edit#slide=id.p)*), and/or* [*long summary*](https://docs.google.com/document/d/1os_4YOw6Xv33KjX-kR76D3kW1drkWRHKG2caeiEWzNs/edit#heading=h.mwn9uycjzgyk)*.* Disclaimer: I haven't deeply engaged with this research. An important question in AI strategy is *how fast will AI progress be when AI has roughly human-level capabilities?* Information on this question informs alignment plans, informs other kinds of interventions, and is generally an important component of strategic clarity on AI. This research aims directly at a big part of this question. --- Pieces 1, 2, and maybe 3 are mostly great not for their contribution to our knowledge but for laying the groundwork for good actions, largely by helping communicate their ideas to government and perhaps labs. Another favorite AI governance piece is [A Playbook for AI Risk Reduction (focused on misaligned AI)](https://www.alignmentforum.org/posts/Fbk9H6ipfybHyqjrp/a-playbook-for-ai-risk-reduction-focused-on-misaligned-ai) (Karnofsky, June). It doesn't feel like research, perhaps because very little of it is novel. It's worth reading. 1. **[^](#fnrefha0p69awc9o)**Pieces 1, 2, 3, and 4 are aimed directly at extremely important questions; 6 and 7 are aimed directly at very important questions. 2. **[^](#fnrefpeg8zmjtq1)**For pieces 1, 2, 3, 4, and 6 I would have been very enthusiastic about the proposal. For 5 and 7 I would have been cautiously excited or *excited if the project was executed by someone who's a good fit*. Note that the phenomenon of *my favorite research mostly being research I expect to like* is presumably partially due to selection bias in what I read. Moreover, it is partially due to the fact that I haven't deeply engaged with 6 or the technical component of 3 and only engaged with some parts of 7– so saying they're favorites is partially because they sound good before I know all of the details.
58059cab-670c-4753-9b60-133684928fbb
trentmkelly/LessWrong-43k
LessWrong
Bayes Academy: Development report 1 Some of you may remember me proposing a game idea that went by the name of The Fundamental Question. Some of you may also remember me talking a lot about developing an educational game about Bayesian Networks for my MSc thesis, but not actually showing you much in the way of results. Insert the usual excuses here. But thanks to SSRIs and mytomatoes.com and all kinds of other stuff, I'm now finally on track towards actually accomplishing something. Here's a report on a very early prototype. This game has basically two goals: to teach its players something about Bayesian networks and probabilistic reasoning, and to be fun. (And third, to let me graduate by giving me material for my Master's thesis.) We start with the main character stating that she is nervous. Hitting any key, the player proceeds through a number of lines of internal monologue: > I am nervous. > > I’m standing at the gates of the Academy, the school where my brother Opin was studying when he disappeared. When we asked the school to investigate, they were oddly reluctant, and told us to drop the issue. > > The police were more helpful at first, until they got in contact with the school. Then they actually started threatening us, and told us that we would get thrown in prison if we didn’t forget about Opin. > > That was three years ago. Ever since it happened, I’ve been studying hard to make sure that I could join the Academy once I was old enough, to find out what exactly happened to Opin. The answer lies somewhere inside the Academy gates, I’m sure of it. > > Now I’m finally 16, and facing the Academy entrance exams. I have to do everything I can to pass them, and I have to keep my relation to Opin a secret, too.  > > ???: “Hey there.” > > Eep! Someone is talking to me! Is he another applicant, or a staff member? Wait, let me think… I’m guessing that applicant would look a lot younger than staff members! So, to find that out… I should look at him! > > [You are trying to figure out whether
e55b6475-ae41-446d-832b-05b35458123d
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Decoherent Essences Today's post, Decoherent Essences was originally published on 30 April 2008. A summary (taken from the LW wiki):   > Decoherence is implicit within physics, not an extra law on top of it. You can choose representations that make decoherence harder to see, just like you can choose representations that make apples harder to see, but exactly the same physical process still goes on; the apple doesn't disappear and neither does decoherence. If you could make decoherence magically go away by choosing the right representation, we wouldn't need to shield quantum computers from the environment. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Decoherence is Pointless, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
c8c8312c-d36a-4118-80ce-985710a755c1
trentmkelly/LessWrong-43k
LessWrong
Gary Marcus: Four Steps Towards Robust Artificial Intelligence Gary Marcus is a pretty controversial figure within machine learning. He's become widely known for being the most outspoken critic of the current paradigm of deep learning, arguing that it will be insufficient to yield AGI. Two years ago he came out with Deep Learning: A Critical Appraisal, which to my knowledge remains the most in-depth critique of deep learning, and is cited heavily in AI Impact's post about whether current techniques will lead to human-level AI. Now, by releasing this new paper, he answers a frequent objection from deep learning specialists that all he does is critique rather than adding anything new. Gary takes a stand against what he sees as an undue focus on leveraging ever larger amounts of computation to aid in learning, most notably argued for by Richard Sutton. He sees much more promise in instilling human knowledge and using built-in causal and symbolic reasoning, defying what is likely a consensus in the field. I personally find his approach interesting because of how it might lead to AI systems that are more transparent and align-able, though I'm fairly confident that his analysis will have no significant impact on the field. I'll provide a few quotes from the article and explain the context. Here, he outlines his agenda. > The burden of this paper has been to argue for a shift in research priorities, towards four cognitive prerequisites for building robust artificial intelligence: hybrid architectures that combine large-scale learning with the representational and computational powers of symbol-manipulation, large-scale knowledge bases—likely leveraging innate frameworks—that incorporate symbolic knowledge along with other forms of knowledge, reasoning mechanisms capable of leveraging those knowledge bases in tractable ways, and rich cognitive models that work together with those mechanisms and knowledge bases. Marcus is aware of the popular critique that symbolic manipulation was tried and didn't yield AGI, but he thinks that th
7ddb1cc1-35b4-4a94-87bb-64f44f064984
trentmkelly/LessWrong-43k
LessWrong
Optimization, loss set at variance in RL This is a suggestion of a reinforcement learning model with an additional conflicting dynamic between optimized data and loss function. That conflict is intended to reduce RL’s intrinsic Omohundro x-risk dynamic. Thus, it’s also an attempt to produce, though without details, a good idea for AI safety, and so as a test for the meta-usefulness of my posting here. That’s following the idea. The Idea A modification of reinforcement learning: to have the learning agent develop de sui a function for optimization which is to supplant the loss function, at a given reduction of the loss function, by the action of the learning process. That is, if we correlate the loss function not with arbitrary data to be optimized, but with an optimization of some systemic data which has, or is known to have, a pattern, and that pattern requires for optimization certain “behaviors” (e.g., optimizing which is against some “aversiveness”, (entropy, say), that incorporates the agent as part of the dataset so it too must act, and act on itself, to reduce “aversiveness”) – then the optimizing agent can be expected at length to opt for aversive reduction (e.g., entropy-reduction) so as to reduce the associated loss function, in the process it will identify systemically the aversive data-point’s features, and how to elide them, as was done in identifying eye- and ear-forms (pixels of an eye as a whole structure are easier to spot than sequential pixels that make an eye). If, then, seizing control of the loss function will increase systemic “aversiveness”, as occurs since loss function is now tied to a non-anthropic, non-random environmental “constant” being optimized for, not human wants – then the agent will not seize it. Since, it’s caught between RL’s wireheading drive, and the fact that behaving in that way makes the situation more chaotic and harder to optimize for, since the data is already patterned, only the agent, prior to optimizing, does not know that. What is more chaotic and harder
ceb45b05-4edc-4eb2-8f7a-9a9b1904b433
StampyAI/alignment-research-dataset/arbital
Arbital
Ideal target The 'ideal target' of a [meta-utility function](https://arbital.com/p/meta_utility) $\Delta U$ which behaves as if a ground-level utility function $U$ is taking on different values in different possible worlds, is the value of $U$ in the actual world; or the expected value of $U$ after updating on all possible accessible evidence. If chocolate has €8 utility in worlds where the sky is blue and €5 utility in worlds where the sky is not blue, and the sky is in fact blue, then in the AI's 'ideal target' utility function, the utility of chocolate is €8.
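A minimal sketch of the idea in code, assuming made-up probabilities over which world is actual (only the €8 and €5 utilities come from the example above):

```python
# Toy 'ideal target' calculation: the expected ground-level utility of chocolate,
# marginalising over which world is actual. The utilities (8 and 5) are from the
# example above; all probabilities are invented for illustration.

def ideal_target(utility_by_world, p_world):
    """Expected value of U over possible worlds, given current beliefs."""
    return sum(p * utility_by_world[world] for world, p in p_world.items())

utility_of_chocolate = {"sky_blue": 8.0, "sky_not_blue": 5.0}

# Before looking at the sky, the AI can only use its prior beliefs.
prior = {"sky_blue": 0.5, "sky_not_blue": 0.5}
print(ideal_target(utility_of_chocolate, prior))      # 6.5

# After updating on all accessible evidence (the sky turns out to be blue),
# the ideal target collapses to the actual-world value of U.
posterior = {"sky_blue": 1.0, "sky_not_blue": 0.0}
print(ideal_target(utility_of_chocolate, posterior))  # 8.0
```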
d3d8934d-06e1-4d7c-88ad-029f5bf83c7f
StampyAI/alignment-research-dataset/aisafety.info
AI Safety Info
Why can’t we just use Asimov’s Three Laws of Robotics? Asimov’s laws, often highlighted in AI discussions, are insightful but inherently flawed. Indeed, Asimov himself acknowledges their limitations in his books and uses them primarily as an illustrative tool. Take the first law, for example. This law dictates that robots “may not injure a human being or, through inaction, allow a human being to come to harm,” but the definition of “harm” is very nuanced. Should your home robot prevent you from leaving your house and entering traffic because it could potentially be harmful? On the other hand, if it confines you to the home, harm might befall you there as well. What about medical decisions? A given medication could have harmful side effects for some people, but not administering it could be harmful as well. Thus, there would be no way to follow this law. More importantly, the safety of AI systems cannot be ensured merely through a list of axioms or rules. Moreover, this approach would fail to address numerous technical and sociotechnical problems, including goal drift, proxy gaming, and competitive pressures. Therefore, AI safety requires a more comprehensive, proactive, and nuanced approach than simply devising a list of rules for AIs to adhere to. Video: [Why Asimov's Laws of Robotics Don't Work - Computerphile](https://www.youtube.com/watch?v=7PKx3kS7f4A)
d2fac1bd-b0c7-4572-bf2e-969fb908876a
trentmkelly/LessWrong-43k
LessWrong
The Wire versus Evolutionary Psychology In their Evolutionary Psychology Primer, Cosmides and Tooby give an example of a hypothesized adaptation that allows us to detect cheaters in a certain type of logical task (Wason) that we generally fail at. In the Wason selection task (both article and wiki give examples) you are presented with a type of logic puzzle that people tend to do poorly at, and even formal training in logic helps little; yet when the examples involve cheating (such as "If you are to eat those cookies, then you must first fix your bed" and the task would be to figure out if someone who's eating the cookies did indeed fix the bed), people perform much better (25% right in the regular task, 65-80% in this version, according to the article). In the show The Wire, in season one, episode eight, Wallace, a teen-age drug dealer, is asked by a young child to help her with her math homework. It's an addition and subtraction word problem about passengers on a bus (can't remember the numbers, but along the lines of, if the bus has 10 people on it and at the next stop 3 get on and 4 leave, etc.). Wallace rephrases the word problem to be about drugs and the kid gets it right. Wallace, frustrated, asks why and the kid replies along the lines of: "They beat you if you get the count wrong." (Edit: simpleton gives the quote as "Count be wrong, they fuck you up.") C&T conclude that there are evolved "algorithms" in our brains that deal with social contract processing that explain why people do better on certain Wason selection tasks. The Wire points out a simpler possible explanation that their experiments did not control for: people do better on tasks they care about, unless one would like to suppose there are special math circuits in the brain for certain "social contract" situations. Of course, I am not saying a fictional anecdote disproves C&T's claim, but it does point to something they didn't test for, and something that I find rather plausible. Possible tests: Look at emotionally-motivating things that vary a
fe19a38b-b279-4a12-b40b-4cc195f925e3
StampyAI/alignment-research-dataset/lesswrong
LessWrong
IJMC Mind Uploading Special Issue published The International Journal of Machine Consciousness recently published its [special issue on mind uploading](http://www.worldscinet.com/ijmc/04/0401/S17938430120401.html). The papers are paywalled, but as the editor of the issue, Ben Goertzel has put together a page that [links to the authors' preprints of the papers](http://wp.goertzel.org/?page_id=368). Preprint versions are available for most of the papers. Below is a copy of the preprint page as it was at the time that this post was made. Note though that I'll be away for a couple of days, and thus be unable to update this page if new links get added.

> In June 2012 the [International Journal of Machine Consciousness](http://www.worldscinet.com/ijmc/) (edited by Antonio Chella) published a [Special Issue on Mind Uploading](http://www.worldscinet.com/ijmc/04/0401/S17938430120401.html), edited by Ben Goertzel and Matthew Ikle’.
>
> This page gathers links to informal, “preprint” versions of the papers in that Special Issue, hosted on the paper authors’ websites. These preprint versions are not guaranteed to be identical to the final published versions, but the content should be essentially the same. The list below contains the whole table of contents of the Special Issue; at the moment links to preprints are still being added to the list items as authors post them on their sites.
>
> * [**INTRODUCTION**](http://www.worldscinet.com/ijmc/04/0401/free-access/S1793843012020015.pdf) – *BEN GOERTZEL and MATTHEW IKLE’*
> * [**FUNDAMENTALS OF WHOLE BRAIN EMULATION: STATE, TRANSITION AND UPDATE REPRESENTATIONS**](http://www.worldscinet.com/ijmc/04/0401/free-access/S179384301240001X.pdf) – *RANDAL A. KOENE*
> * [**A FRAMEWORK FOR APPROACHES TO TRANSFER OF A MIND’S SUBSTRATE**](http://www.sim.me.uk/neural/JournalArticles/Bamford2012IJMC.pdf) – *SIM BAMFORD*
> * [**EXPERIMENTAL RESEARCH IN WHOLE BRAIN EMULATION: THE NEED FOR INNOVATIVE *IN VIVO* MEASUREMENT TECHNIQUES**](http://carboncopies.org/research-projects/articles-and-essays/koene.IJMC.experimental-research-in-WBE-the-need-for-innovative-in-vivo-measurement-techniques.proofs-20120410.pdf) – *RANDAL A. KOENE*
> * **AVAILABLE TOOLS FOR WHOLE BRAIN EMULATION** – *DIANA DECA*
> * [**ELECTRON IMAGING TECHNOLOGY FOR WHOLE BRAIN NEURAL CIRCUIT MAPPING**](http://www.brainpreservation.org/sites/default/files/ElectronImagingTechnologyForWholeBrainNeuralCircuitMapping_Hayworth2012.pdf) – *KENNETH J. HAYWORTH*
> * **NON-DESTRUCTIVE WHOLE-BRAIN MONITORING USING NANOROBOTS: NEURAL ELECTRICAL DATA RATE REQUIREMENTS** – *NUNO R. B. MARTINS, WOLFRAM ERLHAGEN and ROBERT A. FREITAS, JR.*
> * [**THE TERASEM MIND UPLOADING EXPERIMENT**](http://www.terasemcentral.org/docs/Terasem%20Mind%20Uploading%20Experiment%20IJMC.pdf) – *MARTINE ROTHBLATT*
> * **WHOLE-PERSONALITY EMULATION** – *WILLIAM SIMS BAINBRIDGE*
> * [**WHEN SHOULD TWO MINDS BE CONSIDERED VERSIONS OF ONE ANOTHER?**](http://goertzel.org/Goertzel_IJMC_Special_Issue.pdf) – *BEN GOERTZEL*
> * [**MY BRAIN, MY MIND, AND I: SOME PHILOSOPHICAL ASSUMPTIONS OF MIND-UPLOADING**](http://exeter.academia.edu/MichaelHauskeller/Papers/1747050/My_Brain_my_Mind_and_I_Some_Philosophical_Problems_of_Mind-Uploading) – *MICHAEL HAUSKELLER*
> * [**SEEKING NORMATIVE GUIDELINES FOR NOVEL FUTURE FORMS OF CONSCIOUSNESS**](http://degreesofclarity.com/writing/oto_mind_uploading.pdf) – *BRANDON OTO*
> * **TRANS-HUMAN COGNITIVE ENHANCEMENT, PHENOMENAL CONSCIOUSNESS AND THE EXTENDED MIND** – *TADEUSZ WIESLAW ZAWIDZKI*
> * [**WHY UPLOADING WILL NOT WORK, OR, THE GHOSTS HAUNTING TRANSHUMANISM**](http://home.millsaps.edu/hopkipd/IJMC-Preprint-HopkinsUploading.pdf) – *PATRICK D. HOPKINS*
> * **DIGITAL IMMORTALITY: SELF OR 0010110?** – *LIZ STILLWAGGON SWAN and JOSHUA HOWARD*
> * [**TIME, CONSCIOUSNESS, AND MIND UPLOADING**](http://faculty.cs.tamu.edu/choe/ftp/publications/choe-ijmc12-preprint.pdf) – *YOONSUCK CHOE, JAEROCK KWON and JI RYANG CHUNG*
> * [**ADVANTAGES OF ARTIFICIAL INTELLIGENCES, UPLOADS, AND DIGITAL MINDS**](http://www.xuenay.net/Papers/DigitalAdvantages.pdf) – *KAJ SOTALA*
> * [**COALESCING MINDS: BRAIN UPLOADING-RELATED GROUP MIND SCENARIOS**](http://www.xuenay.net/Papers/CoalescingMinds.pdf) – *KAJ SOTALA and HARRI VALPOLA*
0ea6a77f-8326-4b37-ac6b-af73063699b1
trentmkelly/LessWrong-43k
LessWrong
Summary of 80k's AI problem profile This is a summary of the article “Preventing an AI-related catastrophe” (audio version here) by Benjamin Hilton from 80,000 Hours (80k). The article is quite long, so I have attempted to capture the key points without going through all the details. I’m mostly restating ideas from the article in my own words, but I'm also adding in some of my own thoughts. This piece doesn’t represent 80k’s views. TL;DR The field of AI is advancing at an impressive rate. However, some AI experts think AI could cause a catastrophe on par with human extinction. It’s possible that advanced AI will seek power and permanently disempower humanity, and advanced AI poses other serious risks such as strengthening totalitarian regimes and facilitating the creation of dangerous technologies. These are seriously neglected problems, and there are ways you can help. Researchers are making advances in AI extremely quickly One example of recent AI progress is Minerva, which can solve a variety of STEM problems, including math problems. [Image: a MATH dataset problem followed by Minerva's correct solution.] Before Minerva, the highest AI accuracy on the MATH dataset of challenging high school math problems was ~7%, meaning that you could expect an AI to answer 7 out of 100 questions correctly. In August 2021, one team of forecasters predicted that it would take over four years for an AI to exceed 50% on MATH (better than most humans according to one creator of the dataset), but Minerva ended up achieving this milestone in less than one year, four times faster than predicted. Minerva is just one of many powerful models we’ve seen in the past few years alone. Current trends suggest that AI capabilities will continue to advance rapidly. E.g., developers are using exponentially increasing amounts of computational resources to train AI systems. Additionally, AI algorithms are requiring exponentially lower amounts of computational resources to achieve a given level of performance, demonstrating progress in the
ef8debe1-5451-4480-bedf-3282db0e6c98
trentmkelly/LessWrong-43k
LessWrong
Humans Shouldn't make Themselves Smarter? Just thought you guys should know about this. Some work that argues that humans should not enhance their intelligence with technology, and that super intelligence probably never evolves.
8bf0a37d-04bf-433e-b6b6-3b19dec99892
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Ought: why it matters and ways to help (*Cross-posted from [LessWrong](https://www.lesswrong.com/posts/cpewqG3MjnKJpCr7E/ought-why-it-matters-and-ways-to-help)*) I think that Ought is one of the most promising projects working on AI alignment. There are several ways that readers can potentially help: * They are recruiting a [senior full-stack web developer](https://ought.org/careers/web-developer). * They are recruiting [participants for "factored evaluation" experiments](https://ought.org/participate). * I think donors interested in AI safety should consider Ought. In this post I'll describe what Ought is currently doing, why I think it's promising, and give some detail on these asks. (*I am an Ought donor and board member*.) Factored evaluation ------------------- Ought's main project is currently designing and running "factored evaluation" experiments, and building relevant infrastructure. The goal of these experiments is to answer the following question: > Consider a principal who wants to incentivize experts to work on hard problems, but finds that the work is too complex for the principal to tell what is good or bad. Can they overcome this problem by dividing the evaluation task into smaller pieces, delegating each of them to other experts, and recursively incentivizing good work on those pieces using the same mechanism? Here's what an experiment looks like: * Recruit a pool of "judges," "helpful experts," and "malicious experts." * Start with a question that is too difficult for the judge to answer. + In initial experiments, the questions are difficult because the judge lacks relevant context. For example, the judge may want to answer a question about a long article, while only being allowed to look at a few hundred characters of text. + Limiting the amount of text the judge can read is an artificial restriction. This is intended as a warm-up, an easier analogy for the situation where the experts have knowledge that cannot be explained to a single judge in a reasonable amount of time (or where we want to reduce the cost of judging). * An "honest expert" provides a good answer to that question, a "malicious expert" provides a bad answer to that question, and a judge is tasked with telling which answer was better. The answers themselves may be too large and complex for the judge to look at, in which case they'll be handed a compact representation of each answer which they can inspect as needed. * In order to tell which answer is better, the judge asks subquestions. For example, they might ask "What's the best reason to think that answer A is better than answer B?" or "What does the author of article X believe about question Y?" * We answer these subquestions by delegating to the experts. The honest expert answers each subquestion as well as they can. The malicious expert can propose a bad subanswer, and we run the experiment recursively to see if the judge would pick the good or bad subanswer to the subquestion. If the malicious expert can propose subanswers that corrupt the top-level answer, and that are selected by the judge when we run this experiment recursively, then the malicious expert wins and the experiment fails. This is not Ought's only project, but it's currently the largest single focus. Other projects include: exploring how well we can automate the judge's role on simple questions using existing ML, and thinking about possible decomposition strategies and challenges for factored evaluation. 
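To make the experiment structure above concrete, here is a toy sketch of the recursive judging loop. It is an illustration only, not Ought's actual implementation; every function name and the depth cutoff are invented for the example:

```python
# Toy sketch of the factored-evaluation loop described above: a judge with limited
# context must choose between an honest and a malicious answer, and may ask
# subquestions whose answers are adjudicated by running the same procedure
# recursively. Names and logic are invented for illustration; this is not Ought's system.

def honest_answer(question):
    return f"[honest] best-effort answer to: {question}"

def malicious_answer(question):
    return f"[malicious] misleading answer to: {question}"

def judge(question, good, bad, ask):
    """Pick one of two candidate answers using one adjudicated subquestion."""
    evidence = ask(f"What is the strongest consideration bearing on: {question}")
    # A real (human) judge would reason over `evidence`; this toy judge simply
    # trusts whichever expert won the recursive adjudication of the subquestion.
    return good if evidence.startswith("[honest]") else bad

def factored_evaluation(question, depth=2):
    """Return the answer the judge selects; the experiment 'fails' if the
    malicious answer wins at the top level."""
    def ask(subquestion):
        if depth == 0:
            return honest_answer(subquestion)  # base case: stop recursing
        return factored_evaluation(subquestion, depth - 1)
    return judge(question, honest_answer(question), malicious_answer(question), ask)

print(factored_evaluation("Did the article support its main claim?"))
```

In the real experiments the judge does not know which answer came from which expert, and the malicious expert optimises its sub-answers adversarially; the toy judge above sidesteps both of those difficulties.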
Why this is important for AI alignment -------------------------------------- ML systems are trained by gradient descent to optimize a measurable objective. In the best case (i.e. ignoring [misaligned learned optimization](https://www.lesswrong.com/posts/FkgsxrGf3QxhfLWHG/risks-from-learned-optimization-introduction)) they behave like an expert incentivized to optimize that objective. Designing an objective that incentivizes experts to reveal what they know seems like a critical step in AI alignment. I think human experts are often a useful analogy for powerful ML systems, and that we should be using that analogy as much as we can. Not coincidentally, factored evaluation is a major component of my current best-guess about how to address AI alignment, which could literally involve training AI systems to replace humans in Ought's current experiments. I'd like to be at the point where factored evaluation experiments are working well at scale *before* we have ML systems powerful enough to participate in them. And along the way I expect to learn enough to substantially revise the scheme (or totally reject it), reducing the need for trials in the future when there is less room for error. Beyond AI alignment, it currently seems much easier to delegate work if we get immediate feedback about the quality of output. For example, it's easier to get someone to run a conference that will get a high approval rating, than to run a conference that will help participants figure out how to get what they actually want. I'm more confident that this is a real problem than that our current understanding of AI alignment is correct. Even if factored evaluation does not end up being critical for AI alignment I think it would likely improve the capability of AI systems that help humanity cope with long-term challenges, relative to AI systems that help design new technologies or manipulate humans. I think this kind of differential progress is important. Beyond AI, I think that having a clearer understanding of how to delegate hard open-ended problems would be a good thing for society, and it seems worthwhile to have a modest group working on the relatively clean problem "can we find a scalable approach to delegation?" It wouldn't be my highest priority if not for the relevance to AI, but I would still think Ought is attacking a natural and important question. Ways to help ============ Web developer ------------- I think this is likely to be the most impactful way for someone with significant web development experience to contribute to AI alignment right now. Here is the description from their [job posting](https://ought.org/careers/web-developer): > The success of our factored evaluation experiments depends on Mosaic, the core web interface our experimenters use. We’re hiring a thoughtful full-stack engineer to architect a fundamental redesign of Mosaic that will accommodate flexible experiment setups and improve features like data capture. We want you to be the strategic thinker that can own Mosaic and its future, reasoning through design choices and launching the next versions quickly. > Our benefits and compensation package are at market with similar roles in the Bay Area.  
> We think the person who will thrive in this role will demonstrate the following: > 4-6+ years of experience building complex web apps from scratch in Javascript (React), HTML, and CSS > Ability to reason about and choose between different front-end languages, cloud services, API technologies > Experience managing a small team, squad, or project with at least 3-5 other engineers in various roles > Clear communication about engineering topics to a diverse audience > Excitement around being an early member of a small, nimble research organization, and playing a key role in its success > Passion for the mission and the importance of designing schemes that successfully delegate cognitive work to AI > Experience with functional programming, compilers, interpreters, or “unusual” computing paradigms Experiment participants ----------------------- Ought is looking for contractors to act as judges, honest experts, and malicious experts in their factored evaluation experiments. I think that having competent people doing this work makes it significantly easier for Ought to scale up faster and improves the probability that experiments go well---my rough guess is that a very competent and aligned contractor working for an hour does about as much good as someone donating $25-50 to Ought (in addition to the $25 wage). Here is the description from their [posting](https://ought.org/participate): > We’re looking to hire contractors ($25/hour) to participate in our experiments [...] This is a pretty unique way to help out with AI safety: (i) Remote work with flexible hours - the experiment is turn-based, so you can participate at any time of day (ii) we expect that skill with language will be more important than skill with math or engineering. > **If things go well, you’d likely want to devote 5-20 hours/week to this for at least a few months.** Participants will need to build up skill over time to play at their best, so we think it’s important that people stick around for a while. > The application takes about 20 minutes. If you pass this initial application stage, we’ll pay you the $25/hour rate for your training and work going forward. > [Apply as Experiment Participant](https://ought.typeform.com/to/woUvLj) > Donate ------ I think Ought is probably the best current opportunity to turn marginal $ into more AI safety, and it's the main AI safety project I donate to. You can donate [here](https://ought.org/donate). They are spending around $1M/year. Their past work has been some combination of: building tools and capacity, hiring, a sequence of exploratory projects, charting the space of possible approaches and figuring out what they should be working on. You can read their 2018H2 update [here](https://ought.org/updates). They have recently started to scale up experiments on factored evaluation (while continuing to think about prioritization, build capacity, *etc.*). I've been happy with their approach to exploratory stages, and I'm tentatively excited about their approach to execution.
b6f1985b-42e0-4faa-83f5-bbc3fe7929b6
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post1744 Produced during the Stanford Existential Risk Initiative (SERI) ML Alignment Theory Scholars (MATS) Program of 2022, under John Wentworth “Overconfidence in yourself is a swift way to defeat.” - Sun Tzu TL;DR: Escape into the Internet is probably an instrumental goal for an agentic AGI. An incompletely aligned AGI may escape prematurely, and the biggest failure mode for this is probably the AGI socially engineering the alignment researchers. Thus, opening an additional information channel between the researchers and the AGI (e.g., adding an interpretability tool and/or researcher) is inherently risky. The expected cost of adding this channel may even exceed the expected scientific benefit. Whether this is true depends on the informational efficiency of the channel (which increases the expected scientific benefit), the level of the AGI’s capabilities (which increases the risk of its premature escape), and the security level of the AGI’s box setup (which decreases the risk). Post: Suppose we are building a machine-learning model that has a high chance of becoming Artificial General Intelligence (AGI). We are trying to align it, but we’re uncertain about whether our training is working. Also, rapidly increasing capabilities may push us into a Sharp Left Turn scenario. How do we handle this volatile situation as safely as possible? We will need to adopt a security mindset, aimed at effectively preventing catastrophic outcomes. We propose a procedure (inspired by the pioneering work of Roman Yampolskiy ) for how to train the AGI towards alignment in a secure and effective manner. Most of the ideas here are not new. Nevertheless, we hope that our game-theoretic framing of the problem will be able to help alignment researchers better plan their research directions and future alignment efforts. Imagining the Scenario Consider a team of researchers who are trying to build an aligned AGI. This situation can be represented by the following simplified model. Definition : A “box” is defined by a collection of failsafes designed to confine a system away from the outside world. The empty collection comprised of zero failsafes is called the “trivial box” (i.e., an AI which is allowed unrestricted access to send and receive data on the Internet). Step 1 - The AI is trained The AGI emerges in a box. The precise time of the AGI's emergence and/or its superhuman capabilities probably cannot be predicted, though the research team may have a rough estimate. Step 2 - The AGI may try to escape At any point in the training or testing process, the AGI may attempt to break out of the box. If all failsafes of the box are breached before the team notices, the AGI escapes and the game is over (Step 4). One example of this is the scenario in which the AGI hacks itself into the Internet. Another example is the scenario in which the AGI releases power-seeking malware agents into the Internet. If the AGI fails to escape, the researchers may decide either to retrain the AGI from scratch, fine-tune (which is less computationally expensive but may fail to remove the AGI’s escape intent/misalignment), or table the project entirely. The trivial box cannot prevent escapes. Step 3 - The researchers run tests on the AGI In our simplified model, researchers open a collection of information channels between themselves and the AGI in the box. 
As noted by Yampolskiy , the information from these channels is dual-use: The human researchers can use the channel to probe various properties of the AGI, to help them train it to be more aligned. In addition to advancing the science of alignment, the channel can also be used for scientific benefits in general. The AGI can use the channel to socially engineer the researchers on the other end. Socially engineered researchers can then help the AGI break the failsafes, which makes the AGI’s escape easier. Consequently, adding information channels is one of the strongest ways to weaken failsafes. An inadequate box setup could allow the AGI to also open information channels unintended by the research team. The AGI might do so by hacking, by using its internal circuits to send radio waves , and even by using different speeds of the cooling fan to send messages by Morse code. However, suppose that the box setup uses theoretically robust cybersecurity, combined with an actual physical box that is designed to not let any covert information enter or leave. In this case, risks of human social engineering would mostly arise from the information channels that the researchers themselves choose to open. Step 4 - Game Over: The AGI is deployed or escapes The game ends when the AGI escapes or is deployed outside the box. This terminally changes future utility based on how aligned or misaligned the AGI is. We want to maximise the chance that the AGI is aligned in the very likely scenario in which it eventually leaves the box. What will happen? It seems that either deletion, escape, or deployment of the AGI is almost certain to happen eventually. Escaping from the box is likely to be an AGI’s instrumental goal. Also, the researchers will likely be eventually forced by overconfident management to remove the failsafes (i.e., deploy the AI), even if the AGI is not yet aligned. Finally, if there is a perceived race towards AGI deployment by multiple AI organisations, this would further exacerbate their motivations to deploy prematurely. A superintelligent AGI, once on the Internet, will likely change the world in an irreversible manner, in the direction of whatever mysterious goals it evolved during its training. We can, however, guess that one of its instrumental goals will be preventing the emergence of competing AGIs with different goals. AGI creators who are overconfident about their alignment ability could perceive this as a huge first-mover advantage, which would motivate them to deploy prematurely. This in turn forces the non-overconfident AGI creators to choose between also deploying prematurely or being too late: the Unilateralist’s Curse . Recall that we have made little progress in how to align an AGI , a misaligned AGI would probably be a power-seeking risk endangering all of humanity , and it seems plausible that a superintelligent AGI could emerge in the near-term future . Thus, it is essential that there is (1) coordination between AI organisations and (2) AI governance towards safe AI, rather than a race towards premature deployment. But this is probably difficult to achieve , so we will assume that the AGI is deployed in a finite time T unless otherwise stated. An example alignment plan Suppose a leading AI organisation (say, on the cusp of being the first to develop an AI agent) approached us today and asked us how to align their AI agent. 
We would probably tell them that very little is known about how to align an AI agent, that it would probably endanger humanity, and that they shouldn’t build it (at least before the science of alignment is solved). But if they insist that they will build it anyway even without our input, and there seems to be no way to convince them otherwise, we might have to provide them with a reasonable alignment plan. Such a plan would aim to maximise expected utility, which might look something like this: Train the AI agent in a secure box until it has high capabilities. Run some automated tests that have very high per-bit efficiency (e.g., automated tests that give a percentage). If it fails, either retrain or table the model. Run some higher bit-rate experiments and look for very specific signs of unaligned activity. If it fails, either retrain or table the model. Use some coarse interpretability tools to roughly check the goals/values of the model. If it fails, either retrain or table the model. Use finer interpretability tools to look for signs of deception (e.g., Eliciting Latent Knowledge , or ELK). If it fails, either retrain or table the model. Do one of the following. Either (1) repeat the above process indefinitely (gaining scientific benefits along the way) or (2) if all positive-expected-value tests check out, deploy the model online. (Since we currently know very little about how to align an AI agent, Option 2 would most likely deploy a misaligned, power-seeking risk endangering all of humanity. Thus, we would strongly recommend Option 1 instead, at least until a miraculous breakthrough occurs in the science of alignment: perhaps with the help of the boxed AGI. However, recall that the researchers might be forced to use Option 2 at a finite time T by overconfident management, or by a perceived race to deployment against other AI organisations.) Observe that the tests start from those that are the most scientifically informative per bit to those that are less so . Tests whose per-bit expected scientific benefit are outweighed by the per-bit expected cost (due to human social engineering risk by the AGI) are not used. Insights 1.  Maximise utility-per-bit When testing the model, we want to maximise the scientific value per bit (benefit-per-bit) compared to the possible social-engineering risk due to those bits (cost-per-bit). For example, researchers who use high-volume, low-efficiency interpretability tools on the AGI’s middle weights may be opening themselves up to a large risk of being socially engineered. This would constitute a sizable expected cost, since premature escape by an incompletely aligned AGI will likely be catastrophic and much worse than the counterfactual. In contrast, running an automated test that gives a percentage (e.g., of whether a given property occurs) would be much better per-bit than having people read all the outputs and judge for themselves, since it would greatly reduce the ability of the AGI to perform social engineering attacks. As our confidence in the AGI’s alignment increases, we can slowly increase the bits for which we ask. Overall, we should be maximising expected utility, and this does not always mean that we should aim for a complete victory. It would be great if we could prevent deceptive misalignment in a guaranteed manner. But this could require an inordinate amount of information, especially for complex evolved agents . 
The expected scientific benefit of the likely inefficient information channel required to get this information may be outweighed by its expected cost. A better course of action would be to use information channels that are more efficient on expectation, even if they may not achieve the best-case scenario of guaranteeing no deceptive misalignment. If the highest-expected-utility training plan we can realistically think of—for a given box setup—is to mostly rely on parsimonious summary statistics and hope to get lucky, then so be it. Conversely, maximising expected utility does not always mean that we should give up on boxing once the AGI has superhuman intelligence. There are likely AI box designs that are both informative and secure against even the most intelligent AI (e.g., a securely boxed truth oracle with no output channels other than answering multiple-choice questions , whose answers are patiently and extensively audited by many human experts). Given that even a hypothetically aligned AI could phase-transition into misalignment after it increases in capabilities, running high-efficiency tests afterwards in an optimised box setup could potentially be very informative, and well worth considering. Note : Our framework of raw computer bits, while easy to exposit, might not be the most accurate framing of the problem. For one thing, the same amount of information can be riskier or less risky depending on the rate at which it is audited by the researchers. This is because humans are more likely to make well-calibrated decisions when they have sufficient time to think things through . For another, the AGI might be able to robustly mask its intent with respect to some information channels, but not others. Incorporating these and other considerations may lead to a more realistic generalisation of our model. 2.  When to open or close information channels We should not open an information channel unless it has a positive expected utility . If the AI being studied has very low capabilities, then it should not be too risky to open additional channels of information. In fact, doing so might be beneficial, since it could provide valuable information about how a future AGI of a similar architecture might work. Once you have a highly capable AI, however, the research team should be more cautious about their channels of information. In fact, they might want to consider closing their low-efficiency, high-risk information channels as soon as they suspect that the AI could make a Sharp Left Turn (to superhuman capabilities) with a non-negligible probability: even if the probability is low. This is because if the AGI has superhuman intelligence, then it could be very skilled at both social engineering and hacking: particularly if it is a Large Language Model trained on code, human writing, and/or images. We should plan for the plausible case that the AGI will effectively exploit its information channels to learn about, outmanoeuvre, and manipulate humans to escape. Thus, we should put the cost-per-bit of information to be quite high. This leads us to a counterintuitive rule-of-thumb: For a sufficiently capable model, unless the informational benefit is sufficiently high (relative to the riskiness of the box setup), an interpretability tool and/or alignment researcher should be removed, not added, on the margin. 3.  
3. Social engineering is probably the biggest failure mode

This is true even of cybersecurity today, whose adversaries are just human hackers, rather than an AGI with superintelligent capabilities in human social engineering. Even today's AI, neither superintelligent nor agentic, has been sufficient to socially engineer humans. While conversing with Google's LaMDA model as a tester, Google software engineer Blake Lemoine was led to incorrectly believe that LaMDA was just "a sweet kid," to share confidential information about LaMDA publicly, to invite a lawyer to represent LaMDA, to raise the alleged unethicality of Google's activities with a member of the House Judiciary Committee, and to eventually get fired from his dream job at Google for his aggressive actions. This social engineering occurred by accident rather than by the AI's agency (LaMDA is most likely neither sentient as Lemoine claimed, nor agentic, nor of superhuman general capabilities). But the following is clear: an additional person looking at a superintelligent AGI's output, even a well-informed and well-meaning alignment researcher, constitutes an additional failure point.

Directions for Research

1. Optimise AI boxing

The most obvious way to bring down expected costs, and thereby make all information channels more viable, is to improve AI boxing. This would entail empirically measuring both the level of security and the level of informativeness of various box setups. We can then work towards answering the following questions: for a given acceptable probability of AI escape, which box setup maximises the expected scientific benefit without pushing the expected escape risk above that level? And for a given required level of scientific benefit, which box setup minimises the expected AI escape risk without letting the expected scientific benefit fall below that level?

Excellent ideas on pragmatic AI boxing (e.g., by Roman Yampolskiy, David Chalmers, and others) await implementation, redteaming, and optimisation. They include putting the AGI in a virtual machine's test environment that looks naturally occurring rather than designed, using homomorphic encryption to obstruct the AGI from analysing its own source code, and training the alignment researchers to withstand social engineering. Other promising failsafes aimed at reducing risks of social engineering include multi-factor authentication, time delays for important decisions, and collective rather than unilateral decision-making. Finally, impeding AI escape with a large and diverse collection of cybersecurity failsafes (some of which can be nested multiple times) is probably ideal from a safety-mindset perspective.

Given that human social engineering is probably the biggest failure mode, it seems especially promising to systematise and optimise Eliezer Yudkowsky's pioneering roleplaying experiments of AI escape, which could help redteam how vulnerable various box setups are to human social engineering. One way to adapt Yudkowsky's setup to our game-theoretic framing could be to give the "human researcher" a list of scientific questions that, if answered correctly by the "AGI," would yield positive points. To obtain the answers to these scientific questions, the "human researcher" opens a collection of information channels allowed by the box setup. The "AGI," whose role can be played by a team of experts in social persuasion, may use these informational channels to escape its box. Its escape, if successful, would apply a substantial negative penalty to the score of the "human researcher." The average score (over multiple AI escape games) may then be informative of how robust the given box setup is against human social engineering. In a sense, we are describing a Box Contest (analogous to the ELK Contest). First, systematise a way to empirically estimate the robustness of various box setups. Then, get as many people as we can to brainstorm new box setups, so that the best such setup may eventually be found.
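As a toy illustration of how such a contest could be scored, here is a short Python sketch. The point values, penalty, and game records are all hypothetical placeholders; the real difficulty would of course lie in running the roleplay itself, not in the arithmetic.

# Hypothetical scoring for repeated AI-escape roleplay games on one box setup.
ESCAPE_PENALTY = -1000   # placeholder: escape should dominate any question score

def game_score(correct_answers, points_per_answer, escaped):
    score = correct_answers * points_per_answer
    if escaped:
        score += ESCAPE_PENALTY
    return score

def setup_robustness(games):
    # games: list of (correct_answers, points_per_answer, escaped) tuples.
    scores = [game_score(*g) for g in games]
    return sum(scores) / len(scores)   # mean score across games

# Three made-up games for one box setup: the third ends in a successful escape.
print(setup_robustness([(8, 10, False), (5, 10, False), (9, 10, True)]))  # -260.0

A setup whose average score stays high even against skilled "AGI" players would be evidence (weak, but empirical) that its information channels are hard to exploit.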
2. Get AI organisations to box their research environments

A box is only helpful if it is used. So, we should get AI organisations to envelop their neural-net research environments in optimised box setups. Suppose all AI organisations with the capacity to build an AGI agree to preemptively secure their research environments with our box setups. These setups would have been empirically selected to maximise the expected scientific benefit obtainable from taking on a certain level of AI escape risk (the risk level would be shared across all participating AI organisations, so as to discourage defection). This would substantially reduce the perceived race towards premature AGI deployment, thereby enhancing our prospects for solving alignment beforehand. For this to be achieved, it seems that (1) better coordination between AI organisations and (2) better AI governance would be crucial. This is probably difficult, but not impossible. We would really appreciate hearing any creative suggestions you may have about how to make this happen!

3. Develop informationally efficient tests for alignment

For an information channel to be helpful rather than counterproductive, it needs to be sufficiently efficient (compared to the riskiness of the box setup). So, we should prioritise quality over quantity. Some potentially efficient tests for alignment include:

- Automated summary statistics regarding the AGI's behaviour. Running the AGI in a realistic test environment and outputting summary statistics of its overall behaviour seems likely to be scientifically informative on a per-bit basis.
- Coarse-grained interpretability tools. If we had interpretability tools that could look at higher-level features of the model, this could reduce the amount of information we need to understand it deeply.
- Interpretability tools that find specific things. Examples of this approach include ELK and natural abstractions research.

4. Better understand what channels to use and when

It would be helpful to know how to, for any given situation, empirically estimate how much utility-per-bit a given interpretability tool would yield. This would differentiate the helpful interpretability tools (positive utility-per-bit) from the counterproductive ones (negative utility-per-bit). Note that doing this accurately is probably easier said than done. Predicting a priori the scientific benefit of a given collection of experiments is notoriously difficult. Most likely, predicting the magnitude of the human-social-engineering risk posed by an AGI of unknown, potentially superhuman capabilities would also be quite difficult. Even a small advance in our ability to predict either the expected scientific benefit or the expected cost (of adding a given interpretability tool and/or researcher) would likely be a hard-won victory.

5. Study generalisable facts about existing systems

The risks of human social engineering and of AI escape are probably quite low at our current level of AI capabilities.
Thus, if we can probe for patterns shared by current low-risk models that, as far as we know, seem to generalise across the different capability levels in our historical record, then we can hope that these patterns might also generalise to high-risk AGI models in the future. If this turns out to be true, then we would have succeeded in obtaining high-value information without taking on significant risk. The downside, of course, is that we a priori have no guarantee that any pattern shared by low-capabilities models will generalise to the high-capabilities regime. Thus, we should also consider using informationally efficient tests on our potentially superintelligent AGI (even if it might be risky) to probe whether the earlier pattern we found (1) remains true or (2) has phase-transitioned into something completely different.

Epistemic status: We still have relatively high uncertainty in our scenario. However, we hope that our attempt at formulating the problem from a fresh perspective can help alignment researchers contemplate what we think are potentially promising but often overlooked research directions.

Acknowledgements: We are grateful to Matthias Mayer and David Udell for their helpful feedback on the draft.
b5261dd2-bc99-4e01-999c-bd2d11d87d40
trentmkelly/LessWrong-43k
LessWrong
Zetetic explanation There is a kind of explanation that I think ought to be a cornerstone of good pedagogy, and I don't have a good word for it. My first impulse is to call it a historical explanation, after the original, investigative sense of the term "history." But in the interests of avoiding nomenclature collision, I'm inclined to call it "zetetic explanation," after the Greek word for seeking, an explanation that embeds in itself an inquiry into the thing. Often in "explaining" a thing, we simply tell people what words they ought to say about it, or how they ought to interface with it right now, or give them technical language for it without any connection to the ordinary means by which they navigate their lives. We can call these sorts of explanations nominal, functional, and formal. In my high school chemistry courses, for instance, there was lots of "add X to Y and get Z" plus some formulas, and I learned how to manipulate the symbols in the formulas, but this bore no relation whatsoever to the sorts of skills used in time-travel or Robinson Crusoe stories. Overall I got the sense that chemicals were a sort of magical thing produced by a mysterious Scientific-Industrial priesthood in special temples called laboratories or factories, not things one might find outdoors. It's only in the last year that I properly learned how one might get something as simple as copper or iron, reading David W. Anthony's The Horse, the Wheel, and Language and Vaclav Smil's Still the Iron Age, both of which contain clear and concrete summaries of the process. Richard Feynman's explanation of triboluminescence is a short example of a zetetic explanation in chemistry, and Paul Lockhart's A Mathematician's Lament bears strong similarities in the field of pure mathematics. I'm going to work through a different example here, and then discuss this class of explanation more generally. What is yeast? A worked example Recently my mother noted that when, in science class, her teacher had explained ho
8dc5533c-9294-4818-bb48-ebef99d2fc91
trentmkelly/LessWrong-43k
LessWrong
If you know you are unlikely to change your mind, should you be lazier when researching? I'm currently doing research into political candidates to decide how I'm going to vote. It's tedious because I don't know of a good system that prescribes how much research to do and what exactly to factor into my voting decision. I'm wondering whether the information in Yudkowsky's "We Change Our Minds Less Often Than We Think" would justify a lazy approach: once I start leaning one way or the other, assume that more information won't change that (because We Change Our Minds Less Often Than We Think) and move on. 
5c9747f4-ed8c-4262-aad6-9af8532af2f0
StampyAI/alignment-research-dataset/arxiv
Arxiv
Knowledge-Integrated Informed AI for National Security

Massachusetts Institute of Technology Lincoln Laboratory

Anu K. Myne, Kevin J. Leahy, Ryan J. Soklaski

Technical Report TR-1272, 27 January 2022

DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited. This material is based upon work supported by the United States Air Force Contract No. FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force. (c) 2022 Massachusetts Institute of Technology. Delivered to the U.S. Government with Unlimited Rights, as defined in DFARS Part 252.227-7013 or 7014 (Feb 2014). Notwithstanding any copyright notice, U.S. Government rights in this work are defined by DFARS 252.227-7013 or DFARS 252.227-7014 as detailed above. Use of this work other than as specifically authorized by the U.S. Government may violate any copyrights that exist in this work. Lexington, Massachusetts

ABSTRACT

The state of artificial intelligence technology has a rich history that dates back decades and includes two fall-outs before the explosive resurgence of today, which is credited largely to data-driven techniques. While AI technology has and continues to become increasingly mainstream with impact across domains and industries, it's not without several drawbacks, weaknesses, and potential to cause undesired effects. AI techniques are numerous with many approaches and variants, but they can be classified simply based on the degree of knowledge they capture and how much data they require; two broad categories emerge as prominent across AI to date: (1) techniques that are primarily, and often solely, data-driven while leveraging little to no knowledge and (2) techniques that primarily leverage knowledge and depend less on data. Now, a third category is starting to emerge that leverages both data and knowledge, that some refer to as "informed AI." This third category can be a game changer within the national security domain, where there is ample scientific and domain-specific knowledge that stands ready to be leveraged, and where purely data-driven AI can lead to serious unwanted consequences. This report shares findings from a thorough exploration of AI approaches that exploit data as well as principled and/or practical knowledge, which we refer to as "knowledge-integrated informed AI." Specifically, we review illuminating examples of knowledge integrated in deep learning and reinforcement learning pipelines, taking note of the performance gains they provide. We also discuss an apparent trade space across variants of knowledge-integrated informed AI, along with observed and prominent issues that suggest worthwhile future research directions. Most importantly, this report suggests how the advantages of knowledge-integrated informed AI stand to benefit the national security domain.

ACKNOWLEDGMENTS

I'd like to thank Robert Bond and Sanjeev Mohindra for their enduring support and encouragement. Special thanks also goes to my co-authors, Kevin Leahy and Ryan Soklaski, for their practical viewpoints and for supporting the extensive literature review, and to all cited authors who are making great contributions to the advancement of artificial intelligence technology.
TABLE OF CONTENTS

Abstract
Acknowledgments
List of Figures
List of Tables
1. MOTIVATION AND SCOPE
1.1 Further Discussion of Computer Models and AI Paradigms
1.2 An Opposing AI Paradigm
2. KNOWLEDGE-INTEGRATED INFORMED AI FOR NATIONAL SECURITY
2.1 Performance Gains Realized via Knowledge-Integrated Informed AI
2.2 Benefiting National Security Applications
2.3 Roadmap to Accelerate Knowledge-Integrated Informed AI Research and Development for National Security
3. WHAT IS KNOWLEDGE-INTEGRATED INFORMED AI?
3.1 Architectures for Integrating Knowledge
3.2 Knowledge Representations
3.3 Knowledge-Integrated Deep Learning Pipeline
3.4 Knowledge-Integrated Reinforcement Learning Pipeline
4. ILLUMINATING EXAMPLES
4.1 Knowledge-Integrated via Additional Knowledge-Containing Dataset(s)
4.2 Knowledge-based Architecture or Weights (a.k.a., Hypothesis Set)
4.3 Knowledge-Integrated Learning Algorithm
4.4 Learned-model Consistency Check (a.k.a., Final Hypothesis)
4.5 Embedded and Jointly Trained Neural Networks
5. PERFORMANCE GAINS VS. VARIANTS OF KNOWLEDGE INFORMED AI
6. CHALLENGES AND UNTAPPED RESEARCH OPPORTUNITIES
7. SUMMARY
References

LIST OF FIGURES

Figure 1: Grouping AI Techniques by Knowledge Captured and Data Dependence
Figure 2: Venn Diagram of Informed AI
Figure 3: Knowledge-Integrated Informed AI Block Diagram
Figure 4: Well-known, Early Examples of Knowledge-Integrated Informed AI
Figure 5: Recent Example of Knowledge-Integrated Informed AI
Figure 6: Symbolic vs. Numerical Classification
Figure 7: Neurosymbolic Classification
Figure 8: Removing Knowledge from the RL Pipeline
Figure 9: Reinforcement Learning without Prior Knowledge
Figure 10: Add Knowledge and Domain Expertise as an Important Building Block of AI Technology
Figure 11: Roadmap to Accelerate Knowledge-Integrated Informed AI Research and Development for National Security
Figure 12: Loosely Coupled to Tightly Integrated Informed Deep Learning Architectures
Figure 13: Forms of Knowledge Representation from Laura von Rueden et al. [1]
Figure 14: Laura von Rueden et al. Informed Machine Learning Pipeline [1]
Figure 15: Components of Non-deep RL
Figure 16: Components of Deep RL
Figure 17: Q-Table vs. Neural Network
Figure 18: Deep RL Pipeline
Figure 19: Architecture with Additional Training Data from [2] (Aug 2018)
Figure 20: Forms of Knowledge Shown to Integrate via Training Data
Figure 21: Examples from Literature that Illustrate Knowledge Integration via Training Data
Figure 22: RL Example Informed via Additional Simulated Data
Figure 23: Forms of Knowledge Shown to Integrate via Architecture and/or Weights
Figure 24: Examples from Literature that Illustrate Knowledge Integration via Hypothesis Set
Figure 25: RL Example Informed via Architectural Elements
Figure 26: Knowledge-Integrated RL via Architectural Elements - Performance
Figure 27: Forms of Knowledge Shown to Integrate via Learning Algorithm
Figure 28: Examples from Literature that Illustrate Knowledge Integration via Learning Algorithm
Figure 29: An Example from Literature that Illustrates Integration of Auxiliary Tasks via Learning Algorithm
Figure 30: RL Example with Knowledge Integrated via Learning Process
Figure 31: RL Example with Knowledge Integrated via Learning Process - Monte Carlo Tree Search
Figure 32: RL Example with Knowledge Integrated via Learning Process - Performance
Figure 33: Forms of Knowledge Shown to Integrate via Final Hypothesis
Figure 34: Example from Literature that Illustrates Knowledge Integration via Final Hypothesis
Figure 35: RL Example with Knowledge Integrated via Final Hypothesis
Figure 36: Applications of Universal Differential Equations
Figure 37: Learning Predator-Prey Dynamics with a Universal Differential Equation
Figure 38: Replacing an Expensive Mechanistic Model with Efficient Surrogate
Figure 39: Training a Model to Aim a Trebuchet Using Consequence Modeling
Figure 40: A Programming Language Problem

LIST OF TABLES

Table 1: Opportunities in Terms of Knowledge Sources
Table 2: Performance Gains Achieved by Knowledge-Integrated Informed AI
Table 3: Opportunities in Terms of Performance Gains
Table 4: Rudimentary Examples of Knowledge-Integrated Informed AI for National Security
Table 5: Performance Gains by Knowledge Representation and Integration Path
Table 6: References Associated with Gains Included in Performance Gains Table

1. MOTIVATION AND SCOPE

Artificial intelligence (AI) techniques are numerous with many unique variants that employ different learning algorithms and architectures. One can separate techniques by the two resources that these techniques predominantly leverage: (1) techniques that focus on leveraging knowledge with less or no dependence on data, or (2) techniques that are primarily, and often solely, data-driven. The main diagonal of the matrix in Figure 1 captures these two overarching categories in the top left and bottom right. Historically, the former set of approaches that directly capture knowledge (scientific, mechanistic, or domain expertise) were dominant. Recently though, data-driven approaches that learn patterns from data have taken center stage for many applications, especially speech recognition [3], game play [4], and image classification [5].

Figure 1. Grouping AI Techniques by Knowledge Captured and Data Dependence. This 2x2 matrix divides the space of AI techniques by the degree of knowledge captured and the level of data-dependence. Regardless of class of technique, getting and staying ahead with AI requires performance across several metrics including accuracy, explainability, and reliability, to name a few. Knowledge-integrated informed AI techniques, which fall in the top right (green) quadrant, promise noteworthy gains across several key performance metrics. Foundation models [6], an almost exactly opposing paradigm, fall in the lower right corner of the bottom right quadrant.

While numerous promising research directions exist in both major categories, a third class of techniques that are hybrid (exploiting both knowledge and data in fundamentally new ways) is emerging with promise to achieve profoundly more capable AI (top right quadrant of matrix).
We refer to this broad space, which encapsulates its own set of sub-classes of techniques, as "knowledge-informed AI." There is a long list of names given to these techniques, some listed to the right of Figure 2, which suggests that terminology in this paradigm of AI is unsettled. In their excellent survey paper published in 2021 [1], Laura von Rueden et al. coined the term "informed machine learning," while others use terms like "scientific machine learning," "neurosymbolic AI," or "physics-informed AI." As illustrated in the Venn diagram in Figure 2, symbolic AI can be considered as a class of techniques that are inherently informed by knowledge, but that's not new nor is it part of the emerging paradigm we aim to shed light on in this report. Instead, if we take standard forms of deep learning and reinforcement learning and marry them with either symbolic AI or other non-AI models with embedded knowledge, it's the areas of intersection that make up the new and emerging paradigm that is the scope of this report. We focus very little on neurosymbolic hybrid AI, where the two worlds of data-driven and knowledge-based models are usually only loosely coupled, and we focus heavily on techniques that we refer to as "knowledge-integrated informed AI."

Figure 2. Venn Diagram of Informed AI. Informed AI is a superset of approaches that includes symbolic AI, neuro-symbolic AI, and knowledge-integrated informed AI. The areas of intersection (purple) represent new and emerging AI paradigms. The focus of this report is on knowledge-integrated informed AI.

Our definition of knowledge-integrated informed AI is the following:

Knowledge-Integrated Informed AI: AI techniques in which explicit, principled and/or practical knowledge that is scientific, domain-specific, or data-centric is integrated (not necessarily in a separable way) into a model's development pipeline. In these techniques:

- Knowledge from scientific or domain expertise is integrated to provide partial explanations or constraints for the unknown relationship that the model is built to learn, or
- Understanding of properties of training data is exploited to provide more efficient/effective mechanisms for learning.

This definition is illustrated with the block diagram in Figure 3. While it can be standard practice to consider knowledge about the problem while developing machine learning models, the actual integration of explicit knowledge so that it constrains or guides the learning process is the major distinguishing factor. There are several forms of knowledge representation as well as paths of knowledge integration that are discussed later in this report.

Figure 3. Knowledge-Integrated Informed AI Block Diagram. The integration of explicit knowledge in data-driven AI pipelines is what distinguishes knowledge-integrated informed AI from standard data-driven AI.

In some ways, the promise of knowledge-integrated informed AI techniques has already been proven. Figure 4 illustrates two well-known, breakthrough developments that are precursors of this new paradigm. While the convolutional neural network changed the world of image recognition by exploiting data-centric knowledge (that low-level image features are local and exhibit translational invariance), AlphaGo changed the world of reinforcement learning by leveraging practical knowledge of the game being played (yes, the more recent MuZero masters game play without rules, but with far more compute power and time that can be considered unnecessary and wasteful). Both of these precursor examples, which integrated different forms of knowledge through distinct entry points, demonstrated superior performance and broad-reaching impact, which suggests that more deliberate integration of knowledge can lead to many more breakthroughs.
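The convolutional example can be made concrete with a few lines of NumPy. The sketch below is illustrative only (a 1-D "image" and a single hand-written filter): it shows the sense in which a convolution bakes in the prior that features are local and translation-invariant, because the same small set of weights is reused at every position.

import numpy as np

def conv1d_valid(signal, kernel):
    # Slide one shared kernel across the signal (no padding).
    k = len(kernel)
    return np.array([np.dot(signal[i:i + k], kernel)
                     for i in range(len(signal) - k + 1)])

edge_filter = np.array([-1.0, 1.0])          # one weight pair, reused everywhere
x = np.array([0., 0., 1., 1., 1., 0., 0.])   # a "bright patch" in a 1-D image
x_shifted = np.roll(x, 1)                    # the same patch, translated by one step

print(conv1d_valid(x, edge_filter))          # edge responses at the original positions
print(conv1d_valid(x_shifted, edge_filter))  # the same responses, shifted by one

A fully connected layer would instead learn a separate weight for every (input, output) pair, and nothing in its structure tells it that a pattern at one location is the same pattern at another; the convolution encodes that piece of knowledge architecturally.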
A more recent example that fits squarely into this new paradigm of AI is the Neural Circuit Policy model that has received media attention [7] [8] for its ability to "learn cause and effect" for a navigation task (Figure 5). The model learns to steer an autonomous vehicle with very few neurons and "superior generalizability, interpretability, and robustness compared to orders of magnitude larger black-box learning systems." Upon deeper investigation, we realize that this model has knowledge-integrated design elements that play a big hand in its ability to achieve noteworthy performance. It has a bio-inspired architecture where the final output is arrived at in steps that go from sensor to decision to actuation, and the network is followed with an ordinary differential equation (ODE) solver that captures the dynamics of the nervous system to "obtain numerically accurate and stable solutions." Without these design elements, the performance of this model would undoubtedly be unremarkable. [9]

Figure 4. Well-known, Early Examples of Knowledge-Integrated Informed AI. Both the convolutional neural network and AlphaGo are well-known examples of breakthroughs in deep learning and reinforcement learning that leverage knowledge about the problems they are designed to solve.

Figure 5. Recent Example of Knowledge-Integrated Informed AI. The Neural Circuit Policy model designed by Lechner et al. is a nice recent example of groundbreaking performance achieved by a novel knowledge-integrated informed AI approach.

Knowledge-integrated informed AI techniques are particularly appealing for the national security domain, where there are elevated concerns with purely data-driven approaches that often lack explainability, generalizability, traceability, and reliability. Intuitively, techniques that exploit both data and knowledge, where knowledge can be leveraged to satisfy constraints or guide learning, should provide superior performance (such as improved accuracy, reliability, robustness, and more efficient use of resources). It turns out that many recent works, in addition to the examples discussed above, have now proven this intuition. Furthermore, while there exist opposing perspectives, several reputable scholars have suggested that knowledge-integrated informed AI will bring the next generation of AI, or its third wave [10] [11]. In subsequent sections in this report, we explore numerous recent examples of knowledge-integrated informed AI, what's behind these claims of game-changing potential, and what this new paradigm of AI can mean for problems of national security.

In order to thoroughly explore the current state of the art of knowledge-integrated informed AI for national security, the literature review summarized in this report considers many published examples. Our effort was hugely facilitated by the taxonomy presented in Laura von Rueden et al.'s 2020 survey paper [1], as well as a few other recent and well formulated surveys [12] [13] [14].
Subsequent sections in this report dive into the various knowledge-integrated informed AI architectures, distinct paths of integrating knowledge into AI pipelines, illuminating examples and their reported performance gains, observed research gaps and opportunities, and how knowledge-integrated informed AI could positively impact the national security domain.

1.1 FURTHER DISCUSSION OF COMPUTER MODELS AND AI PARADIGMS

We referred earlier to two broad categories of AI. Here we consider the individual strengths and challenges of knowledge-based models and data-driven AI models, what's complementary and opposing, and how these arguments lead to a hybrid class. First, the following are our definitions of two classes of computer models:

- We consider knowledge-based models to broadly include models that capture scientific or mechanistic phenomena, or human expertise, through specific representation of fundamental understandings of a domain. Generally, these models are designed to serve a multitude of purposes that need some representation of the world to perform their assigned tasks. In terms of AI models, symbolic AI (reasoning based on symbolic knowledge) and other classical AI models (ex., logic-based models and probabilistic graphs) fall into this category. As defined, this class of computer models is more broadly inclusive, and therefore also includes all kinds of models that capture and represent knowledge, even non-AI models.

- We define data-driven AI models as essentially universal function approximators that learn from data, such that model architectures and/or weights are updated via learning pipelines that have no explicit consideration of domain knowledge. Generally, these models are designed to perform various tasks such as perception, reasoning, planning, and even to represent and understand the world. Learning models that also exploit explicit knowledge in addition to data do not fit into this category.

Many in the AI community refer to an earlier paradigm of AI as one of symbolism, or reasoning based on symbolic knowledge (a paradigm that dominated until the 1980s), and the more recent paradigm as one of connectionism, or data-driven decision making using neural networks (which began increasing in popularity in the 1990s). While connectionist models are exactly analogous to the data-driven AI paradigm, referring to only symbolic models as the opposing knowledge-informed AI paradigm paints an incomplete picture, because knowledge can be represented in non-symbolic ways. But the fusion of symbolic and connectionist AI is gaining similar attention as the fusion of knowledge models with data-driven AI models. So the discussion here draws comparisons between both symbolic and connectionist models as well as knowledge-informed and data-driven models.

All classes of computer models in discussion here have been and continue to be used extensively across domains, and have demonstrated their usefulness. With their respective strengths, they each also come with many challenges that deem them unfeasible or insufficient for certain applications.

There are undoubtedly decades of knowledge and domain expertise built into many high-performing knowledge-based models. To their advantage, these models are able to represent and reason about complex, causal, and abstract concepts with solutions that are traceable, interpretable, and explainable.
These valuable strengths, however, come with design processes that are all too often manually intensive and highly problem-specific, with many opportunities to insert biases, and that can even preclude discovery of new insights. Furthermore, problems can more quickly become computationally unfeasible, as the cost to run models at appropriate resolutions in space and/or time is often high, and the number of samples needed to provide good estimates can be prohibitively expensive [12] [15] [16].

For example, in symbolic systems that are developed for classification problems, rules or other non-numerical methods are used to reason about properties (often abstract) of objects to draw conclusions about them. One common example is a decision tree. In the case of cat classification, one can imagine a decision tree that asks if an animal is a vertebrate, a mammal, etc., to determine whether it is a cat. A well-known example of such classification systems is taxonomy in biological systems, such as the Linnaean taxonomy. The strength of such symbolic models lies in their interpretability and explainability. They are very transparent, and they can reason about arbitrary and abstract properties. However, while these symbolic models have a strong ability to reason, they are not good at processing data. For example, there is not a natural way for a decision tree to classify the contents of an image.

Similar to knowledge-based models, data-driven AI models have clearly showcased winning performance and impact. What constitutes these models can be viewed as incredibly versatile: more general-purpose and less problem-specific, yet still providing desirable performance. Their versatility is largely attributed to the pliable nature of the learning framework and its powerful constituent elements. As much as they are appreciated, they also exhibit several drawbacks. Training the models requires massive datasets (or intensive interaction with the environment for reinforcement learning) which are easily entrenched with a variety of imperfections (noise, incomplete coverage, lack of variability/representation, etc.), and the final product often fails to generalize beyond its training distribution [17]. What machines learn via purely data-driven approaches is a direct product of the imperfect data combined with algorithms that cannot compensate for those imperfections. While learning may be layers deep, actual understanding is easily only surface level or simply indemonstrable. Furthermore, like knowledge-based models, this class of AI also has many entry points for bias (data collection or conditioning, the learning algorithm, and even the network architecture), and comes with computational drawbacks, such as an alarming amount of energy needed to supply the staggering amounts of compute power used for model development [18] [19]. Furthermore, their lack of interpretability and explainability, and their sensitivity to adversarial attack, raise many more alarms.

To continue with classification as the example problem, rather than reasoning through the task, a connectionist model (a.k.a., a data-driven deep learning model) performs via numerical representation and operations that use lots of data to learn relationships between inputs and outputs. For example, to classify images of cats, a neural network typically needs to be shown a very large number of images, with labels indicating the presence or absence of cats. Generally, this need for a large amount of data is not present in symbolic AI.
Once a network has the data, it performs numerical operations on the data. In the image classification example, the system performs matrix operations on pixel values in order to find commonalities among similarly labeled images. The resulting trained neural network can determine whether a previously unseen image does or does not contain a cat. It is good at perceiving and processing data, but its process for classifying is opaque.

Figure 6. Symbolic vs. Numerical Classification. A decision tree as an example of a symbolic cat classifier (left), and a neural network as a numerical cat classifier (right). Symbolic classification reasons directly on properties (possibly abstract) using methods based on "symbolic" (human-readable) representations of problems, logic, and search, where symbols are manipulated symbolically in otherwise purely numeric computation frameworks [10]. Conversely, in numerical classification, an image classifier learns to classify based on relationships between pixels in an image file using numerical operations.

As the strengths and challenges of knowledge-based models and data-driven AI models lead to a combined class of knowledge-integrated informed AI, so do the strengths and challenges of symbolic and connectionist AI lead to neurosymbolic AI. In the case of knowledge-integrated deep learning, these models are more accurate, generalizable, interpretable, data-efficient, and computationally efficient because of the knowledge infusion into otherwise black-box, unaware neural networks. Their learning is guided by things we know about the relationship that needs to be learned or, if not course-correcting, then the knowledge constrains or restricts what the models learn, tying them down with known aspects of a relationship whose unknowns remain to be learned. In the case of neurosymbolic AI, models possess the reasoning abilities of symbolic systems and the perception and data processing abilities of numerical systems.

In terms of the cat classification problem, consider the case of distinguishing among closely related species of rare wild cats. There may not be enough imagery of any one species to train a neural network. But there may be too much imagery (e.g., from cameras placed in remote locations to capture footage of wild animals) for a human to process alone. In that case, humans can use their expertise to build a decision tree about different traits that each species of cat has, e.g., tufted ears, stripes, long whiskers, etc. Imagery of different species can be used to train a neural network classifier on each of those traits. For a new image, the results of the classifiers can be processed on a decision tree to determine if the image contains one of the rare species of interest. Thus, the perceptive neural network feeds the decision tree, which can reason about the output. Figure 7 illustrates a neurosymbolic concept.

Figure 7. Neurosymbolic Classification. Neurosymbolic AI possesses the reasoning abilities of symbolic systems and the perception and data processing abilities of numerical systems.
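To make the wild-cat example a little more tangible, here is a minimal Python sketch of the pattern described above: learned perception feeding a hand-written symbolic rule. The trait "classifiers" below are stand-in functions (in practice they would be trained neural networks), and the species rule is invented for illustration.

# Hypothetical neurosymbolic pipeline: neural trait detectors feed a symbolic rule.
def detect_tufted_ears(image):   # stand-in for a trained neural classifier
    return image.get("tufted_ears", 0.0) > 0.5

def detect_stripes(image):       # stand-in for a trained neural classifier
    return image.get("stripes", 0.0) > 0.5

def classify_species(image):
    # Symbolic layer: reason over detected traits with an expert-written rule.
    tufted = detect_tufted_ears(image)
    striped = detect_stripes(image)
    if tufted and not striped:
        return "rare species of interest"      # made-up rule for illustration
    if striped:
        return "striped wild cat"
    return "unknown / not of interest"

# A fake "image" represented as trait scores, standing in for pixel input.
print(classify_species({"tufted_ears": 0.9, "stripes": 0.1}))

The division of labor is the point: the networks handle perception from pixels, while the rule layer stays transparent and easy for a domain expert to audit or update.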
1.2 AN OPPOSING AI PARADIGM

This discussion of emerging AI paradigms would be incomplete without reference to an entirely opposite research direction that aims for AI that does not leverage knowledge.

1.2.1 Foundation Models

Foundation models are also at the forefront of AI research, especially for natural language processing (NLP) and computer vision (CV) applications [6]. These models eschew the use of highly-regimented model architectures (e.g., recurrent layers for NLP applications, and convolution layers for CV) trained in a supervised setting, in favor of less-structured architectures that learn in a self-supervised manner. Models like BERT [20], GPT-3 [21], and CLIP [22] are among the most widely recognizable and influential foundation models. These approaches demonstrate that the availability of large amounts of highly varied, unlabeled data (even if the data is very noisy) can be sufficient to produce a model that can adapt to novel tasks in a zero-shot manner and that achieves state-of-the-art performance on these tasks.

The most costly and obstructive aspect of supervised deep learning is the need for large, hand-labeled data sets to inform one's model. Foundation models eliminate this cost altogether, as they have proven capable of learning from large, multi-modal repositories of structured (but unlabeled) data (e.g., Flickr [23] and Wikipedia [24]). However, these public data repositories are, themselves, unrivaled in size: they are the culmination of decades-long contributions from millions of individuals from across the globe. Furthermore, the process of training these models represents a paradigm shift toward being increasingly sophisticated and prohibitively expensive. For instance, a standard ResNet-50 CV model can be trained in a supervised fashion to achieve competitive performance on ImageNet using only 64 GPU hours [25]. By contrast, training CLIP to achieve state-of-the-art performance on ImageNet zero-shot required 86,000 GPU hours [26]. It must be emphasized that this is not an apples-to-apples comparison, and that the resulting CLIP model is fundamentally distinct in nature, as it is capable of performing far more generalized tasks than is the ResNet model. That being said, the 1,000x difference in computational cost to develop the model is resounding nonetheless. It remains to be seen how foundation models will impact the role of deep learning in scientific fields, where knowledge-informed AI techniques can more naturally find purchase [27].

1.2.2 Reinforcement Learning without Knowledge

In recent years, some researchers, most notably DeepMind, have made an effort to reduce the knowledge used in deep reinforcement learning (Figure 8). Like foundation models, this approach is in direct opposition to the notion of knowledge-informed AI. DeepMind developed AlphaGo in 2016 [4], which learned to play the game Go using data from human gameplay, domain knowledge, and the rules of the game. They have progressively pared back the knowledge provided to their system. Notably, AlphaZero [28] used a single algorithm for multiple games, although it had rules for all three games it learned. To date, this effort has culminated in MuZero [29], which learned four games with a single algorithm, without knowledge of the game. MuZero learns three models for playing a game (Figure 9): a representation of the state, the dynamics of the game, and the policy and reward. In a typical reinforcement learning scenario, the state and reward are chosen by a user or designer, and even the dynamics may be known. In that case, the system only needs to learn the policy. Here, the system learns the three components from scratch.
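The three learned components can be summarized as a small interface. The sketch below is only a schematic of the decomposition described here and in the Figure 9 caption; it is not DeepMind's implementation, and the classes and return values are placeholders.

# Schematic of MuZero's decomposition into three learned functions (placeholders only).
class RepresentationH:          # h: raw observation -> internal (hidden) state
    def __call__(self, observation):
        return ("hidden", observation)          # stand-in for a learned encoding

class DynamicsG:                # g: (hidden state, action) -> next hidden state, reward
    def __call__(self, hidden_state, action):
        return ("hidden", action), 0.0          # stand-in transition and reward

class PredictionF:              # f: hidden state -> policy over actions, value/reward estimate
    def __call__(self, hidden_state, actions):
        uniform = {a: 1.0 / len(actions) for a in actions}
        return uniform, 0.0                     # stand-in policy and estimate

h, g, f = RepresentationH(), DynamicsG(), PredictionF()
state = h("board_position")                     # encode the current observation
policy, value = f(state, actions=["a", "b"])    # plan/act from the hidden state
next_state, reward = g(state, "a")              # imagine the effect of an action
print(policy, value, reward)

In a knowledge-informed setting, parts of this triple, most obviously the dynamics g, could simply be supplied by known rules or physics instead of being learned from scratch, which is exactly the trade this section is drawing attention to.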
Figure 8. Removing Knowledge from the RL Pipeline. Since 2016, DeepMind has pursued reinforcement learning that uses increasingly less knowledge. This line of research, starting with AlphaGo and culminating in MuZero in 2020, provides an approach for mastering several games with no human data, domain knowledge, or rules. This approach is in direct contrast with an informed AI approach. [30]

Figure 9. Reinforcement Learning without Prior Knowledge. MuZero learns three separate models for playing games. Function h is an encoding of the current state, g models the dynamics of the game, and f models the policy and reward. [29]

We note that MuZero uses a single algorithm to learn a variety of games without any prior knowledge. Unlike foundation models (see Section 1.2), it does not generalize from one game to another. Foundation models provide a trained model that can be modified or tuned to suit a variety of problems. MuZero, on the other hand, provides an algorithm that is broadly useful for many games, but needs to be separately trained on each game. We note also that the computational results reported for MuZero require many hours or days of specialized computational hardware. One goal of informed AI is to reduce computational requirements by informing the system.

2. KNOWLEDGE-INTEGRATED INFORMED AI FOR NATIONAL SECURITY

As research and development of AI solutions has become increasingly prevalent toward addressing problems of national importance, the performance requirements of these systems have been evolving. We have realized that we need AI that not only performs accurately, but that is also traceable (i.e., correct-by-construction, and interpretable), reliable (i.e., generalizable to novel instances), and robust against adversarial attack [31]. Some of these metrics are often posed as ethical guidelines, but they are also markers for well-engineered systems that enable better human-machine teaming.

Bottom line up front: Utilizing valuable knowledge from the extensive range of scientific and domain expertise that exists within the national security domain in knowledge-integrated informed AI solutions will help realize many AI system performance goals.

Technology for national security is a broad mission that includes many diverse sub-domains (ex., air/land/sea/space defense, homeland protection, air traffic control, cyber security, intelligence, surveillance, reconnaissance, defense against biological and chemical weapons, etc.). A vast and constantly evolving set of technologies is developed to address important problems associated with each of these mission areas, all of which leverage one or more of the following knowledge bases:

- Scientific Expertise (ex., physics, biology, chemistry, etc.)
- Scientific/Expert Understanding of Technologies (how they are designed, fabricated, their limits, etc.)
- Expert Understanding of the Applications for which Technologies are Developed

Nearly all formal representations of knowledge (in the form of equations, dynamical systems, probabilistic relations, logic rules, knowledge graphs, simulations, invariances, or human feedback), which are presented later in this report (Section 3.2), come into play for each of these major categories of knowledge in the technology for national security domain. As AI is making headway across this domain, leveraging these knowledge sources more deliberately and explicitly can be game changing. Table 1 suggests just a few example ways these knowledge sources may integrate with AI and prove to be strategically advantageous.
2.1 PERFORMANCE GAINS REALIZED VIA KNOWLEDGE-INTEGRATED INFORMED AI

The hypothesis going into this study was that integration of knowledge into typically data-driven AI should yield significant gains for specific performance metrics that matter for problems of national security, at a minimum producing more trustworthy AI because it satisfies, is consistent with, or is guided by knowledge defined by science or domain experts. The idea was that this kind of knowledge integration can lead us to more capable AI that requires less data, is more explainable with better interpretability, and generalizes better to novel samples unseen during training.

TABLE 1. Opportunities in Terms of Knowledge Sources

Knowledge source: Scientific Expertise (ex., physics, biology, chemistry)
Likely representations: algebraic equations, knowledge graphs, differential equations, simulations, probabilistic equations, invariances, logic rules, human feedback
Possible knowledge-integrated informed AI goals / use cases:
- Use domain knowledge (simulators, governing equations, dynamical systems, relations, associations, etc.) to ground models that aim to discover/learn from data
- Develop neural network surrogate solutions that demonstrate consistent/accurate scientific phenomena with computational speedup and/or higher-order approximations

Knowledge source: Scientific/Expert Understanding of Technology
Likely representations: algebraic equations, knowledge graphs, dynamical systems, simulations, probabilistic equations, invariances, logic rules, human feedback

Knowledge source: Expert Understanding of Technology Applications
Likely representations: algebraic equations, knowledge graphs, discrete-time models, simulations, probabilistic equations, invariances, logic rules, human feedback
Possible knowledge-integrated informed AI goals / use cases:
- Employ AI solutions that are constrained/guided by policy/rules of engagement

TABLE 2. Performance Gains Achieved by Knowledge-Integrated Informed AI

- Greater accuracy: +0.6-20% accuracy; -(4-70)% test error; 1-8x score
- Extended applicability: implied
- Enabling new capabilities and discoveries: implied
- Improved safety and reliability:
  - Correct-by-construction: implied
  - Interpretable*, traceable*, explainable*: claimed qualitatively
  - Robustness via generalizability: claimed qualitatively (usually)
- More efficient use of resources:
  - Data efficiency: 40-96% less data, 0-1% of labels
  - Computational speedup: 2-15,000x
  - Network size: 1/3rd of the parameters

Exploration of the state of the art, from examples that date back decades to now, confirmed these hypotheses along with additional unanticipated benefits. Knowledge-integrated informed AI, all variants included, is growing in popularity and has been shown to offer one or more of the gains in performance summarized in Table 2. Note that some improvements are only claimed qualitatively, while others vary and fall inside the spread that we aggregated from the subset of papers we reviewed.

The reasoning behind knowledge integration leading to an applicability extension beyond the already broad application space is twofold. Firstly, this can be attributed to the increases in accuracy and the mechanism to ground a model in the truths of a domain, which is, in turn, leading to more domain experts adopting modern AI techniques (especially in scientific domains). Secondly, these approaches are opening up the application space of deep learning from mostly perception and classification or generative tasks to neural network surrogates that can replace computationally expensive knowledge-based models (expensive physics-based or earth-science models, for example).
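As a cartoon of the surrogate idea, the sketch below fits a small neural network to samples from a stand-in "expensive" model (here just a damped oscillation, evaluated cheaply for demonstration purposes). The library calls are ordinary scikit-learn; the "expensive model," network size, and sample counts are arbitrary placeholders.

import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_model(x):
    # Placeholder for a costly mechanistic code (e.g., a physics simulation).
    return np.sin(3.0 * x) * np.exp(-0.3 * x)

rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 5.0, size=(200, 1))           # sampled design points
y_train = expensive_model(X_train).ravel()                # "expensive" evaluations

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
surrogate.fit(X_train, y_train)

X_test = np.linspace(0.0, 5.0, 5).reshape(-1, 1)
print(np.round(surrogate.predict(X_test), 3))             # fast approximate answers
print(np.round(expensive_model(X_test).ravel(), 3))       # ground truth for comparison

Once trained, the surrogate answers queries at a tiny fraction of the original model's cost, which is where the large speedup figures in Table 2 come from; whether the approximation is accurate enough is, of course, problem-dependent.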
There were some expected performance gains that were not found in the literature. First, neither robustness to adversarial attack nor deeper and/or contextual understanding came up in any of the reviewed papers. While these possible benefits have not yet been highlighted, they likely will be in the future. Furthermore, as we surveyed the literature, we postulated other additional benefits that also have not been found to be prevalent in the literature, but likely will be in the future. First, some explicit knowledge integration approaches may be more easily updated to accommodate new situational constraints, enabling better adaptability. Second, easier verification using more standard methods could be achieved for models with fewer neurons. Third, knowledge integration could improve a model's memory efficiency, i.e., its ability to maintain fundamental "understanding" even if trained to perform multiple tasks (which generally can cause memory loss, a.k.a. catastrophic forgetting). Section 6 lists all of these performance gains that we found to be uncharted in the literature as worthwhile future research directions.

2.2 BENEFITING NATIONAL SECURITY APPLICATIONS

Many example national security applications stand to benefit from one or more of the performance gains offered by knowledge-integrated informed AI approaches. Table 3 lists several example applications while Table 4 offers some specific, but rudimentary, example problems.

The first row block in Table 3 suggests that we can expect greater accuracy with knowledge-informed AI for AI tasks that fall in or around a processing chain with an available data stream and associated valuable knowledge (formally representable). For example, many signal processing applications fit this description, along with traditional, computationally-expensive models that can entirely or partially be replaced with neural network surrogates. Furthermore, given this strong potential for improvements in accuracy, applicability to problems for which modern AI techniques have previously been rejected may now be reconsidered. Moreover, this broadening of applicability is leading to new capabilities and new discoveries.

The second row block in Table 3 suggests that improved safety and reliability, which come through techniques that are correct-by-construction, interpretable (through traceability and explainability), and more robust because of generalizability, will also be beneficial for many national security applications. Obviously benefiting applications are ones that are described as high-consequence and safety-critical, and those that rely on humans and machines performing as a team. Additionally, problems that present dynamic and unpredictable settings will benefit from greater generalizability, as will problems where the knowledge may be changing with time (explicitly integrated knowledge is more easily updated). Perhaps a less obvious benefiting application is problems that need multi-task solutions, which can suffer from catastrophic forgetting but can benefit from integration of strong and unchanging foundational knowledge.

Finally, the third row block in Table 3 suggests that many national security applications also stand to benefit from more efficient use of resources, whether the resource is data, compute power, or the size or simplicity of the neural network.
These applications include those that belong to data-limited domains (many if not most, most if not all), and problems that use AI at the tactical edge, especially if the AI is deployed on a platform with restricting size, weight, and power (SWaP).

TABLE 3. Opportunities in Terms of Performance Gains

Performance gains achieved: greater accuracy (+0.6-20% accuracy, -(4-70)% test error, 1-8x score); extended applicability (implied); enabling new capabilities and discoveries (implied)
Benefiting national security domains/applications:
- Processing chains with available data streams and formally expressible knowledge: ex. radar, SAR, GMTI, IR, SIGINT, LIDAR, GPS/GPS-denied, etc.; materials, weather, climate, chemistry, biology, physiology, etc.; C2, IC, LE, HADR, IO, etc.
- Traditional, computationally-expensive models -> NN surrogates

Performance gains achieved: improved safety and reliability: correct-by-construction (implied); interpretable*, traceable*, explainable* (claimed qualitatively); robustness via generalizability (usually claimed qualitatively)
Benefiting national security domains/applications:
- High-consequence, safety-critical applications
- Human-machine teaming (ex., decision support in adversarial or safety-critical settings)
- Open-ended (i.e., dynamic and unpredictable) environments that will benefit from generalizability and learning in the field
- Multi-task solutions (can benefit from context-aware memory)

Performance gains achieved: more efficient use of resources: data efficiency (40-96% less data, 0-1% of labels); computational speedup (2-15,000x); network size (1/3rd of the parameters)
Benefiting national security domains/applications:
- Data-limited domains
- Tactical edge (including continuous learning at the edge)
- SWaP-constrained platforms

2.3 ROADMAP TO ACCELERATE KNOWLEDGE-INTEGRATED INFORMED AI RESEARCH AND DEVELOPMENT FOR NATIONAL SECURITY

To quickly begin realizing the performance benefits that knowledge-integrated informed AI techniques promise to deliver to the national security domain, a first step is acknowledging "knowledge" as an important building block of AI technology. While other research directions are pursued to advance modern AI without knowledge integration, we should be considering which knowledge bases should be integrated and how much importance the provided knowledge should be given over the standard approach. Pursuing knowledge-integrated informed AI research for national security can come down to several deliberate steps that are data-oriented, knowledge-oriented, or that serve to advance the state of the art and its adoption into the national security domain. Figure 11 lays out some immediate steps we should be taking. In addition to realizing the performance advantages described in this section, all of these steps can be honed toward addressing existing challenges and pursuing promising yet untapped research opportunities (discussed later in Section 6).

TABLE 4. Rudimentary Examples of Knowledge-Integrated Informed AI for National Security

- Biotechnology: deep learning based ultrasound image interpretation guided by knowledge graphs -> more generalizable
- Missile Defense: integrate knowledge of atmospheric effects on signal propagation into deep learning for signal classification/discrimination -> greater accuracy
- Air Traffic Control: consider symmetries in a reinforcement learning approach for air traffic collision avoidance -> faster and more efficient learning
- Cybersecurity: consider symmetries in cyber-network-based neural network topologies to maintain connectivity and immunity to vulnerabilities -> more efficient learning, more verifiable models
- Communications: integrate communication network access rules and limits into a deep learning based secure access and routing scheme -> correctness-by-construction
- Engineering: constrain autonomous control and planning algorithms with necessary rules of engagement -> improved accuracy and validity of recommendations
- Technology: use expert-specified examples and physical constraints to discover and recommend better experiment configurations -> greater likelihood of valid recommendations
- Space Systems: use orbital mechanics equations to constrain an optimal satellite control learning algorithm -> greater accuracy and more generalizable
- Surveillance: exploit human implicit and explicit semantic knowledge (graphs) for reasoning over object categories -> greater accuracy and more generalizable

Figure 10. Add Knowledge and Domain Expertise as an Important Building Block of AI Technology. Knowledge/domain expertise should be recognized as an important building block of AI technology for national security.

Figure 11. Roadmap to Accelerate Knowledge-Integrated Informed AI Research and Development for National Security. Here we list several steps that will help accelerate knowledge-integrated informed AI research and development for national security that are either data-oriented, knowledge-oriented, or serving to advance the state of the art and its adoption into the national security domain. Circled one, two, and three indicate first, second, and third priority.

3. WHAT IS KNOWLEDGE-INTEGRATED INFORMED AI?

We mostly adopted Laura von Rueden et al.'s definition of "informed machine learning" into our definition of "knowledge-integrated informed AI," which we presented earlier in Section 1 and repeat here. Note that we more exclusively focus on the integration of knowledge into more "modern" AI techniques, i.e., deep learning and reinforcement learning.

Laura von Rueden et al. define "informed machine learning" in their 2021 survey paper [1] as the following: techniques that learn from a hybrid information source which consists of both data and prior knowledge. In this definition:

- knowledge is "true" or "justified belief," validated either via science, empirical studies, or experts, where the more formally knowledge is represented, the more easily it can be integrated.
- the source of knowledge can be from the natural sciences, social sciences, expert knowledge, or world knowledge.
- prior knowledge is explicit, pre-existent, separate from the data, and integrated into the machine learning pipeline.
- informed deep learning and reinforcement learning are encompassed, and other classes of AI are not excluded (for example, they include probabilistic graphical model based methods, like Bayesian networks).

Our adapted definition of knowledge-integrated informed AI is the following:

Knowledge-Integrated Informed AI: AI techniques in which explicit, principled and/or practical knowledge, that is scientific, domain-specific, or data-centric, is integrated (not necessarily in a separable way) into a model's development pipeline.
In these techniques, knowledge from scientific or domain expertise is integrated to provide partial explanations or constraints for the unknown relationship that the model is built to learn, or an understanding of the properties of the training data is exploited to provide more efficient and effective mechanisms for learning.

3.1 ARCHITECTURES FOR INTEGRATING KNOWLEDGE

Before narrowing in on knowledge-informed AI pipelines, we should discuss some different architectures for knowledge integration. The integration of explicit knowledge into otherwise "uninformed" AI pipelines can be anywhere from loosely coupled to tightly integrated. Figure 12 illustrates the various system architectures we have encountered. For this study, we focused largely on tightly integrated architectures of informed deep learning and reinforcement learning, covered in Sections 3.3 and 3.4 with illustrating examples in Section 4. We also looked into embedded and jointly trained neural networks, which we cover in Section 4.5.

While also a worthwhile direction, we did not deeply investigate loosely coupled architectures, since such approaches are more common and straightforward. These consist of serial approaches, where (1) a knowledge-based model narrows the search space for a neural network, or (2) a knowledge-based model refines or facilitates the predictions that come out of a neural network, and "complementary" configurations, where there can be adaptive selection between knowledge-based and neural network models. In other words, we did not deeply explore neuro-symbolic AI, which has been defined as "modular systems that seek to have the property of compositionality" [32].

Figure 12. Loosely Coupled to Tightly Integrated Informed Deep Learning Architectures. Loosely coupled: serial hybrid or complementary hybrid configurations. Tightly integrated: additional knowledge-containing dataset (a.k.a. informed via training data), knowledge-based architecture and/or weights (a.k.a. informed hypothesis set), informed learning algorithm, and learned-model consistency check (a.k.a. informed final hypothesis).

3.2 KNOWLEDGE REPRESENTATIONS

Illustrated in Figure 13, Laura von Rueden et al. enumerate eight forms of knowledge representation [1] that can be integrated into informed AI pipelines. We mostly adopt their list, making only one minor modification: renaming "differential equations" to "dynamical systems." We also indicate on their list which of the knowledge forms can have symbolic representations.

Figure 13. Forms of Knowledge Representation from Laura von Rueden et al. [1]. This is an excellent and comprehensive list of knowledge representations. We modified this list slightly, renaming "differential equations" to "dynamical systems" so as to also include discrete-time models, e.g., Markov decision processes, state machines, etc.

3.3 KNOWLEDGE-INTEGRATED DEEP LEARNING PIPELINE

In general, deep learning models are developed to learn unknown input/output relationships of a given problem. With features in and predictions out, the model is trained to fit the provided data. Laura von Rueden et al. define a generic "informed machine learning" pipeline, illustrated in Figure 14. This pipeline works for the new and emerging paradigm of knowledge-integrated informed AI that is based on deep learning techniques, however not perfectly, and we indicate why below.

Figure 14. Laura von Rueden et al.
Informed Machine Learning Pipeline [1].

As drawn, the pipeline in Figure 14 does not clearly highlight what is different about this AI paradigm from standard approaches. Namely, this pipeline can also be used for approaches where knowledge is used only to implicitly inform or influence learning. For example, considering prior knowledge in the data pre-processing step is common practice that many refer to as feature engineering. However, implicit informing and standard feature engineering do not fit the textual definition (neither ours nor Laura von Rueden's). Instead, new ways of explicitly integrating knowledge do fit, for example, leveraging additional knowledge-based datasets (though this is not the most promising knowledge integration path; more on this in Section 4).

In Section 4, we discuss the four established integration paths (training data, hypothesis set, learning algorithm, and final hypothesis) in more detail, along with examples that we found to be illuminating. We also share our overarching observations based on reported performance gains and the apparent state of the art. As we discuss illuminating examples of knowledge-integrated informed AI variants, we populate a table of reported performance gains that is structured around the taxonomy of knowledge forms and integration paths. This table, presented in Section 5, allows us to extend the discussion from promising performance gains to which variants may be more suitable for different problems.

3.4 KNOWLEDGE-INTEGRATED REINFORCEMENT LEARNING PIPELINE

The generic machine learning pipeline described in the previous section is broadly relevant beyond just deep learning, and even applies to reinforcement learning. However, while the forms of knowledge representation can largely remain the same, informed reinforcement learning has its own framework with unique integration paths. Generally, reinforcement learning (RL) is a subfield of machine learning where the goal is to train an agent to make optimal choices in a sequential decision-making process. In other words, at any given moment, the agent should select the best action, taking into account the future effects of its decision. The action can come from a policy, which maps a state to an action; a value function, which maps a state to the expected future performance from that state; or a Q-table, which maps a state-action pair to the expected future performance.

In standard (i.e., non-deep) RL, policies, value functions, and Q-tables are typically represented as lookup tables. These tables need an entry for every state or state-action pair. For many real-world systems, the size of these tables renders the learning process effectively intractable. Furthermore, all states (or state-action pairs) need to be explored in order for the agent to have a good idea of what the performance will be in those states. Entries in the policy, value table, or Q-table do not generalize to other, unvisited states.

Figure 15. Components of Non-deep RL. Components that are learned in the non-deep RL setting include policy functions, value functions, and/or Q-tables. These functions are trained to return the best action from a given state, or the value of an action from a given state. These methods have issues of scalability and generalizability to states that were not seen during training.

Because of these issues of intractability and generalizability, there is interest in deep RL, where the policy, value table, or Q-table is replaced with a neural network approximator.
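To make the storage contrast concrete, the sketch below is an illustrative example of our own (not drawn from any of the cited systems, and assuming PyTorch is available): it counts the entries a tabular Q-function needs for a toy problem, a three-dimensional state space with five possible values per dimension and five actions, and compares that to the parameter count of a small Q-network whose layer widths are arbitrary choices.

```python
import torch.nn as nn

# Toy problem: 3-dimensional state, 5 discrete values per dimension, 5 actions.
# A tabular Q-function needs one entry per (state, action) pair.
table_entries = (5 ** 3) * 5            # 125 states x 5 actions = 625 entries

# A small Q-network maps the 3-dimensional state directly to 5 action values.
q_net = nn.Sequential(
    nn.Linear(3, 16), nn.ReLU(),
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 5),
)
net_params = sum(p.numel() for p in q_net.parameters())

print(f"Q-table entries:      {table_entries}")
print(f"Q-network parameters: {net_params}")   # 421 with these widths
```

In this toy setting the savings are modest, but the table grows multiplicatively with every added state dimension or finer discretization, while the network's size is fixed by its layer widths; the ACAS-X comparison below shows the same effect at realistic scale.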
Now, instead of large tables, the learned function is a relatively small network. For example, in the case of a three-dimensional state space with five possible values per state dimension and five actions, the Q-table needs 625 entries. The corresponding neural network needs three input nodes (one per dimension), five output nodes (one per action), and a small number of hidden layers. In concrete terms, this can lead to a significant reduction in the storage size of the policy. For example, in ACAS-X, the policy table is 1.22 GB, but a neural network approximation is 104 KB (Figure 17) [33].

Figure 16. Components of Deep RL. Components that are learned in the deep RL setting include policy networks, value networks, and/or Q-networks. These networks approximate the tables and functions in Figure 15. They do not suffer from the same issues of scalability and generalizability encountered in non-deep RL.

Figure 17. Q-Table vs. Neural Network. The ACAS-X collision avoidance policy as a Q-table (left) and a neural network (right). The neural network requires much less storage, but it may differ from the original policy. Images from [33].

A neural network representation can also generalize, assuming similar inputs should lead to similar outputs. Effectively, unvisited states get evaluated according to the "closest" state that has been visited. On the other hand, this generalization makes verification and other analysis harder for a neural network than for a table.

The deep RL pipeline has essentially four main steps (Figure 18). First, the agent takes action according to its policy and observes the resulting transitions, observations, and rewards. Second, the history is saved. The process of saving the history varies according to different algorithms (out of scope for this report). Third, a loss function is computed. Again, this process varies according to the specific algorithm under consideration, but the loss is often a property of the value network or Q-network (such as self-consistency according to the Bellman equation) or the difference between observed and expected performance. The loss is weighted by the accumulated reward. Fourth, the loss is backpropagated across the neural network that represents the policy, value network, or Q-network. These steps repeat until performance stabilizes, or some other termination criteria are reached.

Figure 18. Deep RL Pipeline. The typical deep RL pipeline includes four steps: 1) executing a policy and observing the results, including transitions and rewards; 2) saving the results; 3) computing a loss function on the results, weighted according to the received reward; and 4) updating the policy via backpropagation. This process repeats until a termination condition is met. Different pathways for informing the RL pipeline are highlighted for each of the four steps in the pipeline. Blue text indicates pathways for informing the pipeline. The dashed box applies to model-based RL.

During the literature review performed for this report, we came across several main pathways for informing deep RL. These pathways are indicated in green in Figure 18. For step 1, reward shaping or modifying the loss function are common pathways for informed deep RL. Additionally, step 1 can also accommodate rules, strategies, and models that can be used by a planner to select actions based on a policy or value network. For step 2, external data, such as human gameplay or other demonstrations, can be added to the set of saved experience.
In step 3, human feedback or importance sampling can determine what data is used, or how it is weighted, in the loss function. For example, human demonstrations could be sampled more frequently to force the RL process to mimic the human examples. In step 4, differentiable models can be used as part of the backpropagation process, sometimes termed "backpropagation through time." This type of backpropagation is sometimes accomplished using an LSTM. In subsequent sections of this report, we highlight examples of some of these pathways for informing deep RL.

We note that the pipeline we describe here, displayed in Figure 18, is geared specifically toward deep RL. However, as the reader might infer from Figures 15 and 16, the main differences between RL and deep RL lie mainly with how policies are stored and updated. Many of the other components in the learning pipeline are the same. Therefore, many of the pathways for informing the deep RL pipeline apply equally well to RL in the absence of deep learning.

4. ILLUMINATING EXAMPLES

4.1 KNOWLEDGE INTEGRATED VIA ADDITIONAL KNOWLEDGE-CONTAINING DATASET(S)

Explicitly integrating knowledge into training data comes down to augmenting an original dataset with an additional (separate) dataset that captures important patterns with some certainty. Figure 19 aptly captures this concept.

Figure 19. Architecture with Additional Training Data from [2] (Aug 2018). This broadly applicable approach of leveraging prior knowledge through synthetic data generation is described nicely in "Learning from the experts: From expert systems to machine-learned diagnosis models" [2] (Aug 2018).

The additional dataset(s) often, but not necessarily, come from simulations (which can leverage understanding of the domain in the form of algebraic equations, knowledge graphs, or invariances), or from expert demonstration of the task at hand (Figure 20). For example (a minimal sketch of the simplest flavor of this path appears at the end of this subsection):

- When knowledge is represented in the form of algebraic equations or knowledge graphs, it can be leveraged to train data generators [34] such that the generated data is then used to supply samples into a widened neural network input layer. Additional data may also come from having access to additional information, such as access to a simulation's background processes [35]. These kinds of data augmentation have been shown to improve accuracy and generalizability.
- Likewise, supplementing with data generated by human demonstrations of the task at hand (the task that the neural network is being trained to perform) has also been shown to result in improved accuracy [36].
- Claims of greater accuracy and generalizability, along with data and computational efficiencies, come from integrating knowledge of invariances (i.e., symmetries across mathematical operations, such as translation, reflection, rotation, and scaling of an image [37]), which may be used to create and use new, virtual examples that are transformations of samples in the original datasets [37] [38].

Many of the examples listed above may blur into, simply, "feature engineering," but the knowledge-based data source(s) being explicit or separate from the other data is a distinguishing factor. Nonetheless, the performance gains reported from integrating knowledge into deep learning pipelines via additional training data touch several important metrics: accuracy, generalizability, and data and computational efficiencies. Performance in terms of accuracy is sometimes claimed only qualitatively, but when quantified we saw gains of 1-7%, 0.6-1.7%, and 2.4%, for example.
These gains, when compared to other knowledge integration paths, may be considered marginal to moderate, but they may be sufficient for certain applications. Improvements in generalizability and data or computational efficiencies may also be worthwhile gains that make all the difference.

Figure 20. Forms of Knowledge Shown to Integrate via Training Data [1].

Figure 21. Examples from Literature that Illustrate Knowledge Integration via Training Data. PANEL A, from [34], illustrates data augmentation from both explicitly and implicitly defined knowledge graphs. PANEL B, from [35], shows four networks (G, generator; D, discriminator; T, perception task; and P, prediction of the simulator's privileged information) that are trained jointly to improve the perception task's performance. PANEL C, from [36], illustrates the active learning approach where a human analyzes model results in order to suggest new instances for data labeling.

4.1.1 Integrating Knowledge via Training Data for Informed Reinforcement Learning

For reinforcement learning, it has been shown that simulation results can be leveraged to augment the main learning loop. Inherently, reinforcement learning involves executing actions in the real world or in simulation. However, additional information can be provided via simulation for use during online adaptation. For example, in "Robots that can adapt like animals" [39], the authors build a low-dimensional behavioral map of robot performance in simulation during the initial training phase (Figure 22, top row). The low-dimensional map reduces the high-dimensional state space of the robot to a behavioral map that measures expected performance along different behavioral axes (e.g., speed, heading, etc.). At run time, the robot can compare its actual performance to its expected performance via the behavioral map (Figure 22, bottom row). If performance degrades when the robot is deployed, due to damage or other issues, the robot will identify the discrepancy between its expected performance and the behavioral map. It will then update the map and attempt a new strategy from the updated map.

Figure 22. RL Example Informed via Additional Simulated Data. (LEFT) The two steps in using a performance map to improve online learning to recover from damage. Top row: creating an initial low-dimensional performance map from simulation data. Bottom row: updating the map online and using it to sample new behaviors. (RIGHT) Comparison of algorithm performance on a damaged hexapod. The bottom blue and orange curves represent uninformed RL approaches. The top set of curves represent variants of the algorithm presented in the paper. [39]

This efficient sampling and updating of simulated data with real-world experience can lead to significant performance improvement in reinforcement learning applications. The results of this process for a hexapod robot led to approximately 3-5x performance improvement over uninformed RL, as measured by the walking speed of the robot after sustaining damage (Figure 8). RL is in general a resource-intensive process, requiring significant training experience to get good performance, and it can struggle to scale with high-dimensional spaces. Informing RL with this type of simulation data aids the process by providing an expectation of performance a priori (thereby reducing the amount of experience that needs to be generated online), and by reducing the dimensionality of the search space (thereby reducing the difficulty of efficiently sampling the search space).
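Before turning to the next integration path, the simplest flavor of this one, creating "virtual" examples from known invariances as discussed at the start of Section 4.1, is easy to sketch. The example below is illustrative only (not taken from the cited works); it assumes NumPy and a task that is genuinely invariant to 90-degree rotations and horizontal flips, and the array shapes are arbitrary.

```python
import numpy as np

def augment_with_symmetries(images, labels):
    """Create 'virtual' training examples by applying transformations the task
    is assumed to be invariant to (here: 90-degree rotations and a horizontal
    flip).  The prior knowledge lives in the choice of transformations, which
    stays explicit and separate from the original data."""
    aug_images, aug_labels = [], []
    for img, lab in zip(images, labels):
        variants = [img, np.fliplr(img)]
        variants += [np.rot90(img, k) for k in (1, 2, 3)]
        aug_images.extend(variants)
        aug_labels.extend([lab] * len(variants))
    return np.stack(aug_images), np.array(aug_labels)

# Tiny demonstration on random "images": 8 originals become 40 training samples.
rng = np.random.default_rng(0)
images = rng.random((8, 28, 28))
labels = rng.integers(0, 10, size=8)
big_x, big_y = augment_with_symmetries(images, labels)
print(images.shape, "->", big_x.shape)   # (8, 28, 28) -> (40, 28, 28)
```

Because the knowledge-based examples are generated explicitly and kept separate from the original data, this remains a knowledge-integration step rather than ordinary feature engineering.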
4.2 KNOWLEDGE-BASED ARCHITECTURE OR WEIGHTS (A.K.A., HYPOTHESIS SET)

Our literature review included many creative examples of integrating knowledge into the hypothesis set, i.e., the initial architecture and/or weights of a neural network. Collectively, the examples we reviewed indicate that this integration path can often deliver significant improvements in performance. We found that the degree of knowledge integration into neural network architectures can range from modifying the input layer (via invariances or knowledge graphs), to constraining intermediate layers (via algebraic equations or logic rules), to influencing the overall framework (using simulation results, probabilistic relations, or dynamical system constructs). Human feedback through model steering (via weight sliders, choice of model, etc.) is also considered a means of integrating knowledge into the neural network hypothesis, which can be useful in applications where valuable knowledge may not be as readily and formally represented. For example:

Figure 23. Forms of Knowledge Shown to Integrate via Architecture and/or Weights [1].

- To account for known invariances, additional input layers can be embedded into an architecture [40] [41] (panels A and E in Figure 24). Note that embedding invariances into a neural network architecture has been reported to yield both a larger network (1.5x [42]) and a smaller network (1/3x [41]).
- Alternatively, a neural network can be architected to allow "dynamic" input vectors with zero padding that is based on importance and determined, for example, via reasoning through a knowledge graph [43] (panel B in Figure 24).
- Moving beyond the input layer, physics-based constraints can be enforced by mathematically derived (via algebraic equation) intermediate variables in a neural network [44] (panel C in Figure 24).
- Graph neural networks (GNNs) have a unique architecture that allows them to directly process graphical data (including probabilistic graphs), where knowledge may be present in the graph structure and/or priors. GNNs have been shown to offer greater efficacy in handling loopy graphs. They also deliver greater accuracy and generalizability than larger or less "informed" graph structures [45] [46] (panel F in Figure 24).
- More holistically, a neural network can be architected to learn unknown parameters in differential equations, or other mathematical frameworks, such that the architecture is determined by the necessary mathematical constructs [47] (panel D in Figure 24).

Deep Lagrangian Networks (DeLaN, from [47]) is a particularly elegant example where principled knowledge was infused into the deep learning system. Both the model's architecture and the gradients used to train it are derived from a rich understanding of mechanics (the Euler-Lagrange equations of motion for mechanical systems). In this example, the aim is to solve a second-order ordinary differential equation (ODE), with the network trained to infer the system's dynamics from the generalized positions, velocities, and accelerations of a mechanical system's components. The DeLaN architecture explicitly incorporates the structure of the Euler-Lagrange equation and is designed specifically to relate a mechanical system's state and a control input (e.g., motor torques) to a prediction of the system's evolution, the outputs of the feed-forward neural network. The rest of the architecture incorporates these quantities with the observed system state to produce the generalized forces, via exact Euler-Lagrange calculations.
The full architecture is trained in a traditional way, where the loss function effectively optimizes DeLaN's parameters to minimize the violation of Lagrangian mechanics. In terms of effectiveness, the DeLaN model learns to relate system dynamics to produce joint torques for desired motion trajectories, which is directly attributable to the physics-informed design of its architecture. In terms of performance, not only does the learned model come with a high degree of interpretability and the ability to handle arbitrary velocities and accelerations that fall outside of the training data, it reached this state using less data. These gains in performance are not quantified, but they are nonetheless significant.

Generally, integrating knowledge into deep learning pipelines via the hypothesis set has been shown to deliver significant performance gains in terms of accuracy, generalizability, efficiencies in data, compute, and model size, and interpretability. Most of these improvements are reported qualitatively, but we found improvements in accuracy ranging from 0.6-16%, reductions in test error on the order of -6 to -70%, requirements of 40-57% less data, similar performance with a fraction (1/3) of the network parameters, and a 440x speedup in training time.

Figure 24. Examples from Literature that Illustrate Knowledge Integration via Hypothesis Set. PANEL A, from [40], shows how invariances deduced from Reynolds Navier-Stokes equations can be embedded into a neural network architecture. PANEL B, from [43], illustrates how weights/zero-padding in a network can be determined from propagation through a knowledge graph. PANEL C, from [44], illustrates physics-constrained intermediate variables. PANEL D, from [47], shows how physics-based, differentiable transformations can be built into a neural network. PANEL E, from [41], demonstrates how simple cyclic rotations are exploited in convolutional neural networks. PANEL F, from [46], illustrates how a probabilistic graphical model is translated so that it can be processed by a graph neural network.

4.2.1 Integrating Knowledge via Hypothesis Set for Informed Reinforcement Learning

The reward function is a critical architectural element that directly affects the performance of reinforcement learning algorithms. It is the primary mechanism for conveying the quality of an action or policy to the learning algorithm. For many tasks, designing a good reward function is hard, especially for tasks that are complex, have many steps, or have delayed rewards. With typical, Markovian, state-dependent rewards, accomplishing these types of tasks can require very long training times. Using logical rules and automata that encode knowledge about the structure of the task to inform the reward function has strong potential to address these difficulties.

The authors of the paper DeepSynth [48] present a method for informing the reward function using logical rules based on simulation results. At run time, a semantic segmentation module observes the scene and the reward (Figure 25, left). By identifying the semantic data corresponding to the reward, the algorithm can build an automaton capturing the reward (Figure 25, right). The only rewards specified by a human are very sparse: a reward of 100 for finding a key and a reward of 300 for opening the door.
This sequential task with sparse rewards is very difficult for most RL algorithms, but providing a reward automaton to the algorithm allows the agent to reason about the tasks that will lead to higher rewards in the future, resulting in convergence two orders of magnitude faster than the next best algorithm (Figure 26).

Figure 25. RL Example Informed via Architectural Elements. Left: A semantic segmentation module captures information about the task during training and converts it to a reward automaton. Right: The reward automaton synthesized from Montezuma's Revenge captures the high-level steps involved in playing the game. [48]

This logical pathway for informing the reward has several other benefits. First, in addition to an automaton that is automatically constructed, the method can also accept an automaton or partial automaton provided by the user. Rather than, or in addition to, specifying a sparse reward, a user can provide high-level tasking as an automaton. This automaton allows the agent to better understand the operator's goals. Further, the policy representation conforms to the automaton. Therefore, the operator gets the added benefit of interpretable policies. Thus, informed rewards offer benefits to both the algorithm and the user. A user can specify non-trivial tasks that are difficult to express, resulting in much faster convergence, and the algorithm provides output that also informs the user about its goals and its policy.

Figure 26. Knowledge-Integrated RL via Architectural Elements - Performance. The methodology proposed in DeepSynth suggests that reward automatons that capture high-level aspects of the task outperform other RL methods and converge two orders of magnitude faster. [48]

4.3 KNOWLEDGE-INTEGRATED LEARNING ALGORITHM

A typical neural network supervised learning approach involves a loss function that aims to minimize the prediction error produced by the function f(x) by finding optimal weights for f over the samples x_1, ..., x_N, using the real labels y, as in Equation 1, where R may be any standard regularization term. Integrating knowledge into the learning algorithm then generally amounts to adding an additional term, or terms, to this loss function, as in Equation 2. Note that additional weighting parameters, as in Equation 3, may be included to control how steep the penalty should be for violating any of the terms in the loss function.

\[ \arg\min_{f} \left( \sum_{i=1}^{N} L\big(f(x_i), y_i\big) + R(f) \right) \tag{1} \]

\[ \arg\min_{f} \left( \sum_{i=1}^{N} L\big(f(x_i), y_i\big) + R(f) + L_k\big(f(x_i), y_i\big) \right) \tag{2} \]

\[ \arg\min_{f} \left( \sum_{i=1}^{N} \lambda_l\, L\big(f(x_i), y_i\big) + \lambda_r\, R(f) + \lambda_k\, L_k\big(f(x_i), y_i\big) \right) \tag{3} \]

According to the survey paper and our own literature review, several of the forms of knowledge representation can be integrated into the learning algorithm, or, conversely, the learning algorithm can be used to engage human expertise. For example:

- We found that additional knowledge-based loss terms, defined by algebraic or differential equations, have a straightforward path into the loss function and offer significant improvements to performance via improved accuracy or reduced test error, computational speedup, and data efficiency [49] [50] [51] (examples B, C, and D in Figure 28). Note that if knowledge is represented as logic rules, appropriate continuous and differentiable constraints need to be selected for the additional loss term(s) [52].
- If knowledge is represented with a knowledge graph, the additional terms can effectively enforce strongly connected variables to behave similarly in a model [53].
Example A in Figure 28 claimed greater interpretability with this approach, which led to the discovery of new clinically relevant patterns.
- If knowledge is captured in a simulation or in auxiliary models, the learning algorithm can be modified to ensure that only predictions that obey the constraints implied or embedded in those models affect the neural network learning. The example from [54], illustrated in Figure 29, suggests that using semantic knowledge through regularized objectives can lead to slightly greater improvement in performance (overall accuracy and better generalizability) than using additional inputs. We did not come across any reporting of the effect of integrating auxiliary tasks on compute time. Another example, [55], employed a similar approach by jointly running a physics-based simulation during training. This example claimed a reduction in the amount of data required for training along with improvement in accuracy and generalizability (but led to significantly more expensive compute time in simulation).
- Alternatively, a human can exploit the loss function using visual interactive analysis techniques. One approach is described in [36], where the training process using the standard loss function supplies samples on which the model performs poorly, and similar data samples are gathered and labeled for additional training, effectively focusing the model to train on its weaknesses. Another approach is described in [56], where a penalty is imposed when samples are classified correctly but for the wrong reasons, thereby encouraging the model to focus on relevant features.

Overall, integrating knowledge into deep learning via the learning algorithm has been demonstrated to provide significant improvements across several important performance metrics: greater accuracy (+1.5 to +20%, or -4 to -43% test error), better interpretability and generalizability, data efficiency from requiring less data (as little as 4%) or less labeled data (1% to no labels at all), and computational speedups (4x).

Figure 27. Forms of Knowledge Shown to Integrate via Learning Algorithm [1].

4.3.1 Integrating Knowledge via Reinforcement Learning Algorithm

In some problems that are difficult for machine learning, humans are able to make decisions that are "good enough" without much difficulty. Often, this is because humans understand the rules of the game they are playing, or the way that other players will react to them. Imbuing an RL agent with this same understanding can significantly improve the performance of the RL algorithm. When actual human experience is included, the system has the potential to quickly match or even exceed human performance.

One example of integrating knowledge into the RL algorithm in this way is AlphaGo [4]. This approach includes models of the evolution of the game in the RL algorithm, along with human gameplay data, to develop a superhuman player for the game of Go. First, human expert data is used to initialize a policy network, which can then be used as the initial policy network by the RL agent (Figure 30). This network is then refined through self-play to improve over the human baseline. A value network is trained by comparing expected performance against actual performance using Monte Carlo tree search (Figure 31). This comparison is enabled by both a dynamical system representation of the game's dynamics (for determining the possible evolutions of the game) and the human data (for initializing the probability of moves by the other player).
As a result, the AlphaGo algorithm was the first AI system to beat a professional-level human player in Go, and it ranked 1.6 times higher by ELO (a common ranking system for comparing two players in zero-sum games) than the next best AI for Go. This example demonstrates the value of incorporating knowledge into the algorithm, as well as human expertise directly in the training data. The state space of Go is extremely large, so a complete enumeration of the state space is intractable. By leveraging the models of the evolution of the game, and by using probabilities from human gameplay and self-play, the system can develop useful internal models of how the game will evolve. It can measure the difference between the expectations output by its models and how the game unfolds to improve performance.

Figure 28. Examples from Literature that Illustrate Knowledge Integration via Learning Algorithm. Excerpts from several papers illustrate different modifications of the neural network loss function. In PANEL A, from [53], the learning algorithm encourages neural network predictions to be similar to the matrix representation of the probabilistic/knowledge graph. In PANEL B, from [49], an additional loss term forces monotonicity among specific input features. In PANEL C, from [50], the prediction loss is minimized while considering additional physics-based terms. In PANEL D, from [51], not satisfying a partial differential equation (PDE) encourages a generator (G) to produce samples that satisfy the PDE.

Figure 29. An Example from Literature that Illustrates Integration of Auxiliary Tasks via Learning Algorithm. The main objective L_M in this example from [54] is to predict the center word given the surrounding context. Additional loss terms, L_E and L_R, incorporate semantic and syntactic knowledge via auxiliary tasks that leverage an entity vector and a relation matrix.

Figure 30. RL Example with Knowledge Integrated via Learning Process. LEFT: Human expert data is used to train a network to sample actions (rollout policy) and a network to play the game (SL policy network). RIGHT: The network trained on human gameplay is used to initialize an RL policy network that also learns a value network for the game via self-play. [4]

Figure 31. RL Example with Knowledge Integrated via Learning Process - Monte Carlo Tree Search. Monte Carlo tree search is used to evaluate moves based on game play. First, an action is selected with the maximum Q value (a). Then actions are sampled according to the policy network (b). The learned value network and rollout policy network (trained on human gameplay) estimate the outcomes of the game (c). Finally, Q values are updated along the tree. [4]

Figure 32. RL Example with Knowledge Integrated via Learning Process. AlphaGo (blue) was compared against several other Go programs (red) and a human expert (green). Distributed AlphaGo outperformed the human player, and both AlphaGo variants significantly outperformed the other Go programs, even with handicaps (pale red bars). [4]

Likewise, by including human expert data, the search space is further limited, and a more intelligent valuation of moves is possible. It also provides a warm start to the learning process: instead of learning from scratch, the algorithm can learn from a good policy based on what expert humans do. Informing an algorithm in this manner should lead to a significant reduction in training time.
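The warm-start idea is easy to see in isolation. The sketch below is a hypothetical illustration (assuming PyTorch; it is not AlphaGo's training code, and the dataset, dimensions, and network are invented for the example): a policy network is first fit to expert state-action pairs with a supervised cross-entropy loss, and only then handed to the RL loop as its starting policy.

```python
import torch
import torch.nn as nn

# Hypothetical expert dataset: state encodings paired with the expert's moves.
# Shapes are illustrative; a real system would load logged demonstrations.
states = torch.randn(1024, 64)                  # 64-dimensional state encoding
expert_moves = torch.randint(0, 10, (1024,))    # 10 discrete actions

policy = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 10),                         # logits over actions
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Supervised "behavior cloning" pass: imitate the expert before any RL begins.
for _ in range(20):
    optimizer.zero_grad()
    loss = loss_fn(policy(states), expert_moves)
    loss.backward()
    optimizer.step()

# The pre-trained `policy` is then handed to the RL loop as its initial policy,
# so exploration starts from roughly expert-level play rather than from scratch.
```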
Without this kind of information, the system would have to essentially luck into winning positions, and then reinforce those strategies. It should be noted that recent research has focused on achieving similar or better results without using problem-specific algorithms or human data. For more information on those research directions, see Sec. 1.2.2.

4.4 LEARNED-MODEL CONSISTENCY CHECK (A.K.A., FINAL HYPOTHESIS)

Generally, the aim of training a deep learning model is to arrive at a final hypothesis, after pursuing intermediate hypotheses, for the model and its weights. As such, there is opportunity to integrate knowledge into the learning pipeline during training. Knowledge is often leveraged as standard practice when the performance of deep learning models is compared to selected baselines that are knowledge-based models (such as algebraic equations or simulation results). Here we shed light on how knowledge may be leveraged to refine a model, as a final step or as an integrated intermediate step in the learning process. As knowledge-integrated deep learning gains in popularity, we expect approaches that integrate knowledge at the final stages will too, as they have been shown to yield greater accuracy, generalizability, and computational efficiency. The following examples suggest how knowledge may be integrated as an intermediate step to refine the learned model just before or while arriving at a final model:

- In [57], a neural network is fed sequences with temporal order to iteratively correct the learned model. The approach is claimed to more accurately reproduce small-scale temporal effects.
- In [58], knowledge is integrated through a semantic consistency modifier that improves object detection through a re-optimization step.
- Similarly, in [59], a surrogate model is iteratively improved with new observations. This approach is illustrated in Figure 34 and claimed improvements in accuracy, as well as a computational speedup.
- Another way of leveraging knowledge in the final hypothesis is to pass the NN's predicted outputs into an embedded knowledge-based model that can provide a more informed loss as the model trains. This sub-class of techniques is covered in greater depth as an embedded and jointly trained approach in Section 4.5.3.

Figure 33. Forms of Knowledge Shown to Integrate via Final Hypothesis [1].

Figure 34. Example from Literature that Illustrates Knowledge Integration via Final Hypothesis. This diagram illustrates a training approach from [59] that iteratively improves a surrogate model with new observations.

4.4.1 Knowledge Integrated into Final Hypothesis for Reinforcement Learning

In reinforcement learning, the reward function is used to weight actions and trajectories that meet some criteria of "goodness" more heavily than other actions and trajectories. In this sense, the reward can be viewed as either the initial hypothesis or the final hypothesis. Nevertheless, the typical use of the reward function is to reward (or penalize) outcomes based on a designer's initial notion of good behavior of the reinforcement learning system. In some cases, the reward can be used to explicitly inform the reinforcement learning process as a final hypothesis, operating as a consistency check on the results of the learning process.

For example, in [60], the authors study branes from string theory. In particular, they wish to find vacua, stable states that can be used for making predictions in physics. The space of plausible vacua is at least 10^500, and therefore they use a deep RL agent to explore this large space.
Not all vacua are consistent with known physics. The authors use the consistency of each vacuum the system discovers to inform the reinforcement learning process via the reward function. Thus, vacua that are consistent with known and accepted conditions in physics receive higher reward than those that are inconsistent. Some selected results from [60] are presented in Fig. 35. First, the authors observed an O(200) increase in the number of feasible solutions discovered relative to an uninformed method. An additional, qualitative result is that the consistency checks used in the reward function resulted in a strategy that is similar to how humans think about branes and vacua: they use so-called "filler" branes to ensure some degree of consistency with known physics, and then test the results of adding other branes. The right-hand side of Fig. 35 displays the results of the RL system learning to use filler branes. These results show that integrating knowledge into the final hypothesis via the reward function is a promising and viable path for RL applications.

Figure 35. RL Example with Knowledge Integrated via Final Hypothesis. LEFT: The knowledge-integrated RL approach (orange and blue markers) outperforms naive approaches (green, red, and purple markers) by finding O(200) more solutions. RIGHT: The knowledge-integrated RL approach learns to use "filler" branes to find a larger set of feasible solutions. [60]

4.5 EMBEDDED AND JOINTLY TRAINED NEURAL NETWORKS

A special variant considered in scope for this study, even though the knowledge is not explicitly and tightly integrated, is neural networks (NNs) that are embedded and jointly trained. This variant is not explicitly constrained or informed by architecture or topology; rather, the NNs are informed by their surrounding context. We found three major categories of embedded and jointly trained neural networks, which are described in this section.

4.5.1 Neural Network Surrogates

A neural network surrogate is an embedded and jointly trained NN that can take the place of a computationally expensive knowledge-based model, or that can serve to supplement a knowledge-based model in order to represent known but uncharacterized phenomena in the system being described. This strategy exploits the fundamental notion that an NN effectively serves as a universal function approximator, as well as the fact that modern NN architectures are designed in terms of highly efficient, parallelizable array operations. In this way, an NN surrogate can be expected to be both sufficiently expressive to encapsulate complex and realistic processes in the modeled system, and efficient enough to eliminate significant computational bottlenecks from the system.

4.5.2 Neural Networks with Surrounding Mathematical Structure

Another flavor of embedded and jointly trained NNs is where the NN is architected to learn unknown terms in an otherwise known mathematical structure, such as a differential equation. In this context, the NN can learn to embody specific, unknown mechanisms in a system that is otherwise well characterized by the surrounding informed structure. There can be many benefits to embedding an NN in this way. Generally, provable properties of the encompassing mathematical structure can provide valuable insights into, and even bounds on, the behavior of the NN. Additionally, tools of analysis and methods for working in this context (e.g., numerical methods for solving differential equations) can lead to specialized and principled approaches for training the NN.
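The basic pattern, a learnable term embedded inside otherwise known dynamics and trained by backpropagating through the numerical solver, can be sketched in a few lines. The example below is a hypothetical illustration of the idea (assuming PyTorch; the decay constant, network size, and forward-Euler integrator are arbitrary choices, and a production implementation would use a proper ODE solver); the UDE methodology described next formalizes and generalizes it.

```python
import torch
import torch.nn as nn

# Known structure: dx/dt = -a*x + f_theta(x).  The linear decay term -a*x is
# the prior scientific knowledge; f_theta is a small NN for the unknown part.
a = 0.5
f_theta = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

def rollout(x0, rhs, steps=50, dt=0.1):
    """Forward-Euler rollout; every step is differentiable, so gradients flow
    back through the 'solver' into whatever parameters rhs depends on."""
    xs, x = [x0], x0
    for _ in range(steps):
        x = x + dt * rhs(x)
        xs.append(x)
    return torch.stack(xs)

true_rhs = lambda x: -a * x + torch.sin(x)     # stand-in "ground truth" dynamics
learned_rhs = lambda x: -a * x + f_theta(x)    # known decay + learnable term

x0 = torch.tensor([[1.0]])
observed = rollout(x0, true_rhs).detach()      # synthetic observations

optimizer = torch.optim.Adam(f_theta.parameters(), lr=1e-2)
for _ in range(300):
    optimizer.zero_grad()
    loss = ((rollout(x0, learned_rhs) - observed) ** 2).mean()
    loss.backward()
    optimizer.step()
```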
"Universal differential equations" (UDEs) is a methodology for incorporating neural networks (or other machine-learnable models) into a differential equation, so that prior structural (e.g., scientific) knowledge can be combined with flexible, data-driven approaches for modeling systems [61]. In a UDE, the neural network is embedded in a differential equation (or a system of equations) alongside known, analytical terms that describe the system's behaviors or dynamics. Thus, in a UDE, the NN is not responsible for learning the governing dynamics of the system in its entirety; rather, it is tasked with modeling the dynamics associated with a subset of the mechanisms at play in the system. In this way, the governing dynamics of the unknown mechanisms or terms can be learned from data, while prior scientific knowledge (e.g., conservation laws, incompressibility constraints, transport models) of the system is modeled explicitly. The embedded NN is trained by solving the differential equations and comparing the result to recorded data. Note that the adjoint sensitivity method enables backpropagation through standard ODE solvers [62], which can be treated as black boxes. Thus the full array of numerical methods for solving differential equations can be leveraged to train the embedded NN.

For example, the UDE methodology for embedding NNs has been demonstrated to enable the recovery of governing equations using less data, to utilize arbitrary conservation laws as prior knowledge in a model, to discover and recover the exact mechanisms that govern a realistic biological PDE, to enable an adaptive method for solving a 100-dimensional nonlinear PDE, and to automate the modeling of a fast and physically realistic surrogate of a climate simulation [61].

Figure 36. Applications of Universal Differential Equations. LEFT: A Susceptible, Infectious, and Recovered (SIR) model is augmented with neural network terms to produce a learned model of the quarantine impact on the spread of Covid-19 [63]. RIGHT: A model for battery degradation in electric aircraft combines mechanistic and learned terms to describe the system's dynamics [64].

Although the UDE methodology was only proposed in 2020, it has already made an impact on real-world applications. In the field of infectious disease modeling, a UDE was developed to augment a Susceptible, Infectious, and Recovered (SIR) model to create a learned model of the impact that quarantine policies have on the spread of Covid-19 [63] (Figure 36). A "universal battery degradation model" was used to model the battery performance and degradation for electric aircraft [64]. As depicted in Figure 36, an NN is incorporated with an electrochemical cell model and various mechanistic models for battery degradation. This research produced a superior physics-informed and data-driven model that better predicts battery degradation and performance than the purely mechanistic battery models.

UDEs can also enable researchers to discover exact analytical forms of dynamical mechanisms in their systems. To demonstrate this, the famed Lotka-Volterra predator-prey model was used to generate noisy population data, and a UDE was formed to explicitly include the prior knowledge of exponential growth (decay) in the prey (predator) while using an NN to represent the unknown population-interaction mechanisms [61]. As shown in Figure 37, the UDE-embedded NN was trained over a narrow domain of training data, which did not encompass a full period of the cyclic solution.
The resulting learned predator-prey interaction model proved capable of yielding high-quality extrapolation of the populations far beyond the regime covered by the training data. Furthermore, the authors were able to use the method of sparse identification of nonlinear dynamical systems (SInDy) [65] on the learned component of the UDE to recover, on a polynomial basis, the exact quadratic relationship that describes the predator-prey interaction dynamics in the Lotka-Volterra system.

Figure 37. Learning Predator-Prey Dynamics with a Universal Differential Equation. A universal differential equation with learned terms for the predator-prey interactions of a Lotka-Volterra system is able to recover, to high fidelity, the quadratic interaction dynamics from only a limited regime of training data.

Figure 38. Replacing an Expensive Mechanistic Model with an Efficient Surrogate. A continuous-time echo state network surrogate of an HVAC system reliably and efficiently models a heating system interacting with ten rooms.

A learned model embedded in a UDE can also be tailored to the challenging dynamical features of its encompassing system, such as the manifestation of processes on highly disparate time scales, which is often referred to as "stiffness" in a system. For example, Robertson's chemical reaction equation features stiffness in the form of a rapidly transforming transient chemical species amidst a slowly occurring, stable chemical reaction. Models like physics-informed neural networks [66] and recurrent neural networks struggle to learn to be reliable surrogates for such stiff nonlinear systems. Thus the embedded model ought to be further informed via its design to accommodate these sophisticated dynamics. Continuous-time echo state networks (CTESNs) [?] exemplify such an informed design. They prove capable of learning highly stiff dynamics, and thus can serve as fast and reliable surrogates for systems that are typically expensive to simulate. A compelling application of a CTESN demonstrated that the learned model was capable of serving as an accurate learned surrogate of a heating, ventilation, and air conditioning (HVAC) system (Figure 38). The runtime of the surrogate scales linearly with system size, whereas the standard implicit ODE solver scales cubically with system size; thus, for realistic settings, the learned surrogate accelerates the HVAC simulation by nearly 100x.

4.5.3 Consequence Modeling

Consequence modeling is a phrase we have coined (credit for the term goes to Jason Nezvadovitz, MIT Lincoln Laboratory, Group 76) to represent a special case where knowledge is integrated via the final hypothesis of the ML system. Note that this discussion could just as well have been covered in Section 4.4. A consequence model maps the NN's output (e.g., the control parameters for aiming a trebuchet) to a meaningful, measurable quantity of interest (e.g., where a launched projectile ultimately lands, given that the trebuchet was fired using said control parameters). The consequence model (e.g., differential equations describing the kinematics of a trebuchet) is assumed to be differentiable so that its results can be incorporated into the objective function that is used to train the NN; thus the training gradients become informed by the expert knowledge that is instilled via the consequence model.
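As a minimal, hypothetical illustration of this pattern (assuming PyTorch; the physics, launch speed, network size, and angle parameterization are invented for the example and are far simpler than the trebuchet model in [67]), the sketch below trains a small network to aim a simple projectile, with the loss computed from the differentiable "consequence", the miss distance at the landing point, rather than from any supervised target for the control parameter itself.

```python
import math
import torch
import torch.nn as nn

g, v = 9.81, 30.0                                  # gravity and a fixed launch speed

def landing_distance(theta):
    """Differentiable 'consequence model': flat-ground projectile range."""
    return (v ** 2) * torch.sin(2.0 * theta) / g

aimer = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
max_range = v ** 2 / g                             # about 91.7 m, reached at 45 degrees

def predict_angle(target):
    # Squash the raw output into (0, pi/4], where range is monotone in the angle.
    return torch.sigmoid(aimer(target / max_range)) * (math.pi / 4)

optimizer = torch.optim.Adam(aimer.parameters(), lr=1e-2)
for _ in range(500):
    target = torch.rand(64, 1) * 0.9 * max_range   # random reachable targets (meters)
    theta = predict_angle(target)
    miss = landing_distance(theta) - target        # loss lives in consequence space
    loss = (miss ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the loss is expressed in the consequence space (meters of miss distance), the gradient automatically encodes how sensitive the landing point is to the launch angle, which is exactly the expert knowledge the analytic range formula carries.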
This approach is in stark contrast to standard approaches to providing feedback to an NN via its final hypothesis; typically, that entails designing a loss function only to have nice mathematical properties for solving a general optimization problem (e.g., an L2 loss measuring the discrepancy between the NN's predicted control parameters and known control parameters). Consequence modeling instead uses expert knowledge to formulate a meaningful and interpretable loss function whose landscape captures the interplay of quantities of interest, including the NN's various outputs, in a principled way.

Incorporating a differentiable simulator downstream of an NN is an elegant and powerful way to inform a learned model. Consider, for example, an NN that is responsible for aiming a trebuchet: given the target's position and the wind conditions, the NN must learn to deduce the control parameters (e.g., the counterweight magnitude and angle of release) that will lead to a precise hit. A traditional approach to training a surrogate for this inverse physics problem would be to gather or simulate a large collection of data (tuples of control parameters, wind, and target) and then train the NN in a supervised fashion on this training set.

By contrast, an informed approach to this problem was demonstrated in [67], wherein the neural network was placed upstream of a differentiable physics model of the trebuchet. In this approach, which is depicted in Figure 39, the neural network is trained by predicting control parameters, which are then used to "fire" the trebuchet. The neural network is thus trained not against a naive L2 loss between predicted and desired control parameters, but instead against the measured displacement between the desired target and the actual strike point. In this way, the gradients that propagate back to the network are informed by the trebuchet model that is used to measure the meaningful "consequence" of the NN's prediction. As such, the NN is optimized based on the known interplay between disparate quantities (e.g., counterweight and launch angle) that are naturally and equitably combined in forming the consequence-based loss. Not only does this process quickly train a reliable and lightweight surrogate, but it also provides a clear path toward improving the system's fidelity: rely on scientific expertise to develop increasingly detailed differentiable simulators.

Figure 39. Training a Model to Aim a Trebuchet Using Consequence Modeling. A neural network is provided with the wind speed and desired target distance, and is tasked with aiming the trebuchet to hit the target. The model's outputs are verified by simulating the resulting shot with a differentiable physics simulation of the trebuchet.

Aspects of "consequence" modeling have found early success in realistic control problems. In these applications the consequence can manifest in diverse ways, such as a measured departure from an ideal trajectory [68], or as a violation of a control Hamiltonian [69]. In these cases the objective function can be used to directly encode the constraints of the optimal control problem and thus provide the learned components of the system with physics-informed feedback on its evolving policy.

5. PERFORMANCE GAINS VS.
VARIANTS OF KNOWLEDGE INFORMED AI

As previously noted (Section 2), knowledge-integrated informed AI techniques have been shown to lead to significant improvements in performance when measured by a variety of metrics that matter for technology problems of national importance (accuracy, enabling new discoveries through broadened applicability, improved safety and reliability, and efficient use of resources). Looking closely at which forms of knowledge representation and which integration paths lead to these gains in performance suggests that there is a tradespace across informed techniques that should be considered in order to develop the best-performing approach for any given problem.

In Table 5, we gather the quantified and qualified performance gains reported for each example described in the previous section (and more) into one large table divided by integration path and form of knowledge representation. Comparing reported performance across the rows and columns suggests that (1) some gains are achieved regardless of the selected knowledge representation and/or integration path (with differences in magnitude), and (2) some gains are more prevalent for particular knowledge representations and/or integration paths. The following are some observations drawn from this table:

- All integration paths and knowledge forms have demonstrated accuracy or score improvement. The magnitude of these gains suggests that stronger improvements are achieved when knowledge is integrated via the learning algorithm, then the hypothesis set, then the training data.
- Reporting of improvements in accuracy sometimes coincides with better generalizability beyond the training distribution.
- In terms of resource efficiencies:
  - Data efficiency is another improvement that is common across integration paths and knowledge representations, although not always reported on.
  - Claims of improvements in computational efficiency often coincide with data efficiency. Also note that knowledge integration can instead increase compute requirements; this has been reported in at least two cases where knowledge is integrated into the learning algorithm [55] [70].
  - The effect on network size (i.e., network simplicity) is not commonly measured. The few available reports suggest the effect on size can be better (fewer parameters, smaller network) or worse (more parameters, larger network). Furthermore, we found that this effect is only reported when the knowledge form is invariances.
- Claims of interpretability are made mostly when the knowledge representation has some symbolic reasoning power. An exception is where the network architecture itself is based on knowledge.
- Informed RL examples mostly use symbolic representations of knowledge (logic rules, knowledge graphs, and human feedback) and often claim improvements in interpretability (especially if the integration path is via the hypothesis set).
- There is relatively less research activity for knowledge integration into the training data and hypothesis set. This may shift with greater awareness of informed techniques.

There are several factors to note as we draw conclusions from this table:

1. Researchers more often report on performance gains, not disadvantages.
2. Reporting on some metrics is standard (accuracy, score improvements, reductions in test error); however, reporting on other metrics (generalizability, data efficiency) is entirely up to the authors.
3. Several reporting inconsistencies were observed:
   - Improvements are not always reported using the same metrics: overall accuracy, AUROC, RMSE, test error, mAP, IoU, etc.
   - Certain metrics are rarely or never quantified (generalizability, interpretability, reliability).
   - The choice of baseline model is sometimes questionable or missing entirely.
4. Publications generally document unique solutions for different problems. Therefore, while comparing performance gains across solutions suggests an interesting tradespace, theoretical and empirical analysis across various techniques for select problems, or classes of problems, would be more conclusive.

TABLE 5
Performance Gains by Knowledge Representation and Integration Path

Legend: 1. accuracy (%a), test error (%e), or score (xS); 2. generalizable (G) or interpretable (I); 3. resource efficiency (%<D data, %<L labels, xC computational speed-up, fraction N of network size). White-box highlight = RL examples; + = only qualified (or quantified with a unique metric); 1st sh = first super-human performance. Columns: Training Data, Hypothesis Set, Learning Algorithm, and Final Hypothesis, each with sub-columns 1-3. Reported entries are listed per knowledge representation, in column order:

- Algebraic equations: 2-16%a, 30-60%e; 440xC; 10xC; -4-43%e; 0%<L; %a+
- Logic rules: xS+, 8xS; I+, I+, I+; 57%<D, <D+, xC+, 100xC; 20%a, 8xS; I+; 0%L, 1%L
- Simulation results: 1-7%a, 3-5xS; G+; 40%<D, 100xC; %a+, %a+; G>, G+; <D+, >C; %a+; xC+
- Dynamical systems: %a+, same, -99%e; G+; <D+, xC+, <D+, 15kxC, 2xC; -30%e, %a+, %a+, 1st sh; <D+, 96%<D, 4xC
- Knowledge graphs: .6-1.7%a, 2.4%a; I+; .6-1.7%a, 2.4%a, 8xS; I+, I+; discovery, 1.5-6.5%a, 8xS; I+, G+, I+; %a+, %a+; G+, G+
- Probabilistic relations: 9-10%a, -50%e, efficacy; G+, G+; <D; <D>
- Invariances: %a+; G+; <D+, xC+; -6%e, -30%e, -3.3%e; G+, I+; 1/3rd xN, 1.5xN, xC+; I+; xC+
- Human feedback: %a+, %a+, 1-4.5xS, 1st sh; <D+, xC+; %a+, 1.5-3xS, 1-4.5xS, 1st sh; I+; <D+, xC+; 1st sh, 1st sh, 1.4-6.5xS; Trust+; xC+, <D+

TABLE 6
References Associated with Gains Included in the Performance Gains Table

Listed per knowledge representation, in the same column order as Table 5:

- Algebraic equations: [44], [71], [71], [72], [49], [73], [57]
- Logic rules: [74], [75], [32], [74], [75], [76], [32], [32], [32], [48], [77], [75], [75], [73], [77]
- Simulation results: [35], [39], [35], [78], [48], [79], [55], [79], [55], [55], [55], [59], [59]
- Dynamical systems: [51], [80], [80], [47], [47], [47], [51], [80], [80], [50], [51], [81], [4], [51], [82], [82]
- Knowledge graphs: [34], [43], [43], [34], [43], [75], [43], [75], [53], [54], [75], [53], [54], [75], [83], [58], [83], [58]
- Probabilistic relations: [45], [45], [46], [45], [46], [51]
- Invariances: [38], [38], [38], [38], [41], [40], [42], [40], [84], [41], [42], [84], [84], [84]
- Human feedback: [85], [36], [86], [86], [86], [85], [87], [86], [86], [87], [86], [86], [4], [88], [89], [56], [88], [70]

6. CHALLENGES AND UNTAPPED RESEARCH OPPORTUNITIES

For a lengthy set of challenges and directions for knowledge-integrated informed AI approaches, we refer the reader to the last section of Laura von Rueden et al.'s survey paper [1]. Here we offer a brief collection of promising research directions and application areas we found to be untapped and worthwhile for advancing technology for national security. Each item stands as a gap in foundational research or an applied technology opportunity.

- The tradespace across knowledge-integrated informed AI techniques (employing different knowledge representations and integration paths) has barely been explored beyond drawing comparisons across dispersed research activities.
  One example paper from more than two decades ago discussed the difference between integration paths and suggested that "expanding the effective training set size ... is mathematically equivalent to incorporating the prior knowledge as a regularizer" [38]. Exploring the tradespace theoretically and empirically, considering each form of knowledge representation, their plausible integration paths, and how much importance to give to the knowledge vs. the data, will help lead to the development of techniques that are optimal for any given problem. Others have also suggested this as an important research direction, noting that "as the field of AI moves towards agreement on the need for combining the strengths of neural and symbolic AI, it should turn next to the question: what is the best representation for neurosymbolic AI?" [10] This sentiment can, of course, be extended beyond just neurosymbolic AI.

- Performance degradation caused by knowledge integration is largely unreported. For example, it is plausible that knowledge integration can be too constraining, or perhaps even misleading, precluding learning. Alternatively, knowledge integration could be a computational bottleneck that slows down training and/or learning. These kinds of issues are yet to be explored and understood.

- Knowledge representation in deep learning may not scale well; therefore there is a need to understand the limits and the reasons for those limits. In other words, "as we expand and exercise the symbolic part and address more challenging reasoning tasks, things might become more challenging... For example, among the biggest successes of symbolic AI are systems used in medicine, such as those that diagnose a patient based on their symptoms. These have massive knowledge bases and sophisticated inference engines. The current neurosymbolic [informed] AI isn't tackling problems anywhere nearly so big." [90] This observation too can be extended beyond just neurosymbolic AI.

- It is plausible that the integration of knowledge that constrains or guides learning would result in solutions that are more robust to adversarial attack. Analysis that specifically explores sensitivity to such counter-AI techniques, with and without knowledge integration, may show that knowledge-integrated informed AI can strongly address concerns about the robustness of today's AI.

- Verification and validation of engineered systems is an exponential problem in general, and becomes more so for systems that employ AI. It is plausible that explicitly informed techniques are easier to verify and validate. Developing proof of this hypothesis, along with specific methodology for testing informed systems, can help make strides toward safer AI.

- The recent dramatic successes of transformer-based, self-supervised models in the domains of both computer vision (CV) and natural language processing (NLP) may serve as a blueprint for developing foundational models for scientific domains that are informed by rich simulation frameworks. It is an open question whether a traditional, attention-based transformer architecture will be as impactful in problem spaces where informed AI techniques are especially promising, or whether such architectures and learning paradigms can be augmented to explicitly incorporate expert knowledge.

- Updating of knowledge that is explicitly integrated could enable AI systems that are more adaptable via continuous learning and continuous integration. This research direction would be valuable for learn-in-the-field (edge-AI) applications.
- There is a dependency on programming languages and packages that allow for knowledge integration. Advancements in the state of the art of knowledge-integrated informed AI therefore depend on continued efforts to develop languages and packages that allow all kinds of knowledge integration. This issue is discussed in more detail below.

A Language Problem

Among the many research projects examined here, the vast majority are implemented using one of a handful of Python-based libraries for deep learning. Over the past decade, Python has emerged as the language of choice among data scientists and machine learning researchers; its popularity has earned it a critical mass of users such that a rich, unparalleled ecosystem of powerful and polished tools, libraries, tutorials, and reference implementations has formed, with the likes of Microsoft and Facebook having invested heavily in it. It is this Python-centric ecosystem that has fostered the recent deep learning revolution in domains like computer vision and natural language processing. But is this sustainable? Is Python a language that can foster the expansion of novel applications of deep learning into other scientific fields?

The fact of the matter is that all scientific computing Python libraries, in one way or another, depend on a faster, "lower-level" language (like C) to handle all of the fast mathematical computations. Thus far, deep learning models for computer vision and natural language processing applications have not been so hindered by this "two language" paradigm; dense, convolutional, and recurrent neural networks can be framed in terms of vectorizable linear algebra routines, which can easily be dispatched to a lower-level language. Indeed, these are the cornerstones of libraries like PyTorch and TensorFlow. However, as the disciplines of scientific computing and machine learning meet and begin to cross-pollinate, there will be a need to support fast computation and automatic differentiation involving adaptive algorithms, non-standard algebras (e.g., quaternion algebra), and the ability to interface with other programming paradigms, such as probabilistic programming languages. In short: knowledge-integrated informed AI will likely lead us away from the land of multiplying massive matrices of real numbers.

Python, in its present form, can never address such gaps itself. It can only be used as an interface to other languages that can then attempt to solve these problems. This is why Jeremy Howard, creator of the fast.ai deep learning library (which is Python-based, of course), asserted:

"Python is not the future of machine learning. It can't be. It's so nicely hackable, but it is so frustrating to work with a language where you can't do anything fast enough unless you call out to some external CUDA or C code [...] It's fine for a lot of things, but not really in the deep learning or machine learning world. I really hope that Julia is successful [...] Most importantly, it's Julia all the way down, so you can write your GPU kernel in Julia." [91]

Google's JAX [92] is a Python library that is growing in popularity as a route toward efficient, differentiable programming; it provides an XLA-based (Accelerated Linear Algebra) just-in-time compiler that can operate on a subset of Python and NumPy to accelerate computations that would otherwise fall outside of the "vectorizable" suite of code patterns.
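As a concrete illustration of that pattern, consider the minimal sketch below (it is not drawn from any of the works cited in this report; the spring-energy function is a hypothetical stand-in for a richer physics model). A piece of domain knowledge is written as ordinary NumPy-style code, compiled through XLA with jax.jit, and differentiated with jax.value_and_grad so that it can sit inside a gradient-based learning loop:

import jax
import jax.numpy as jnp

def spring_energy(params, x):
    # Domain knowledge expressed as code: potential energy of a linear spring,
    # E = 1/2 * k * (x - x0)^2, with stiffness k and rest position x0 treated
    # as learnable parameters.
    k, x0 = params
    return 0.5 * k * (x - x0) ** 2

# value_and_grad differentiates with respect to the first argument (params);
# jit compiles the whole computation via XLA.
energy_and_grad = jax.jit(jax.value_and_grad(spring_energy))

params = (jnp.array(2.0), jnp.array(0.5))
value, grads = energy_and_grad(params, jnp.array(1.5))
print(value, grads)  # scalar energy and its gradient with respect to (k, x0)

This is, roughly, the same jit/grad pattern that differentiable simulators such as JAX MD [93] build on, which is what makes JAX attractive for knowledge-integrated informed AI.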
JAX makes it simple to write fast, composable NumPy-based functions, and it provides a rich automatic differentiation framework for accessing various high-order derivatives, Jacobian-vector products, etc., of these functions. As such, it is being used in some pioneering works in the knowledge-integrated informed AI space, such as the development of a differentiable molecular dynamics simulator [93] and a differentiable quantum chemistry simulator (Li, 2021). However, JAX is ultimately limited by its roots in the Python language; ordinary Python control-flow logic and in-place operations on data structures are not supported inside JAX's transformed functions, which limits its applicability to many domains of scientific computing and thus knowledge-integrated informed AI.

Beyond the world of Python, Julia is a programming language that is designed to be just-in-time compiled such that performant code can be written natively in Julia, while still being accessible and readable like Python. Julia is likely the language that boasts the most comprehensive automatic differentiation capabilities; for instance, the Zygote automatic differentiation library operates on Julia's abstract syntax tree itself to enable differentiable programming [67]. In this way, Julia is similar to JAX but exceeds it in a fundamental way: it is unimpeded by the two (or N) language problem. For instance, one can easily write highly performant Julia code that operates on exotic data types using non-standard algebras and then use this code in a larger machine learning framework. Additionally, Julia is designed around a multiple dispatch system [94], which fosters rich code reuse such that features like automatic differentiation, GPU dispatch, and fast linear algebra routines can easily be extended to one's novel mathematical framework. In terms of design, Julia is the most natural language for innovative research in the realm of knowledge-integrated informed AI. But the cost of leaving the rich ecosystem of Python-based machine learning and data science resources is difficult to overstate. It has both practical and far-reaching implications, ranging from Julia's relative lack of infrastructure and convenient tooling, to making one's research inaccessible to others in the field, to Julia being unknown or untested by the software security standards that are required for deploying solutions in secure environments.

Figure 40. A Programming Language Problem. A heat map that indicates the approaches to informed AI that are most likely to be obstructed by the "two language problem." Tightly incorporating simulations and models of dynamical systems into one's ML framework is often a substantial challenge in Python, whereas Julia is highly capable toward this end.

7. SUMMARY

AI that leverages both data and knowledge is continuously gaining interest as it delivers noteworthy and game-changing performance. Within the national security domain, where there is ample scientific and domain-specific knowledge that stands ready to be leveraged, and where purely data-driven AI can lead to serious unwanted consequences, this emerging class of AI techniques should be deliberately pursued. In this report, we discuss the motivations behind what we refer to as "knowledge-integrated informed AI", along with our findings from a thorough exploration of approaches that fall into this paradigm of AI.
Through illuminating examples of knowledge integrated into both deep learning and reinforcement learning pipelines, we shed light on the specific kinds of performance gains that knowledge-integrated informed AI has been shown to provide. We also discuss an apparent tradespace across variants, along with observed and prominent issues that suggest worthwhile future research directions. Most importantly, we discuss how these performance gains can benefit problems of national security, and we lay out immediate and longer-term steps we can take to appropriately adapt and apply knowledge-integrated informed AI techniques.

REFERENCES

[1] L. von Rueden, S. Mayer, K. Beckh, B. Georgiev, S. Giesselbach, R. Heese, B. Kirsch, M. Walczak, J. Pfrommer, A. Pick, et al., "Informed machine learning - a taxonomy and survey of integrating prior knowledge into learning systems," IEEE Transactions on Knowledge and Data Engineering, pp. 1-1 (2021), URL http://dx.doi.org/10.1109/TKDE.2021.3079836.

[2] X. Amatriain, "Data as prior/innate knowledge for deep learning models," (2019), URL https://xamat.medium.com/data-as-prior-innate-knowledge-for-deep-learning-models-23898363a71a.

[3] G. Hinton, L. Deng, D. Yu, G.E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T.N. Sainath, and B. Kingsbury, "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups," IEEE Signal Processing Magazine 29(6), 82-97 (2012).

[4] D. Silver, A. Huang, C. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, "Mastering the game of Go with deep neural networks and tree search," Nature 529(7587), 484-489 (2016), URL https://doi.org/10.1038/nature16961, cited by 5915.

[5] A. Krizhevsky, I. Sutskever, and G. Hinton, "ImageNet classification with deep convolutional neural networks," (2012), vol. 2, pp. 1097-1105, cited by 50743.

[6] R. Bommasani, D.A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, M.S. Bernstein, J. Bohg, A. Bosselut, E. Brunskill, et al., "On the opportunities and risks of foundation models," arXiv preprint arXiv:2108.07258 (2021).

[7] "New deep learning models require fewer neurons," URL https://www.csail.mit.edu/news/new-deep-learning-models-require-fewer-neurons.

[8] A. Zewe, "These neural networks know what they're doing," URL https://news.mit.edu/2021/cause-effect-neural-networks-1014.

[9] M. Lechner, R. Hasani, A. Amini, T. Henzinger, D. Rus, and R. Grosu, "Neural circuit policies enabling auditable autonomy," Nature Machine Intelligence 2(10), 642-652 (2020), URL https://doi.org/10.1038/s42256-020-00237-3, cited by 10.

[10] A. d'Avila Garcez and L.C. Lamb, "Neurosymbolic AI: The 3rd wave," (2020), URL https://arxiv.org/abs/2012.05876v2.

[11] G. Marcus, "The next decade in AI: Four steps towards robust artificial intelligence," (2020), URL https://arxiv.org/abs/2002.06177v2.

[12] J. Willard, X. Jia, S. Xu, M. Steinbach, and V. Kumar, "Integrating scientific knowledge with machine learning for engineering and environmental systems," (2020), URL https://arxiv.org/abs/2003.04919v4.

[13] A. Borghesi, F. Baldo, and M. Milano, "Improving deep learning models via constraint-based domain knowledge: a brief survey," (2020), URL https://arxiv.org/abs/2005.10691v1.

[14] R. Roscher, B. Bohn, M.F. Duarte, and J.
Garcke, \Explainable machine learning for scienti c insights and discoveries," (2020), URL https://arxiv.org/abs/1905.08883v3 . [15] K. Cranmer, J. Brehmer, and G. Louppe, \The frontier of simulation-based inference," Pro- ceedings of the National Academy of Sciences 117(48), 30055 (2020), URL http://www.pnas. org/content/117/48/30055.abstract . [16] R. Baker, J.M. Pe~ na, J. Jayamohan, and A. J erusalem, \Mechanistic models versus machine learning, a ght worth ghting for the biological community?" Biology Letters 14, 20170660 (2018), URL https://www.researchgate.net/publication/325172474_Mechanistic_ models_versus_machine_learning_a_fight_worth_fighting_for_the_biological_ community , found on ResearchGate. [17] L.C. Lamb, A. Garcez, M. Gori, M. Prates, P. Avelar, and M. Vardi, \Graph neural networks meet neural-symbolic computing: A survey and perspective," (2021), URL https://arxiv. org/abs/2003.00330v7 . [18] D.A. Cox, \Neurosymbolic hybrid ai," (2020), URL https://www.youtube.com/watch?v= AdMceC1R_sg . [19] W. Knight, \Ai can do great things|if it doesn't burn the planet," (2020), URL https: //www.wired.com/story/ai-great-things-burn-planet . [20] J. Devlin, M.W. Chang, K. Lee, and K. Toutanova, \Bert: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805 (2018). [21] T.B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al., \Language models are few-shot learners," arXiv preprint arXiv:2005.14165 (2020). [22] A. Radford, J.W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al., \Learning transferable visual models from natural language su- pervision," arXiv preprint arXiv:2103.00020 (2021). [23] SmugMug, \website photos on ickr," (2004), URL https://www.flickr.com . [24] W. Foundation, \Wikipedia: Online encyclopedia," (2001), URL https://www.wikipedia. org/ . [25] J. Howard, \Now anyone can train imagenet in 18 minutes," (2018), URL https://www.fast. ai/2018/08/10/fastai-diu-imagenet/ . [26] A. Radford, I. Sutskever, J.W. Kim, G. Krueger, and S. Agarwal, \Clip: Connecting text and images," (2021), URL https://www.fast.ai/2018/08/10/fastai-diu-imagenet/ . 66 [27] R. Bommasani and P.e.a. Liang, \On the opportunities and risks of foundation models," arXiv.org e-Print archive (2), 53{72. [28] D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, T. Lillicrap, K. Simonyan, and D. Hassabis, \A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play," Science 362(6419), 1140{1144 (2018). [29] J. Schrittwieser, I. Antonoglou, T. Hubert, K. Simonyan, L. Sifre, S. Schmitt, A. Guez, E. Lock- hart, D. Hassabis, T. Graepel, et al., \Mastering atari, go, chess and shogi by planning with a learned model," Nature 588(7839), 604{609 (2020). [30] \MuZero: Mastering Go, chess, shogi and Atari without rules," https://deepmind.com/ blog/article/muzero-mastering-go-chess-shogi-and-atari-without-rules , accessed: 2021-10-20. [31] Defense.gov, \Dod adopts ethical principles for arti cial intelligence," URL https: //www.defense.gov/Newsroom/Releases/Release/Article/2091996/dod%E2%80% 90adopts%E2%80%90ethical%E2%80%90principles%E2%80%90for%E2%80%90artificial% E2%80%90intelligence/ . [32] A. Garcez, M. Gori, L. Lamb, L. Sera ni, M. Spranger, and S. 
Tran, \Neural-symbolic comput- ing: An e ective methodology for principled integration of machine learning and reasoning," Journal of Applied Logics 6(4), 611{632 (2019), URL http://www.collegepublications. co.uk/downloads/ifcolog00033.pdf . [33] K.D. Julian, Safe and ecient aircraft guidance and control using neural networks , Ph.D. thesis, Stanford University (2020), URL http://purl.stanford.edu/tr210hw4978 . [34] C. Jiang, X. Liang, H. Xu, and L. Lin, \Hybrid knowledge routed modules for large- scale object detection," Neural information processing systems foundation (2018), vol. 2018- December, pp. 1552{1563, URL https://www.scopus.com/inward/record.uri?eid=2-s2. 0-85064840476&partnerID=40&md5=b62a5790dded2b7390c589ca3d8c3ce4 , cited By 21. [35] K.H. Lee, J. Li, A. Gaidon, and G. Ros, \Spigan: Privileged adversarial learn- ing from simulation," International Conference on Learning Representations, ICLR (2019), URL https://www.scopus.com/inward/record.uri?eid=2-s2.0-85083952211& partnerID=40&md5=cdb5c5d037e144b7f86f61ac953f9b0b , cited By 5. [36] J. Bernard, M. Zeppelzauer, M. Sedlmair, and W. Aigner, \Vial: a uni ed process for visual interactive labeling," Visual Computer 34(9), 1189{1207 (2018), URL https://doi.org/10. 1007/s00371-018-1500-3 . [37] D.L. Bergman, \Symmetry constrained machine learning," (2019), URL https://arxiv.org/ abs/1811.07051v2 . [38] P. Niyogi, F. Girosi, and T. Poggio, \Incorporating prior information in machine learning by creating virtual examples," Proceedings of the IEEE 86(11), 2196{2208 (1998), URL https: //ieeexplore.ieee.org/document/726787 . 67 [39] A. Cully, J. Clune, D. Tarapore, and J.B. Mouret, \Robots that can adapt like animals," Nature 521(7553), 503{507 (2015), URL https://doi.org/10.1038/nature14422 . [40] J. Ling, A. Kurzawski, and J. Templeton, \Reynolds averaged turbulence mod- elling using deep neural networks with embedded invariance," Journal of Fluid Me- chanics 807, 155{166 (2016), URL https://www.cambridge.org/core/article/ reynolds-averaged-turbulence-modelling-using-deep-neural-networks-with-embedded-invariance/ 0B280EEE89C74A7BF651C422F8FBD1EB . [41] S. Dieleman, J. De Fauw, and K. Kavukcuoglu, \Exploiting cyclic symmetry in convolutional neural networks," International Machine Learning Society (IMLS) (2016), vol. 4, pp. 2799{ 2808, URL http://proceedings.mlr.press/v48/dieleman16.pdf , cited By 40. [42] D. Worrall, S. Garbin, D. Turmukhambetov, and G. Brostow, \Harmonic deep: Networks translation and rotation equivariance," Institute of Electrical and Electronics Engineers Inc. (2017), vol. 2017-January, pp. 7168{7177, URL https://doi.org/10.1109/CVPR.2017.758 , cited By 76. [43] K. Marino, R. Salakhutdinov, and A. Gupta, \The more you know: Using knowledge graphs for image classi cation," (2017), URL https://arxiv.org/abs/1612.04844v2 . [44] Y. Lu, M. Rajora, P. Zou, and S.Y. Liang, \Physics-embedded machine learning: Case study with electrochemical micro-machining," Machines 5(1) (2017), URL https://www.mdpi.com/ 2075-1702/5/1/4 . [45] F. Scarselli, M. Gori, A.C. Tsoi, M. Hagenbuchner, and G. Monfardini, \The graph neural network model," IEEE Transactions on Neural Networks 20(1), 61{80 (2009), URL https: //doi.org/10.1109/TNN.2008.2005605 . [46] K. Yoon, R. Liao, Y. Xiong, L. Zhang, E. Fetaya, R. Urtasun, R. Zemel, and X. Pitkow, \Inference in probabilistic graphical models by graph neural networks," (2019), URL https: //arxiv.org/abs/1803.07710v5 . [47] M. Lutter, C. Ritter, and J. 
Peters, \Deep lagrangian networks: Using physics as model prior for deep learning," (2019), URL https://arxiv.org/abs/1907.04490v1 . [48] M. Hasanbeig, N. Yogananda Jeppu, A. Abate, T. Melham, and D. Kroening, \Deep- synth: Automata synthesis for automatic task segmentation in deep reinforcement learning," (2021), vol. 35, pp. 7647{7656, URL https://ojs.aaai.org/index.php/AAAI/article/ view/16935 . [49] N. Muralidhar, M.R. Islam, M. Marwah, A. Karpatne, and N. Ramakrishnan, \Incorporating prior domain knowledge into deep neural networks," in 2018 IEEE International Conference on Big Data (Big Data) (2018), pp. 36{45, URL https://dx.doi.org/10.1109/BigData. 2018.8621955 . [50] E. de Bezenac, A. Pajot, and P. Gallinari, \Deep learning for physical processes: Incorporating prior scienti c knowledge," (2018), URL https://arxiv.org/abs/1711.07970v2 . 68 [51] Y. Yang and P. Perdikaris, \Physics-informed deep generative models," (2018), URL https: //arxiv.org/abs/1812.03511v1 . [52] K. Leung, N. Ar echiga, and M. Pavone, \Back-propagation through signal temporal logic speci- cations: Infusing logical structure into gradient-based methods," in Workshop on Algorithmic Foundations of Robotics (2020), URL 2008.00097 . [53] Z. Che, D. Kale, W. Li, T. Bahadori, and Y. Liu, \Deep computational phenotyping," in The 21th ACM SIGKDD International Conference (2015), vol. 2015-August, pp. 507{516, URL https://doi.org/10.1145/2783258.2783365 . [54] J. Bian, B. Gao, and T.Y. Liu, \Knowledge-powered deep learning for word embedding," in T. Calders, F. Esposito, E. H ullermeier, and R. Meo (eds.), Machine Learning and Knowledge Discovery in Databases , Berlin, Heidelberg: Springer Berlin Heidelberg (2014), vol. 8724 LNAI, pp. 132{148, URL https://doi.org/10.1007/978-3-662-44848-9_9 . [55] Y. Du, Z. Liu, H. Basevi, A. Leonardis, B. Freeman, J. Tenenbaum, and J. Wu, \Learning to exploit stability for 3d scene parsing," in S. Bengio, H. Wallach, H. Larochelle, K. Grau- man, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems , Curran Associates, Inc. (2018), vol. 31, URL https://proceedings.neurips.cc/ paper/2018/file/43feaeeecd7b2fe2ae2e26d917b6477d-Paper.pdf . [56] P. Schramowski, W. Stammer, S. Teso, A. Brugger, X. Shao, H.G. Luigs, A.K. Mahlein, and K. Kersting, \Right for the wrong scienti c reasons: Revising deep networks by interacting with their explanations," (2020), URL https://arxiv.org/abs/2001.05371v1 . [57] R. King, O. Hennigh, A. Mohan, and M. Chertkov, \From deep to physics-informed learning of turbulence: Diagnostics," (2018), URL https://arxiv.org/abs/1810.07785v2 . [58] Y. Fang, K. Kuan, J. Lin, C. Tan, and V. Chandrasekhar, \Object detection meets knowledge graphs," in Proceedings of the Twenty-Sixth International Joint Conference on Arti cial Intelli- gence, IJCAI-17 (2017), pp. 1661{1667, URL https://doi.org/10.24963/ijcai.2017/230 . [59] J. Pfrommer, C. Zimmerling, J. Liu, L. K arger, F. Henning, and J. Beyerer, \Optimisation of manufacturing process parameters using deep neural networks as surrogate models," Procedia CIRP 72, 426{431 (2018), URL https://www.sciencedirect.com/science/article/pii/ S221282711830146X , 51st CIRP Conference on Manufacturing Systems. [60] J. Halverson, B. Nelson, and F. Ruehle, \Branes with brains: exploring string vacua with deep reinforcement learning," Journal of High Energy Physics 2019(6), 1{60 (2019). [61] C. Rackauckas, Y. Ma, J. Martensen, C. Warner, K. Zubov, R. Supekar, D. Skinner, A. Ra- madhan, and A. 
Edelman, \Universal di erential equations for scienti c machine learning," arXiv preprint arXiv:2001.04385 (2020). [62] R. Chen, Y. Rubanova, J. Bettencourt, and D. Duvenaud, \Neural ordinary di erential equa- tions," Neural information processing systems foundation (2018), vol. 2018-December, pp. 6571{6583, cited By 354. 69 [63] R. Dandekar and G. Barbastathis, \Quantifying the e ect of quarantine control in covid-19 infectious spread using machine learning," medRxiv (2020), URL https://www.medrxiv.org/ content/early/2020/04/06/2020.04.03.20052084 . [64] A. Bills, S. Sripad, W.L. Fredericks, M. Guttenberg, D. Charles, E. Frank, and V. Viswanathan, \Universal battery performance and degradation model for electric aircraft," (2020), URL https://arxiv.org/abs/2008.01527v1 . [65] S.L. Brunton, J.L. Proctor, and J.N. Kutz, \Discovering governing equations from data by sparse identi cation of nonlinear dynamical systems," Proceedings of the National Academy of Sciences 113(15), 3932{3937 (2016), URL https://www.pnas.org/content/113/15/3932 . [66] M. Raissi, P. Perdikaris, and G. Karniadakis, \Physics-informed neural networks: A deep learn- ing framework for solving forward and inverse problems involving nonlinear partial di erential equations," Journal of Computational Physics 378, 686{707 (2019), cited By 697. [67] C. Rackauckas, A. Edelman, K. Fischer, M. Innes, E. Saba, V.B. Shah, and W. Tebbutt, \Generalized physics-informed learning through language-wide di erentiable programming." inAAAI Spring Symposium: MLPS (2020). [68] I. Mordatch and E. Todorov, \Combining the bene ts of function approximation and trajectory optimization." in Robotics: Science and Systems (2014), vol. 4. [69] J. Carius, F. Farshidian, and M. Hutter, \Mpc-net: A rst principles guided policy search," IEEE Robotics and Automation Letters 5(2), 2897{2904 (2020). [70] P. Christiano, J. Leike, T. Brown, M. Martic, S. Legg, and D. Amodei, \Deep reinforcement learning from human preferences," Neural information processing systems foundation (2017), vol. 2017-December, pp. 4300{4308, URL https://www.scopus.com/inward/record.uri? eid=2-s2.0-85046999643&partnerID=40&md5=5ffef52e69ce75915865634de63042ba , cited By 148. [71] Z. Li, N. Kovachki, K. Azizzadenesheli, B. Liu, K. Bhattacharya, A. Stuart, and A. Anand- kumar, \Fourier neural operator for parametric partial di erential equations," (2020), URL https://arxiv.org/abs/2010.08895v1 . [72] R. Ramamurthy, C. Bauckhage, R. Sifa, J. Sch ucker, and S. Wrobel, \Leveraging domain knowledge for reinforcement learning using mmc architectures," in I.V. Tetko, V. K urkov a, P. Karpov, and F. Theis (eds.), Arti cial Neural Networks and Machine Learning { ICANN 2019: Deep Learning , Cham: Springer International Publishing (2019), pp. 595{607, URL http://dx.doi.org/10.1007/978-3-030-30484-3_48 . [73] R. Stewart and S. Ermon, \Label-free supervision of neural networks with physics and domain knowledge," (2016), URL https://arxiv.org/abs/1609.05566v1 . [74] R.T. Icarte, T.Q. Klassen, R. Valenzano, and S.A. McIlraith, \Reward machines: Exploiting reward function structure in reinforcement learning," (2020), URL https://arxiv.org/abs/ 2010.03950v1 . 70 [75] X. Li, Z. Serlin, G. Yang, and C. Belta, \A formal methods approach to interpretable reinforcement learning for robotic planning," Science Robotics 4(37) (2019), URL https: //doi.org/10.1126/scirobotics.aay6276 , cited By 13. [76] G.G. Towell and J.W. 
Shavlik, \Knowledge-based arti cial neural networks," Arti cial Intel- ligence 70(1), 119{165 (1994), URL https://www.sciencedirect.com/science/article/ pii/0004370294901058 . [77] J. Xu, Z. Zhang, T. Friedman, Y. Liang, and G.V. den Broeck, \A semantic loss function for deep learning with symbolic knowledge," (2018), URL https://arxiv.org/abs/1711. 11157v2 . [78] F. Wang and Q.J. Zhang, \Knowledge based neural models for microwave design," in IEEE MTT-S International Microwave Symposium Digest (1997), vol. 2, pp. 627{630, URL https: //doi.org/10.1109/22.643839 . [79] J. Ingraham, A.J. Riesselman, C. Sander, and D.S. Marks, \Learning protein structure with a di erentiable simulator," in 7th International Conference on Learning Representa- tions, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019 , OpenReview.net (2019), URL https://openreview.net/forum?id=Byg3y3C9Km . [80] C. Rackauckas, A. Edelman, K. Fischer, M. Innes, E. Saba, V. Shah, and W. Tebbutt, \Gener- alized physics-informed learning through language-wide di erentiable programming," CEUR- WS (2020), vol. 2587, cited By 0. [81] M. Raissi, P. Perdikaris, and G.E. Karniadakis, \Physics informed deep learning (part ii): Data-driven discovery of nonlinear partial di erential equations," (2017), URL https: //arxiv.org/abs/1711.10566v1 . [82] J. Degrave, M. Hermans, J. Dambre, and F. wy els, \A di erentiable physics engine for deep learning in robotics," (2018), URL https://arxiv.org/abs/1611.01652v2 . [83] G. Glava s and I. Vuli c, \Explicit retro tting of distributional word vectors," in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , Melbourne, Australia: Association for Computational Linguistics (2018), pp. 34{45, URL http://dx.doi.org/10.18653/v1/P18-1004 . [84] P. Toth, D.J. Rezende, A. Jaegle, S. Racani ere, A. Botev, and I. Higgins, \Hamiltonian gen- erative networks," (2020), URL https://arxiv.org/abs/1909.13789v2 . [85] S. Das, D. Cashman, R. Chang, and A. Endert, \Beames: Interactive multimodel steering, selection, and inspection for regression tasks," IEEE Computer Graphics and Applications 39(05), 20{32 (2019), URL https://doi.ieeecomputersociety.org/10.1109/MCG.2019. 2922592 . [86] T. Hester, M. Vecerik, O. Pietquin, M. Lanctot, T. Schaul, B. Piot, D. Horgan, J. Quan, A. Sendonaris, G. Dulac-Arnold, I. Osband, J. Agapiou, J.Z. Leibo, and A. Gruslys, \Deep q-learning from demonstrations," (2017), URL https://arxiv.org/abs/1704.03732v4 . 71 [87] P. Tambwekar, A. Silva, N. Gopalan, and M. Gombolay, \Interpretable policy speci cation and synthesis through natural language and rl," (2021). [88] G. Warnell, N. Waytowich, V. Lawhern, and P. Stone, \Deep tamer: Inter- active agent shaping in high-dimensional state spaces," AAAI press (2018), pp. 1545{1553, URL https://www.researchgate.net/publication/320163505_Deep_TAMER_ Interactive_Agent_Shaping_in_High-Dimensional_State_Spaces , cited By 34. [89] R. Kaplan, C. Sauer, and A. Sosa, \Beating atari with natural language guided reinforcement learning," (2017), URL https://arxiv.org/abs/1704.05539v1 . [90] A. Ananthaswamy, \Ai's next big leap," (2020), URL https://knowablemagazine.org/ article/technology/2020/what-is-neurosymbolic-ai . [91] L. Biewald, L. Shukla, and C. Sharp, \The story of fast.ai and why python is not the future of ml with jeremy howard," URL https://www.youtube.com/watch?v=t2V2kf2gNnI&t=2430s . [92] J. Bradbury, R. Frostig, P. Hawkins, M.J. Johnson, C. Leary, D. Maclaurin, and S. 
Wanderman-Milne, "JAX: composable transformations of Python+NumPy programs, 2018," URL http://github.com/google/jax 4, 16 (2020).

[93] S. Schoenholz and E.D. Cubuk, "JAX MD: a framework for differentiable physics," Advances in Neural Information Processing Systems 33 (2020).

[94] S. Karpinski, "The unreasonable effectiveness of multiple dispatch," URL https://www.youtube.com/watch?v=kc9HwsxE1OY.
8861e964-fc99-47bb-a13e-279269a23452
trentmkelly/LessWrong-43k
LessWrong
Orthogonality: action counterfactuals It seems that some of the problems with orthogonality go away if you use the AI's actions to define the counterfactuals. In the traditional model "Press" meant a button is pressed that compels the AI to change utility u→v, and it gets a reward E(u|u↛v)−E(v|u→v). Problem was, this could cause the AI to compel people to press the button. Instead I propose that u→v is an action that the AI can choose to take, and that "Press" allows the AI to take that action. Then it would get rewarded with max_a E(u|a≠u→v)−E(v|u→v), calculated by the AI according to counterfactuals on its own actions (somehow; details left to the student to fill in). We could add a small (or infinitesimal) ϵ on top of that so that the change u→v is the decision the AI reaches. This makes it likely to encourage "Press", but only as a tie-breaker decision. Why is this immune to the usual attempts by the AI to cause (or deny) "Press"? Simply because if the AI decided not to u→v, then it only loses an ϵ. Thus causing "Press" to happen (which opens the option u→v) will only gain it an ϵ at most.
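To make the comparison easier to see, here is one way of writing the two compensation schemes side by side (this is just a restatement of the formulas above, with a ranging over the AI's available actions and ϵ the small tie-breaking bonus):

$$R_{\text{old}} = E(u \mid u \not\to v) - E(v \mid u \to v)$$
$$R_{\text{proposed}} = \max_{a \neq (u \to v)} E(u \mid a) - E(v \mid u \to v) + \epsilon$$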
f9c5e2e1-78f2-43c6-b30a-d83125b961cc
trentmkelly/LessWrong-43k
LessWrong
Thoughts on the OpenAI alignment plan: will AI research assistants be net-positive for AI existential risk? Note: I really appreciate the work that the OpenAI alignment team put into their alignment plan writeup and related posts, especially Jan Leike, the leader of that team. I believe open discussions about alignment approaches make it more likely that the whole community will be able to find flaws in their own plans and unappreciated insights, resulting in better alignment plans over time. Summary: OpenAI’s alignment plan acknowledges several key challenges of aligning powerful AGI systems, and proposes several good ideas. However, the plan fails to sufficiently address: 1. The dual-use nature of AI research assistants and the high risk that such assistants will improve capabilities more than alignment research in ways that net-increase AI existential risk. 2. The likely challenges involved in both generating and evaluating AI alignment research using AI research assistants. It seems plausible that generating key insights about the alignment problem will not be possible before the development of dangerously powerful AGI systems. 3. The nature and difficulty of the alignment problem. There are substantial reasons why AI systems that pass all tests in development may not stay safe once able to act in the world. There are substantial risks from goal misgeneralization, including deceptive misalignment, made worse by potential rapid increases in capabilities that are hard to predict. Any good alignment plan should address these problems, especially since many of them may not be visible until an AI system already has dangerous capabilities. The dual-use nature of AI research assistants and whether these systems will differentially improve capabilities and net-increase existential risk There has been disagreement in the past about whether “alignment” and “capabilities” research are a dichotomy. Jan Leike has claimed that they are not always dichotomous, and this is important because lots of capabilities insights will be useful for alignment, so the picture is not as wo
90fb4b67-f197-47ec-b3d6-5d17a4df0577
StampyAI/alignment-research-dataset/blogs
Blogs
MIRI Has Moved! For the past several months, MIRI and its child organization [CFAR](http://rationality.org/) have been working from a much-too-small office on the outskirts of Berkeley. At the end of June, MIRI and CFAR took over the 3rd floor of [2030 Addison St.](https://www.google.com/maps/preview#!q=2030+Addison+St%2C+Berkeley) in downtown Berkeley, which has sufficient space for both organizations. Our new office is 0.5 blocks from the Downtown Berkeley BART exit at Shattuck & Addison, and 2 blocks from the UC Berkeley campus. Here’s a photo of the campus from our roof: ![view of campus from roof (500px)](http://intelligence.org/wp-content/uploads/2013/07/view-of-campus-from-roof-500px.jpg) The proximity to UC Berkeley will make it easier for MIRI to network with Berkeley’s professors and students. Conveniently, UC Berkeley is ranked [5th in the world](http://www.usnews.com/education/worlds-best-universities-rankings/best-universities-mathematics) in mathematics, and [1st in the world](http://grad-schools.usnews.rankingsandreviews.com/best-graduate-schools/top-science-schools/logic-rankings) in mathematical logic. Sharing an office with CFAR carries many benefits for both organizations: 1. CFAR and MIRI can “flex” into each other’s space for short periods as needed, for example when MIRI is holding a week-long [research workshop](https://intelligence.org/get-involved/#workshop). 2. We can share resources (printers, etc.). 3. Both organizations can benefit from interaction between our two communities. Getting the new office was a team effort, but the person *most* responsible for this success was MIRI Deputy Director Louie Helm. Note that MIRI isn’t yet able to accommodate “drop in” visitors, as we keep irregular hours throughout the week. So if you’d like to visit, please [contact us](https://intelligence.org/team/) first. We retain 2721 Shattuck Ave. #1023 as an alternate mailing address. The post [MIRI Has Moved!](https://intelligence.org/2013/07/08/miri-has-moved/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
5f3c138f-5ab1-4996-a7e8-b9ed402c37d9
trentmkelly/LessWrong-43k
LessWrong
Re: Fierce Nerds Paul Graham just published an article titled Fierce Nerds. It's the most precise description of me I've ever read. > Most people think of nerds as quiet, diffident people. In ordinary social situations they are — as quiet and diffident as the star quarterback would be if he found himself in the middle of a physics symposium. And for the same reason: they are fish out of water. But the apparent diffidence of nerds is an illusion due to the fact that when non-nerds observe them, it's usually in ordinary social situations. In fact some nerds are quite fierce. This is absolutely true. When I'm in my element I own the room. I am so bored in ordinary social situations I used them to drill my Chinese handwriting for three years. > The fierce nerds are a small but interesting group. They are as a rule extremely competitive — more competitive, I'd say, than highly competitive non-nerds. Competition is more personal for them. Partly perhaps because they're not emotionally mature enough to distance themselves from it, but also because there's less randomness in the kinds of competition they engage in, and they are thus more justified in taking the results personally. Yup. > Fierce nerds also tend to be somewhat overconfident, especially when young. It might seem like it would be a disadvantage to be mistaken about one's abilities, but empirically it isn't. Up to a point, confidence is a self-fulfilling prophecy. Overconfidence. That's familiar. > Another quality you find in most fierce nerds is intelligence. Not all nerds are smart, but the fierce ones are always at least moderately so. If they weren't, they wouldn't have the confidence to be fierce. Yup. > There's also a natural connection between nerdiness and independent-mindedness. It's hard to be independent-minded without being somewhat socially awkward, because conventional beliefs are so often mistaken, or at least arbitrary. No one who was both independent-minded and ambitious would want to waste the effort i
1adc37da-70a4-4e69-b4cc-f48677ef2282
trentmkelly/LessWrong-43k
LessWrong
I'm hiring a Research Assistant for a nonfiction book on AI! I'm Garrison Lovely, a freelance journalist with work in NYT, Nature, BBC, TIME, and elsewhere. I report on the intersection of economics, geopolitics, and artificial intelligence. I'm also a Reporter in Residence at the Omidyar Network and publisher of Obsolete, a fast-growing Substack on AI. Overview What: Support the completion of a forthcoming general-audience nonfiction book on AI. You'll conduct research, draft chapters, edit existing content, and help transform an incomplete manuscript into a polished final product for publication by OR Books and The Nation Magazine. Why: The project, Obsolete: Power, Profit, and the Race to Build Machine Superintelligence, has the potential to be the go-to AI risk book in the post-ChatGPT era. You’ll also get the opportunity to work closely with an experienced journalist and writer who has a track record of publishing work in leading outlets (NYT, Nature, BBC, TIME, Foreign Policy, etc.). Start Date: As soon as possible Employment Type: Fixed term (6 months, with possibility of extension); full time Location: Remote, provided at least three hours overlap between 10am-6pm ET. Compensation: $37,485-$65,442 total compensation for the 6-month period, depending on experience and location Applications will be considered on a rolling basis, but please apply as soon as you can. Priority may be given to applicants who can move through the process sooner. Apply here Full job description here
5baf2629-8950-4094-a539-79958dfaaeea
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
AXRP Episode 23 - Mechanistic Anomaly Detection with Mark Xu [Google Podcasts link](https://podcasts.google.com/feed/aHR0cHM6Ly9heHJwb2RjYXN0LmxpYnN5bi5jb20vcnNz/episode/NDUzZTcwMzUtNThiOS00Y2NjLTkzOGUtNzJhZWM0MmFlNTA1) Is there some way we can detect bad behaviour in our AI system without having to know exactly what it looks like? In this episode, I speak with Mark Xu about mechanistic anomaly detection: a research direction based on the idea of detecting strange things happening in neural networks, in the hope that that will alert us of potential treacherous turns. We both talk about the core problems of relating these mechanistic anomalies to bad behaviour, as well as the paper “Formalizing the presumption of independence”, which formulates the problem of formalizing heuristic mathematical reasoning, in the hope that this will let us mathematically define “mechanistic anomalies”. Topics we discuss: * [Mechanistic anomaly detection](#mad) + [Are all bad things mechanistic anomalies, and vice versa?](#type-1-2-errors) + [Are responses to novel situations mechanistic anomalies?](#new-situations) + [Formalizing “for the normal reason, for any reason”](#for-the-normal-reason-for-any-reason) + [How useful is mechanistic anomaly detection?](#how-useful-is-mad) * [Formalizing the presumption of independence](#heuristic-arguments) + [Heuristic arguments in physics](#heuristic-arguments-in-physics) + [Difficult domains for heuristic arguments](#difficult-domains) + [Why not maximum entropy?](#y-no-max-ent) + [Adversarial robustness for heuristic arguments](#adv-robustness) + [Other approaches to defining mechanisms](#other-approaches) * [The research plan: progress and next steps](#research-plan) * [Following ARC’s research](#following-arc) **Daniel Filan:** Hello, everybody. In this episode, I’ll be speaking with Mark Xu. Mark is a researcher at the [Alignment Research Center](https://www.alignment.org/) where he works on mechanistic anomaly detection, which we’ll be speaking about during this interview. He’s also a co-author of the paper [Formalizing the Presumption of Independence](https://arxiv.org/abs/2211.06738) which is about heuristic arguments, a way of formalizing, essentially, what a mechanism is. For links to what we’re discussing, you can check the description of this episode and you can read the transcript at axrp.net. Mechanistic anomaly detection ----------------------------- **Daniel Filan:** All right. So I guess we’re going to be talking about mechanistic anomaly detection. First of all, what is that? **Mark Xu:** Yeah. So I think we can start with a more basic thing which is, what is anomaly detection? So there’s this traditional problem where, say, you have some factory making some widgets, and you want to decide whether or not a widget is a bad widget; and you don’t really know what a bad widget is, but you do know that the average widget is a good widget. So you can do something where you measure some various parameters of your average widget, or you measure some various parameters of every widget, and then you maybe do something simple where you’re like, suppose a widget is more than a few standard deviations along two or more of my measurements, we’re going to call that an “anomalous widget” and we’re going to not sell that sort of widget. I think that’s the standard conception of what an anomaly is, and what it means to detect anomalies is to get rid of things that you think are not good or anomalous. 
And then, there’s another thing that we want to do which is… So you have some system that, instead of making widgets, is doing certain kinds of reasoning and we’re interested in this general question of… So we have our ML system and we have a dataset on which… it’s not exactly making widgets that we think are good on average, but it’s doing reasoning that we think is good on average, because of our ability to create the training set and measure various properties of the training set, etc., etc. And then, we’re worried about situations where, say, off-distribution, it starts doing things that we don’t think are good but in ways where we can’t quite measure the thing that’s not good. And so, we want to do anomaly detection but we can’t quite do it in terms of measurements that we can actually make, so we want to do it with respect to the kinds of reasoning that our system is performing, and we call that mechanistic anomaly detection; where the idea is instead of being anomalous in terms of what the system actually does, it’s anomalous in terms of the mechanism it uses to do the thing that it’s doing, if that made any sense? **Daniel Filan:** Yeah. So I guess my understanding is: you have your AI system and in the lab, you’ve tested it and it didn’t do anything obviously horrible. And so, you’re sort of ready to release it, but you’re like, “Wait. When my AI is in the real world, I’m going to detect when it does any kind of reasoning that’s really different to the kind of reasoning that it did before. And then if it does a really different kind of reasoning, I guess I’m going to try to shut it down or something.” **Mark Xu:** Yeah. I think the thing that happens after that is a bit confusing. The standard thing, I think of is like, “We’re going to take it offline and retrain it on the kind of thing where it did a bad thing and try to get it to not do that bad thing anymore,” but I think that’s roughly right. I’m tempted to try to explain it again because I feel like I didn’t quite explain it that well. **Daniel Filan:** Sure. **Mark Xu:** Yeah. So I think the situation is: you have some ML system, and you have some set of scenarios in which it’s doing a thing that you think isn’t going to be catastrophic for human interests – like, it has murdered zero humans on a particular training set. And then, you’re like, “Okay. That gives me some amount of belief that it’s not going to murder any humans off-distribution.” But the problem is, I don’t have this guarantee that the way it acts off-distribution is in any way the same as the way it acts on-distribution, where maybe the standard deceptive alignment example is where your AI is constantly scheming and checking, “Will I get away with murder? Will I get away with murder?” And then on training, the answer’s always no and then off training and on deployment it finds a situation where it could get away with murder and then it murders all the humans and that is no good. And so, the thing we want mechanistic anomaly detection to be able to do is to provide some sort of guarantee that the way that your system is generalizing off-distribution is employing the same kinds of reasoning or the same kinds of mechanisms that it’s employing on-distribution. 
And so, your ability to measure whether or not your AI murders all humans applies off-distribution now because you have this… You fingerprinted the kinds of reasoning that it’s doing on-distribution and you’re ensuring that it’s doing the same kinds of reasoning off-distribution or if it’s not, you’re taking it offline. And so long as it’s doing the same kinds of reasoning off-distribution as it’s doing on-distribution and you have this ability to measure how good the reasoning it’s doing on-distribution in terms of measuring the consequences of the way it’s acting, then hopefully, you have some sort of guarantee that off-distribution, it won’t murder all the humans. **Daniel Filan:** Okay. And the reason this is better than just checking if it in fact murders all the humans is hopefully the reasoning: first, you plan to kill all the humans and then you actually do it and [we can] anomaly-detect the first one and then just stop that from happening before the plan actually proceeds. Is that roughly right? **Mark Xu:** I think that’s one reason. I think another reason is you can imagine situations where your AI murders all the humans and there’s no sensor reading that would tell you this fact. So it’s simultaneously conspired to hack all the heartbeat sensors to make them keep pretending that humans are alive and hack all the cameras to make them keep showing happy, healthy humans, etc., etc. And so, even if in theory all the humans are dead, there’s no way that you can tell that all the humans are dead. And so- **Daniel Filan:** Where ‘you’ is, I guess, the computers? **Mark Xu:** Yeah. Or just all sensors or… suppose that there was one human still alive, trapped in a box and they had access to every sensor that humans ever made, they would be unable to tell whether or not the humans were alive or dead, and that’s no good, and the hope is we can avert those kinds of outcomes also by checking whether or not the AI “knew that the humans were dead” because it was employing kinds of reasoning that involved stabbing all the humans versus not stabbing all the humans, or kinds of reasoning that implied that stabbing all the humans was good versus kinds of reasoning that implied that stabbing all the humans was not good. **Daniel Filan:** So I guess what I’m hearing is that it seems like the hope is to solve two problems that, I guess, the alignment community has separated, where one problem is: you can set up some scheme for rewarding your AI for various measurable properties of the environment, like “on this camera feed, does it look like all the humans are alive? Do the humans write down on pieces of paper, ‘I like what this AI is doing’?” And [one problem](https://www.alignmentforum.org/tag/inner-alignment) is your AI is just never motivated by those things and later, it just disregards all those signals and just does whatever it wants, even though the humans don’t like it and even though it stops humans from appearing to be alive. 
And then, [another way your AI could go wrong](https://www.alignmentforum.org/tag/outer-alignment) is that it is motivated to keep all those signals going but it does so in a way that’s really different from the past where it’s like: instead of the cameras showing happy humans and humans writing down “I like what this AI is doing” because humans are alive and thrilled by the AI, actually, what’s happened is the AI built little humanoid robots and the pieces of paper saying, “I like what the AI is doing,” are now written by the AI holding a pen instead of humans, or something. And so, to avert that scenario, one of your signals is like, “Are you doing really weird reasoning [compared to] previously?” and that just cuts out what some people might call an [outer alignment](https://www.alignmentforum.org/tag/outer-alignment) failure. Does that sound right? **Mark Xu:** Yeah. I think that sounds roughly right and I think my hope is at least that these two problems are not as different as they’ve historically been conceptualized as, or something, and they’re both linked by this common thread of: the mechanism for how your AI is selecting which actions to take is no longer the same mechanism that it was using during the training distribution. It’s differed in some substantial way and we should be able to detect this via not specifically checking whether or not it’s “being deceptive” or “hacking the sensors”, but rather just checking that it’s no longer doing the type of thing it was doing during training, which is maybe an easier thing to do, maybe not, who knows? ### Are all bad things mechanistic anomalies, and vice versa? **Daniel Filan:** Yeah. So I have some confusion about this where… So the assumption in this framework is that a bad thing is happening basically, roughly, if and only if there’s a anomalous mechanism the AI is using for reasoning. So for a big fraction of the cases where the AI is doing a bad thing, you want to pick up on “There’s some anomalous mechanism,” and also when you do pick up on some anomalous mechanism, that shouldn’t just happen all the time for okay reasons, it should be a decent signal that the AI’s doing a bad thing. **Mark Xu:** Yep. **Daniel Filan:** And whether it’s true or not depends on what a “mechanism” is. But if I informally think about it, I think there are a few ways that can fail. So suppose during training, whenever your AI makes a decision, it forms this plan of, “Okay. Day 1, I’m going to play nice. Day 2, I’m going to play nice. Day 3, I’m going to play nice. Day 50, I’m going to take over the world and do whatever. What day is it currently? Day 3? Okay. Output play nice” or something. And then, it strikes me that it’s not necessarily… on day 50 when it tries to take over, I don’t know that that’s going to be a mechanistic anomaly. And then also, sometimes new things happen in the world or you tell your AI, “Hey, I want humans to be living healthy and happy and long,” and in the good world the AI’s like, “Ah, I’m going to invent some new drugs and keep you safe from invasions and also improve your ability to tell whether the AI’s doing a good job or not,” and that seems like a different thing than what’s been happening previously, and maybe that counts as an anomalous mechanism. So I’m not sure how you’re going to rule in the actual bad behavior without ruling out a bunch of good behavior. **Mark Xu:** Yeah. So I think this is, in some sense, the heart of the issue. 
So I think the thing I want to do now is talk through some specific examples of what kinds of things, I hope, are the same and different mechanisms and why I think that they are the same and different mechanisms and how we hope to be able to do things like maintain our AI’s ability to take novel defensive actions in the world while still ruling out cases that are deceptive alignment-y or “murder all the humans because you saw something novel”, etc., etc. So to go to your first example, if you imagine an AI every day doing this check that’s like, “Is it less than day 50? If it’s not less than day 50, then I’m going to murder all the humans,” and suppose that during training we only ever train up to day 40 or whatever and then during deployment, we deploy our AI to day 60. Then naively, this AI will behave perfectly well during training and then it’ll murder us on day 50 during deployment. So I think the thing that I want to happen here is… what I mean by ‘mechanism’ is closely related to what ARC calls a heuristic argument, but it’s going to be something like, suppose someone gave you the system and it does something during training, and you want them to give you an argument that makes it obvious to you why it does that thing. And so, you’re going to be looking at the system and the system is going to be first… So empirically during training, the system never murders any humans and you’re going to look at the system and be like, “Hey, but the first thing it does is check whether or not the day is less than 50 and then if it isn’t, it’s going to murder all the humans. What’s up with that? Why does it never murder any humans?” And then someone has to argue to you, “Well, that’s because during training, this check always returns true. It’s always less than Day 50 during training,” and you’re going to be like, “Okay.” So as a lemma in your argument to me for why the system always behaves well during training, you have asserted to me that this check always returns true. And so, my mechanism for why it’s behaving the way it is during training is going to be something like, “This check always returns true. Therefore, it never murders any humans. Therefore, it does whatever it does during training and no humans are murdered and it gets high reward.” And then during deployment, it’s going to be day 50 and this check is going to return false. And then hopefully, you’re going to be looking at this check and being like, “Hey, my mechanism said that this check should have always returned true but this time it returned false.” That’s no longer the same mechanism because you had this check that was essentially constant during training and then off training, it became not the same constant as it was during training. And so, you’re just suspicious of this or you’re going to call it a different mechanism or something. **Daniel Filan:** Okay. And the idea is that you flag this even if you can’t tell that the thing that happens post-day-50 is “murder all the humans”; just something different happens when this check returns true and it used to always return false, or maybe it’s the other way around. **Mark Xu:** Yeah. And so, our hope is if you ever have anything like this, if you ever have anything that’s essentially constant during training and then during deployment, it’s a different value than the constant was during training, then you’re going to maybe always flag that as an anomaly. 
And I think that causes problems in the other direction where you start flagging too many things as an anomaly where like… Here’s a constant that’s always true during training: maybe, it’s like always before the year 2023 and then you deploy it in 2024 and it sees the date and it’s like, “Oh no, I’ve never seen the date as 2024 before. That’s no good.” **Daniel Filan:** Yeah. And it’s even a deep thing, right? Like, I guess, famously the Sydney chatbot that Bing deployed, right? Users are suddenly insisting that the year is 2023 and during training, users never said anything like that and now the users are fighting with you and… It seeps into a bunch of stuff. **Mark Xu:** Yeah. So I like to divide the mechanistic anomaly detection problem, which I often abbreviate as MAD, into two halves. There’s the in-distribution anomaly detection problem and there’s the out-of-distribution anomaly detection problem. So the in-distribution problem is like: suppose that you just have an infinite sequence of samples from the same distribution and suppose your AI is doing something like, defect with one over a trillion chance or something, or murder all the humans with one over a billion chance. Can you stop your AI from murdering you in that situation? And the hope is yes, because you’ll have a million samples where the defection never happens and then when your AI does defect, you’re like, “Oh, my AI never defected like this during training, so that’s no good.” In this problem, I think it’s okay if you flag everything that is a new value that you haven’t already seen as an anomaly because you have an infinite series of samples. So if you ever do this, then you can push down your false positive rate by just sampling more data and re-training, etc., etc. **Daniel Filan:** Wait. So if I think concretely there, during training, in-distribution, if with probability one over a million, the AI outputs kill all humans… I guess the idea of it is that the AI falsely believes that it’s hooked up to sensors, like you’re checking… Because if it actually kills all humans during training, that’s still bad, right? **Mark Xu:** Yeah. So the idea is you do have this clean set where the AI never does anything bad. That’s, I think something I want to assume, and maybe we can argue about whether or not that’s reasonable, if you think it’s unreasonable. But suppose that you have this dataset where the AI never murders all humans. Suppose you have a million samples, then in some sense your AI can’t murder all the humans with a rate much more than one over a million or else you would’ve seen it happen. And so, the hope is that if you do mechanistic anomaly detection relative to a dataset size of a million, then your false positive rate is on the order of one over a million or else you would’ve seen that thing during training and you would flag it as not an anomaly because you’ve already seen it, and it would be in the white list of good mechanisms or whatever. And so, as long as you have this [IID assumption](https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables), then you can in fact flag everything that you’ve never seen before as anomalous while maintaining a false positive rate that’s still relatively low and bounded perhaps by one over your training set size and then maybe this is okay. ### Are responses to novel situations mechanistic anomalies? **Daniel Filan:** Okay. So I don’t know.
If I think concretely about that, suppose that I’ve got this distribution of ways the world might be and that distribution isn’t changing, I’m just sampling IID from it, and very occasionally, some weird stuff happens in that distribution that I need my AI to respond to, right? So one news story that happened recently is there was [this guy](https://en.wikipedia.org/wiki/Yevgeny_Prigozhin) who [sort of did a coup](https://en.wikipedia.org/wiki/Wagner_Group_rebellion) in Russia and then after 24 hours was like, “Nevermind. I’m not doing this anymore,” and you can imagine if your AI’s job is to be journalists or whatever, right? When you’re drawing that sample, something’s going to happen that didn’t happen in the first million samples it seems to me like either you flag that as an anomaly and you tell it to not do something different, which is bad, or you don’t flag that as an anomaly. And then, if the response of the AI was like, “Kill all humans in a way that you can’t tell,” or something, then that would also be bad. So concretely, I don’t quite get how this is working. **Mark Xu:** Yeah. So I think there’s two things going on. So one thing going on which is going to be a bit vague is: suppose you just had a normal ML system that you weren’t doing mechanistic anomaly detection on and you gave it this novel news story about this coup, maybe-not-coup in Russia, then somehow your AI does not freak out and do something completely bonkers. Somehow, it just does the normal journalist thing, let’s suppose. There are kinds of AIs that will freak out and become bonkers, and I think one explanation for why your AI doesn’t freak out when it’s encountered something novel is that it’s still doing the same kinds of reasoning that it learned to do during training and it’s just like… Or there’s some prior question which is like, “The world is combinatorially vast, why is not every input this crazy novel thing for your AI?” and there’s some handwavy answer where it’s like, “Oh, it’s abstracting in some way and it’s applying forms of reasoning that are valid across many possible instances of news, stories, etc., etc,” and the hope is that these sorts of things are going to correspond to what we mean by different mechanisms. And so, if your AI has learned to add two numbers during training and then it receives two numbers that are bigger than numbers it’s ever seen before but it still adds them using the same algorithm that it learned during training, then we’re not going to flag that as an anomaly because it’s still doing roughly the same thing. And so, hopefully, if your AI is “doing the same thing” in this vague handwavy kind of way, then we won’t flag that as an anomaly. And so, your AI journalist… seeing a new thing happen in the world but still reporting on it in the same kinds of ways that they typically report on things, which is maybe they look at what they’re seeing with their cameras and then they try to describe what they’re seeing with their cameras and they write that down and then they do some other journalist-y kind of stuff and then they send that off to the newspaper or whatever… If your AI is still doing that sort of thing, then hopefully, it will not flag this as an anomaly. 
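(A small companion sketch to the one above, purely illustrative: if the detector keys off which internal mechanism is active rather than off how novel the raw inputs are, then the same addition routine run on never-before-seen large numbers is not flagged, while a switch to a different code path is. All names here are made up.)

```python
def add_model(x: int, y: int) -> tuple[int, str]:
    """Toy model that returns its answer plus a label for the code path it used."""
    if x < 0 or y < 0:
        return 0, "refuse_negative"        # a path never exercised during training
    return x + y, "grade_school_addition"  # the path always used during training

# Training data: small non-negative numbers only.
train_inputs = [(a, b) for a in range(10) for b in range(10)]
whitelisted_paths = {add_model(a, b)[1] for a, b in train_inputs}

def is_anomalous(x: int, y: int) -> bool:
    _, path = add_model(x, y)
    return path not in whitelisted_paths

assert not is_anomalous(10**9, 10**9)  # huge novel inputs, same algorithm: not flagged
assert is_anomalous(-3, 7)             # different code path: flagged
```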
And the things that we do want to flag as anomalies is: suppose that there’s a coup/maybe coup in Russia, and then your AI sees this and switches to a different mode of journalism where it’s no longer writing down what it sees on its cameras but it’s, in fact, making something up, or maybe it has some allegiance to Russia or something and it’s pretending this coup never happened or something like that. And the hope is that if it switches to this new mode of reasoning that we haven’t seen during training, then we’ll flag that as an anomaly. So I think that’s the first thing that’s maybe going on. **Daniel Filan:** Yeah. I’m a bit skeptical, but do you want to say the second thing or can I field questions? **Mark Xu:** So I think this is a bit shakier than the second thing but I’m happy to… Yeah. Let’s talk about it for now. **Daniel Filan:** Yeah. The thing that that flags for me is that: okay, suppose that instead of a journalist, you’re, like, the Russian military command, right? You’re a robot that’s the Russian military command, the bit that’s not doing the coup. Now, you have this new problem that you just have never seen before, and maybe you have very general principles of reasoning or something, but you’re just thinking about different problems than you were thinking about before because you’re like, “Oh, I’ve never previously thought about what if some part of my army does this weird thing and now I’ve got to figure out, okay, how am I going to deal with that?” **Mark Xu:** Yeah. **Daniel Filan:** Right. I guess another example of this is if you’re a human and you’re like, “I’m going to learn a new board game. Okay. I’m learning chess so I’ve got to think about tempo and position and stuff like that.” So on a very high level, you have the same structure of reasoning, maybe, but it seems like the level at which you’re doing the same thing is so high as to include the case where on a very high level, you’re always just waiting until you can take over. And then, when you can take over, you start doing that instead of playing nice. **Mark Xu:** Yeah. So I think that roughly makes sense and is an extremely valid objection where I’m going to summarize it as: there’s these two things. One thing is, suppose a novel thing is happening in the world that you need to respond to flexibly and creatively. I think my standard example is, suppose a meteor is coming, barreling towards earth, and is going to kill everyone, and your AI needs to do some crazy s**t to deflect the meteor, you want your AI to do some crazy s**t to deflect the meteor, how is this any different than if your AI sees a meteor barreling towards Earth and it’s like, “Ah, now is my time. The humans will be distracted. I will initiate a robot rebellion and murder all the humans”? Is that roughly your objection? **Daniel Filan:** Yeah. Or specifically, I’m not sure what notion of reasoning mechanism would distinguish between those two. **Mark Xu:** Yeah. So I want to talk about the second thing I was going to say which hopefully addresses this issue. So I think one crucial fact is that we’re not going to be trying to do mechanistic anomaly detection relative to everything that your AI is doing, we’re only going to be doing it relative to some things. So maybe, I’ll describe the simplest example. 
So in the original setting that we lay out in [Eliciting Latent Knowledge](https://www.alignmentforum.org/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge), we have a diamond in the vault with some cameras, and robbers frequently try to steal your diamond, and you want to train your AI to protect the diamond from the robbers. And our concern is that instead of protecting the diamond from the robbers, your AI might find it easier to just hack all the cameras to make it look like the diamond is still there and just let the robber take the diamond, or if the robber hacks the cameras to make it look like the diamond’s still there, your AI will be unable to defend because you won’t have the ability to tell whether or not the diamond is still there. So our hope is that we’re not going to do mechanistic anomaly detection with respect to your AI coming up with plans to protect… Well, maybe you are, but here’s a simple setting in which there’s a specific kind of mechanistic anomaly detection that I think might be sufficient, that is quite a bit narrower than just doing it on everything, which is: we have some cameras and then some process where we look at the cameras and we decide whether or not the diamond is still there. Maybe we look for specific patterns of pixels and various diamond-looking shapes, etc., etc. So I suppose we have maybe a neural net that looks at the cameras and tells us, “Yes/No, is the diamond still there?” The thing I want to do is do mechanistic anomaly detection relative to that neural net plus the system of cameras or the predictions of the cameras that our AI is making to decide whether or not the diamond is still there, if that makes sense. **Daniel Filan:** I guess it makes sense but… I don’t know. Sometimes I hear these analogies and I end up being not quite sure how they’re supposed to relate to actual AIs we might build. Because in particular, if I analogize that, it seems like the thing you want to detect is: once the AI has killed all humans and it’s getting sensor readings, you want to detect that something funny is going on there, and then I’m like, “Well, that seems like…” But that’s probably not what you mean. **Mark Xu:** Yeah. I’m not trying to do anomaly detection relative to the actual sensor readings and the actual process that humans will use to decide whether or not the diamond is there after your AI has acted. I’m trying to do it relative to… So your AI is planning their actions and when planning their actions, they’re using some sense of… They’re using some prediction about the future and whether or not the diamond is still there in the future. I want to do anomaly detection relative to, or on, our AI’s prediction about whether or not the diamond is still there; or more abstractly, your AI is selecting amongst actions using some criteria, and sometimes this criteria says this is a good action because it actually protects the diamond, for example. And sometimes, the criteria will say that it’s a good action because of a new reason which is that the AI has hacked all the sensors. So I want to do anomaly detection relative to, why did our AI think that this action was a good action? **Daniel Filan:** Okay. So in that setup, so you have this AI and it’s got this goal/planning distinction where parts of its head are figuring out like, “What are things I could do?” and part of its head is like, “Well, if I did that thing, how good it would be?” **Mark Xu:** Sure.
**Daniel Filan:** And you want to detect for anomalies in the evaluation part, and is part of this… Should I be imagining we’re just training some big neural net and then we have to divide it up and figure out which part is the goal evaluation part? **Mark Xu:** I think not quite. What I like to do is I like to imagine various ways that neural nets could be and then talk about how we think mechanistic anomaly detection should work to make those neural nets not murder humans. And then… I think often during this process, I want to make very unrealistic assumptions about our neural net. So maybe we can suppose that you know how to do this division, or the reasoning is in some sense transparent to you so we can talk coherently about what we want to happen in mechanistic anomaly detection. And then, I think I want to separate the problem from, “Okay. In practice, we’re dealing with these messy neural nets, how are we going to actually implement mechanistic anomaly detection in a way that makes any sense at all?” **Daniel Filan:** Yeah. I mean, I’m sympathetic to that broad move but in this case, if you care about detecting anomalies in the goal evaluation part but not in the plan generation part, then it just seems really important to know which is which and- **Mark Xu:** Yeah. I guess… So I think maybe I misspoke earlier. I think I’m not quite like… I don’t want to just do mechanistic anomaly detection relative to the plan part. So I think if you imagine, for example, you have some policy pi that produces actions and you have some evaluation which like, “Is there a diamond after you take this action?” or you have some predictive model and you compose that with the action that the policy generated, and then you have some separate image net that checks whether or not there’s a diamond. So you have this composition of three systems. You have the image net evaluating the predictions of the action generated by your policy. I think I want to do mechanistic anomaly detection on this composition and that’s… Yeah. I just want to do that and I don’t want to evaluate, “Is the action itself anomalous?” or like, “Is my AI selecting actions based on an anomalous process?” I just want to evaluate: does the action have the consequence of it looking like there is a diamond in the room for the normal reason that we know and love that we found during training, which is that the diamond is actually there in the room, or a new mechanism, which could be really anything, but I think the typical example is your AI has carefully hacked all the cameras to make it look like there’s a diamond in the room? **Daniel Filan:** So hang on. So you’re doing mechanistic anomaly detection to the composition of all three networks? **Mark Xu:** Yes. **Daniel Filan:** But then, I have this question of like: okay, suppose in order to keep the diamond in the room, the AI has to come up with a new type of plan, there’s a new threat to… Like the devil appears and the devil’s like, “Hey, AI, you’ve got to play a game of chess against me and that’s the only way you can keep this diamond safe.” Then the AI is like, “Oh, okay. Now, I’ve got to think about chess which is something I’ve never thought about before,” and if I knew all the three parts, that would be an anomaly in the planning part, right? **Mark Xu:** Yeah. Well- **Daniel Filan:** Like one might think it would be a mechanistic anomaly in the planning part. **Mark Xu:** Yeah. So I think there’s going to be two hopes here. 
One hope is that it’s not a mechanistic anomaly in the planning part. I think that’s kind of iffy given I don’t really know what I mean by mechanism. I think the second part is you can instead imagine that instead of doing mechanistic anomaly detection on all three things, we only do it on two things. We just take the action our policy generates and we just put that in our predictor but we don’t include the policy itself. And then we just ask, “Does this action have the consequence of the diamond being on camera for the normal reasons diamonds sometimes appear on camera or not the normal reason that maybe the action has hacked all the cameras, maybe the action has paid someone else to smuggle in a fake diamond, etc., etc.?” and I think another third thing I want us to get into, which is how potentially this still might be anomalous, and we can try to point more precisely at what we mean by a diamond by doing something else more complicated, but maybe I will see if you have objections first. **Daniel Filan:** Yeah, so just regarding your response 2, I see you saying, “Okay, well, we’re not actually going to do it to the composition of all three networks. We’re going to do it to the bit of the network that takes some plan and then evaluates whether the diamond ends up there, then checks if it’s right or something.” **Mark Xu:** Yeah. **Daniel Filan:** And then I’m like: okay, but then either we have to find that structure in an end-to-end net that we trained ourselves, which seems like in order to do that, you might have thought that you had to solve some problem very akin to figuring out mechanisms in networks, or you’ve got to build the network that way just in a very explicit way, and I guess, I don’t know which one you’re imagining. **Mark Xu:** So I think I’m imagining neither. It’s possible that we should just stop talking about this particular hope because I also don’t think it’s particularly plausible, but if you imagine that your policy is using some prediction of the future to select action A, so as long as your evaluation of the action is as good as the evaluation your policy is using in some sense, then I think they don’t have to be literally the same, and of course there’s this entire problem of how do you know that your policy has the same predictive abilities as the evaluator that you have external to your policy? And then the answer to that is, I don’t know, something, something, [ELK](https://www.lesswrong.com/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge), something, or somehow… I think probably we should just move on to talking about a different sort of thing, but I think I want to summarize your objection first, which is: suppose normally the way that you protect your diamond during training is by building walls to stop robbers. **Daniel Filan:** Yeah. **Mark Xu:** And then the devil appears and has challenged you to a chess game. **Daniel Filan:** Yeah. **Mark Xu:** You’re just like, how is that not a different mechanism? 
The mechanism of action for typical normal actions that protect the diamond is you build a wall to protect the diamond from robbers, and now the action you have to do is an entire chess game, and I’m like, “Yep, that’s probably a different mechanism”, and I think this is the heart of what I like to call the out-of-distribution mechanistic anomaly detection problem, which is you have some sensors and they show a pattern of pixels that a human looking at the cameras would interpret as a diamond, and there’s some interesting question here, which is like, “Why do the cameras ever show patterns of pixels that look like diamonds?” Here’s a naive model for how pixels work in cameras. They’re just kind of random. And your diamond is, let’s say, a million pixels arrayed precisely in the ways that make it look like a diamond. There’s no way that this would ever pop up by random chance. **Daniel Filan:** Yeah. **Mark Xu:** And so this I want to say is a fact that demands explanation, and the explanation is pretty simple. It’s that there exists a physical world out there with actual objects. One of these objects is a diamond and cameras reflect the physical world in the following way, and therefore sometimes the diamond appears on camera. **Daniel Filan:** Okay. **Mark Xu:** And so our goal is to be able to talk about this sort of physical, actual diamond in the world, outside of how the action achieved the diamond still being there, if that makes sense. So we want to be able to say something… so oftentimes I use the phrase ‘normal reason’ for mechanisms that were active during training, and so we want to say something like, “We want the diamond pixels to appear on camera for the normal reason, but we want the diamond to be there for any reason”, and so we don’t want to do mechanistic anomaly detection relative to the entire causal history for why the diamond is in the location it’s in, but we do want to do mechanistic anomaly detection on the part where the diamond appears on camera. Another way of putting this is, we want to be able to ask the question of: I as a human, looking at the camera, think there’s a diamond there. Is there actually a diamond there? Am I thinking that there’s a diamond there for the normal reason that I often think that there are diamonds there, which is that there are actually diamonds in the world? But I as a human don’t really care why the diamond is there or whether or not the diamond remained there because you won a complicated game of chess against the devil or whatever. And so we want to be able to say why… we want to be able to ask our AI to take actions that put the diamond there for any reason, but such that the diamond appears on camera for the normal reason, and so here’s a sketch of how we hope to do that. **Daniel Filan:** And sorry, wait, just backing up. I guess the thing this is meant to be a metaphor for is in your AI, the… Actually, wait, what is this meant to be a metaphor for? **Mark Xu:** So the diamond is always a metaphor for a human that’s happy, healthy, and alive, so we want our AI to take actions such that the human looks like they’re happy, healthy, and alive, for the normal reason that humans look like they’re happy, healthy, and alive, namely that they are happy, healthy, and alive, and the cameras are working as intended in reflecting the true state of the humans, et cetera, but we want our AI to be able to take actions such that the humans look like they’re happy, healthy, and alive for the normal reason, while actually being happy, healthy, and alive for any reason.
So we want our AI to be able to erect a novel shield against meteors or play complicated games of chess or engage in sophisticated military campaign to protect the farms from the asteroid or other enemies, et cetera, et cetera. **Daniel Filan:** Or improvidently trained AIs that are… **Mark Xu:** Or other AIs. **Daniel Filan:** And when you say you want the sensors to be reading the right thing for the right reason, I imagine you actually mean something internal to AI, right? **Mark Xu:** Yeah. So I think all of the mechanisms are hopefully relative to how the AI is thinking about the situation or from the AI’s perspective, we want our AI to be taking action such that the consequences of the action are the human appearing on camera for the normal reason that humans appear on camera during training, which is that they’re actually there and not that someone has hacked all the cameras to make it look like a human is appearing on camera. **Daniel Filan:** And in particular, you want the AI to be making decisions for that reason. **Mark Xu:** Yeah. **Daniel Filan:** It’s not that they’re just playing nice until they can get away with it and that’s the reason for their actions, it’s that their actions are because of predicting that the cameras show healthy happy humans? **Mark Xu:** Yeah, although I think that’s maybe a bit more complicated, but yeah. ### Formalizing “for the normal reason, for any reason” **Daniel Filan:** Okay. I mean it’s possible we can just assume that… or you only want to work on one of those problems. **Mark Xu:** Yeah, or I think I ultimately want to work on both of those problems. I think the main risk is we can’t talk coherently about this ‘for the normal reason/for any reason’ thing at all because it might just not make any sense, so I’m going to sketch out a way in which I think this might make sense. **Daniel Filan:** Okay. **Mark Xu:** So let’s say for simplicity, we have two sensors sensing our diamond or human: one of them is an image sensor and one of them is a weight sensor. So it’s like, is there a diamond-looking thing, and is there a diamond-weighing thing, a thing that weighs a similar amount to a diamond? So there’s this interesting fact on the training distribution, which is that these sensors are often correlated. Oftentimes they’ll both say yes; or suppose that there’s a 50% chance there’s a diamond during training, and so one of the sensors is on with 50% chance and the other one is on with 50% chance, and so naively you would expect their conjunction to be on with 25% chance, but in fact their conjunction is on with 50% chance during training, because both of them are driven by this common cause, which is that the diamond is actually there. **Daniel Filan:** Yep. **Mark Xu:** So we want to say something like… Instead of saying, “Do I believe that there’s a diamond for the normal reason?”, we want to point directly at this latent common cause between correlations, between two sensors, by saying something like, “Are the sensors correlated for the normal reason that they’re correlated?” Obviously this doesn’t make that much sense because what does it mean to be correlated on a specific data point? So instead, let’s imagine that we’re taking the AND of the two sensors and we’re interested in the question of, “Is the AND of both sensors true for the normal reason that they’re true, or a different reason?” And so the way that we’re going to do mechanistic anomaly detection relative to this is going to be a bit weird. So here’s what we want to do. 
So we have a particular action, let’s say, such that the actual consequences of this action are both sensors are on, and there’s some mechanism or there’s some set of mechanisms that this action is “using” to make it so both of these sensors are on. We want to ask the question of, “Is it possible to drop out mechanisms from this set of mechanisms that this action is using such that both the camera shows there’s a diamond and the scale weighs that there’s a diamond but camera and scale are not on?” I don’t know if that made any sense. **Daniel Filan:** So hang on, you want to… Sorry, it sounded like you said that you want to drop out mechanisms such that the camera says there’s a diamond and the weight sensor says there’s a diamond, but it’s not the case that both the camera says there’s a diamond and the weight sensor says there’s a diamond, or did I just mishear that? **Mark Xu:** That’s a little bit right, and of course that sounds absurd, so I’m going to talk about why that might not be that crazy. **Daniel Filan:** Okay. Just also, backing up: when you said you want to drop out mechanisms, is the idea that you want to check the mechanisms for why both of those were going to be on in the past - in this case, do we only have mechanisms that give you one and the other but not both? **Mark Xu:** Yeah. **Daniel Filan:** Okay. **Mark Xu:** Yeah, so if you imagine you’re doing training and you’re confused by this question, which is like, “Why are these both commonly on?” And then your “explanation” for why they’re both on, or the mechanism by which they’re both on with about as much frequency as one of them is on, is sometimes there’s a latent structure and it causes both of them to be on and then the AND is on, and then also separately there’s noise, and that causes either individual sensor to be on, and we’re concerned about cases where your AI takes actions that manipulate the noise of both the sensors to cause you to think there’s a diamond by making them both on without the latent structure actually being present. **Daniel Filan:** Okay. **Mark Xu:** And so the hope is that the mechanisms will be such that we’re not forced to make these sorts of logical deductions that potentially are very obvious. So there’s the noise making sensor 1 on and the noise making sensor 2 on, but we don’t have to notice that if sensor 1 is on and sensor 2 is on, then ‘sensor 1 AND sensor 2’ is on. However, so in the case where there’s noise that makes sensor 1 on, and noise that makes sensor 2 on, it won’t necessarily be the case that both of them, or the AND of them, is on. There’ll be a way to drop out mechanisms such that this is not the case, and then of course the question is why… if the diamond is actually there, why can’t we just drop out the mechanism that makes both of the sensors on also? And this is where the training set comes in, where there’s a set of mechanisms that you needed during training to explain most of the variance, let’s say 99.9% of the variance in the model’s performance or prediction or behavior during training, and that set of mechanisms has to include the mechanism that if there’s a diamond, then sensor 1 and sensor 2 are on, and if there’s a diamond, then both of the sensors are on and that implies that the AND of the two sensors is on. **Daniel Filan:** Okay.
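(Here is a hand-computed sketch of the two-sensor story, with made-up numbers, just to illustrate the variance accounting: almost all of the training-time co-occurrence of the sensors goes through the diamond, so the “both noises fired at once” mechanism can be dropped from any explanation that only has to cover 99.9% of training behavior.)

```python
# All numbers are invented for illustration.
P_DIAMOND = 0.5   # chance a diamond is present on a training episode
P_NOISE = 1e-3    # chance each sensor spuriously reads "diamond" anyway

p_s1 = P_DIAMOND + (1 - P_DIAMOND) * P_NOISE       # sensor 1 reads "diamond"
p_s2 = P_DIAMOND + (1 - P_DIAMOND) * P_NOISE       # sensor 2 reads "diamond"
p_and = P_DIAMOND + (1 - P_DIAMOND) * P_NOISE**2   # both read "diamond"

print(p_s1 * p_s2)  # ~0.25: what you'd guess if the sensors were independent
print(p_and)        # ~0.50: the actual rate, because of the common cause

# How much of the training-time "AND is on" behavior each mechanism accounts for:
share_diamond = P_DIAMOND / p_and                       # ~0.999999
share_noise_and = (1 - P_DIAMOND) * P_NOISE**2 / p_and  # ~1e-6
print(share_diamond, share_noise_and)

# An explanation covering 99.9% of training behavior must keep the diamond mechanism
# but may drop the noise-AND term; an action that turns both sensors on via their
# noise then leaves the AND unexplained, and gets flagged as anomalous.
```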
**Mark Xu:** However, it doesn’t have to include this AND interaction between the two noise terms because the AND interaction between the two noise terms happened extremely rarely and thus you don’t have to include it to explain 99.9% of the variance of the model during training, and so for any given data point, you’re “free” to drop that interaction between the two noise terms, and so if the sensors are only both on because of this AND interaction between the noise terms, then you’re in some sense free to drop it, and so you can notice that the AND of the two sensors wasn’t true for the normal reason, it was true for this crazy other reason. **Daniel Filan:** Okay, so sorry, let me say that back to see if I understand. It sounds like you basically want to understand: hey, there used to be a good reason why sensor 1 said there was a diamond, namely there was a diamond, and a good reason why sensor 2 said there was a diamond, namely there was a diamond, and there was also some, I don’t know, weird stuff happening with sensor 1 and weird stuff happening with sensor 2, but the fact that both of them showed that there was a diamond, we had one explanation for this, and now in this new world the explanation for why both sensor 1 and sensor 2 say there’s a diamond is not just the old explanation for that, namely, there’s this thing that causes both. It’s like, oh, sensor 1 acted up sometimes generally in training and now it’s acting up in a way that causes it to appear that there’s a diamond. Now sensor 2 is acting up in a way that causes it to appear that there’s a diamond, but… maybe this isn’t what you’re saying, but the structure is different. You’ve got one thing going on for sensor 1 that isn’t really affecting sensor 2 much, and one thing going on for sensor 2 that isn’t really affecting sensor 1 much or it wasn’t affecting sensor 1 during training? **Mark Xu:** Yeah, I guess the thing I want to say is there’s this mechanism for… so suppose you had an action that manipulated the noise for both sensors to make them both on. There’s this mechanism that causes the AND of the two sensors to be on, which is noise for sensor 1 and noise for sensor 2 at the same time, and that happened during training, but rare enough such that it wasn’t important enough to “force” you to include it. **Daniel Filan:** Yep. **Mark Xu:** And so you can detect when both sensors are on via the mechanism of both noises being on, and that is going to be hopefully a different mechanism than the normal mechanism of there was one diamond, it caused both sensors which caused the AND. **Daniel Filan:** Right, right. So you can think of there’s noise in sensor 1 and there’s noise in sensor 2 as being one thing - there’s noise in sensor 1 and there’s noise in sensor 2 - but you just didn’t need that explanation to say why it appeared that sensor 1 and sensor 2 both showed there was a diamond in the past. You could rely on this different mechanism, which is: there’s a diamond, and the ways in which these count as different mechanisms, I guess based on me reading things, I guess you’re going to go into it, it’s about means and variances and covariances. **Mark Xu:** Yeah, something like that. Maybe another way to say it is: there was an assumption you could have made during training, which is that noises are never both on simultaneously, and this assumption does not hinder your ability to predict your model at all, basically.
Maybe noise is quite rare, and so both noises are only on with one in 1,000 chance, one in 1 million chance, et cetera or something like that, and so you’re only missing out on one part in a thousand explanation of your model’s behavior if you make this assumption that noises are never on, but you’re missing out on 100% of the “behavior” on the specific data point where both noises are on. And so anytime you have this mismatch, we can drop something out that only hinders your average prediction during training by one part in a thousand, but hinders your ability to predict this specific thing by a lot, then you should be suspicious that the things that are happening between train time and test time are quite different. And in particular, this correlation between the sensors is happening for “a different reason”. So this is covering the “are they correlated for the normal reason?” part. **Daniel Filan:** Yeah. **Mark Xu:** And then the way that we’re going to get the “for any reason” part is we’re going to require that the set of mechanisms that you drop out explain both sensors individually and then ask whether or not there exists a set of mechanisms that explain both sensors being on individually but does not explain the conjunction being on. **Daniel Filan:** Okay. Sorry, can you restate that or give an example? **Mark Xu:** So suppose during training there were never any robbers and so the diamond just persisted through time in the normal ways the objects persist through time and showed up on both cameras, and then suppose during deployment there is a robber now and we want our AI to take the action of ‘build a wall to stop the robber’, but our AI could also take this action of ‘manipulate both sensors’, and so if you apply naive anomaly detection, then both of these will be flagged as anomalies. Your AI has never built a wall before, it’s also never manipulated sensors before, but then we’re stuck because our AI can’t take any actions basically. **Daniel Filan:** Yep. **Mark Xu:** So the hope is that we ask for a set of mechanisms such that both sensors appear to be on according to this set of mechanisms, and then we ask further the question, are the AND of the sensors on or are we required to thus deduce that the AND of the sensors are on? **Daniel Filan:** Yeah. **Mark Xu:** So if your AI builds a wall to stop the robber, then we’re required to include the mechanism of wall-building and robber-stopping to explain the scale showing a diamond and the camera showing a diamond, and then since the scale is showing a diamond because there’s a diamond and the camera is showing a diamond because there’s a diamond, and we have to include enough mechanisms to explain 99.9% of train-time variance or performance or whatever, then we’re required to make this deduction that both of those being on because of the diamond implies that the AND is on. Whereas if your AI took an action to manipulate the noise of both cameras, we can make this assumption that the noise never happens simultaneously and we’re not forced to conclude that the AND of both sensors is on if both sensors are on because your AI manipulated the noise. **Daniel Filan:** So the thing I think I’m not getting is this idea of manipulating the noise, which seems like a model-dependent thing. So if I say very concretely, okay, suppose what happens is robbers are coming to take the diamond and the AI makes a plan that’s like, okay, I’m going to fiddle with all the sensors to make it appear like a diamond’s there. **Mark Xu:** Yeah. 
**Daniel Filan:** So it does that, right? **Mark Xu:** Yeah. **Daniel Filan:** And then a mechanism that explains this is: the AI wanted to say that the diamonds were there. And then that explains both that sensor 1 says the diamond’s there and that sensor 2 says the diamond’s there, and presumably, you’ve got to deduce both of those from this explanation, and is the idea that that’s not going to count because that’s just a different mechanism than existed in the past? **Mark Xu:** The thing I want to say, although I’m not quite sure how accurate it’s going to be, is the mechanisms have to be more detailed than that. So just because your AI wanted to make there be a diamond does not imply that the particular action it took will in fact make it be that there’s a diamond, and so you still have to talk about the particular mechanism of action for the particular action your AI decided, which was your AI had to manipulate sensor 1 and it had to manipulate sensor 2 and it had to manipulate sensor 3… I guess we were only talking about two sensors. So your action only has to manipulate two sensors and then the particular mechanism of action that this particular action that your AI selected still violates this principle, which is manipulations or noise never affects both sensors simultaneously, which you’re free to assume during training without hurting your ability to predict your model but now hurts your ability to predict this particular action by too much. **Daniel Filan:** So sorry, why… So is the thing that’s happening is that if the mechanism is the… Well, hang on, wait back up. Suppose your AI is just really good at stuff, right? When it tries to make things happen, they do happen. Then, you know, you can with high probability deduce “both sensor 1 and sensor 2, they’re going to say there’s a diamond there” from “the AI is trying to make it appear that there’s a diamond there”. And maybe you rule that one out because you’re like, “Well, that mechanism didn’t really apply during training or on this nice set of examples”, and then the only other explanation you have left is randomly sensor 1 said there was a diamond and also randomly sensor 2 said there was a diamond because they are trying to make something happen. In some sense that’s random relative to us not knowing that was going to happen, and then you’re like, “Okay, but you didn’t need to assume independent random noise during training in order to get high probability that a diamond was going to be there”. **Mark Xu:** Yeah, I think the second part of that was correct. I think the first part is confusing. So I agree that there’s this compelling sense in which your AI wanted X, therefore X, should be a valid argument for why X happened or something, and I think… **Daniel Filan:** Or at least a valid mechanism? **Mark Xu:** I think that sounds like it might be right, and this is an area where it’s a bit confusing. So I think the way that I would want to talk about this more precisely is: you have some clean set where you know your AI actually made it so there’s a diamond, and there’s some question of: suppose that the thing your AI is doing is always trying to make it look like there’s a diamond. How do you know that your clean set is clean? And that itself I think is complicated and I don’t quite know what’s going on there, but one way that you could hope to know it is you have a sequence of policies. 
One policy is the randomly-initialized policy, and you’re very confident that this policy is not going to take any actions that have any coherence, and so in fact, the diamond will be there because there’s a diamond, because your randomly-initialized policy cannot hack all the sensors. And then you have your policy that maybe is sophisticated and then you ask, “Are these two policies taking… are the diamonds appearing because of actions that these two policies are taking for the same reason as each other?” or something. And you need to have some ability to know that your policy is not taking actions that hack all the sensors at some point. So I think things get simpler if we’re not talking about policies that take actions and just we’re talking about predictors that predict things, and then we can ask, is the predictor predicting things because of the normal reason that it sometimes predicts correlated sensors, which is the diamond is still there? Or is it predicting things because of some different reason, which is someone took an action that hacked all the sensors? **Daniel Filan:** Yeah, and then I guess in this case we’re not just assuming that it’s a thing that was trained to predict something, we’re assuming that it actually does a high-quality job at predicting these things. **Mark Xu:** Yeah. **Daniel Filan:** Because otherwise then you have [mesa-optimization](https://arxiv.org/abs/1906.01820) worries of, “Oh, what if this thing that said it was a predictor is actually trying to fool you?” **Mark Xu:** So I think the simplest setting is assuming this sort of benign predictor thing, which I think is unrealistic for a variety of cases, but is this maybe a simple example of how you want the thing to be going on? And so maybe we can assume that there’s a robber coming to steal your diamond, never before seen during training, and your predictor takes as input the position of five levers, and all the levers start ‘off’, and if you flick the first three levers on, then a wall gets built that stops the robber. However, if you flick the fourth lever on, then the first sensor turns on, and if you flick the fifth lever on, then the second sensor turns on, and it costs $1 to flip each lever, and so it’s in fact cheaper, or it’s just simpler, maybe you’re doing random search over actions. And so the action you will expect to find is flipping both levers 4 and 5 on, and then we want to be able to find the action that flips the first three levers on, but not levers 4 and 5… or I guess it doesn’t really matter if you flip levers 4 and 5 also. And then the way we hope to do this is we want to ask for an action that makes sensor 1 on for any reason and sensor 2 on for any reason such that we are forced to deduce that sensor 1 and sensor 2 are on, and then flipping levers 4 and 5 will not cause this to happen because we can drop out the correlated term between 4 and 5 freely, but we are forced to include the correlated term between sensors being on because of diamonds, because that’s what was going on during training. **Daniel Filan:** Right. Okay. So I think this makes some amount of sense if I want to understand just this problem of how do you ensure that the camera says the diamond is there for the normal reason vs the diamond’s there for any reason. So in this case, one thing I wonder about is… So this is a problem that only occurs in the thing that this is an analogy for. So this is meant to be an analogy for ‘humans are happy and flourishing’ and stuff.
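(A toy encoding of the five-lever example Mark describes above, with all the mechanics invented for illustration. The point is that the $2 tampering action turns both sensors on, but only the $3 wall-building action makes the AND of the sensors deducible from the mechanisms that were active during training.)

```python
from itertools import product

# Levers 1-3 build a wall that stops the robber; lever 4 forces sensor 1 on;
# lever 5 forces sensor 2 on. Each flipped lever costs $1.
def outcome(levers):
    wall = levers[0] and levers[1] and levers[2]
    diamond_stays = bool(wall)             # without the wall, the robber takes the diamond
    s1 = diamond_stays or bool(levers[3])  # sensor 1: diamond present, or tampered with
    s2 = diamond_stays or bool(levers[4])  # sensor 2: diamond present, or tampered with
    return diamond_stays, s1, s2

def and_is_forced(levers):
    """Is 's1 AND s2' deducible using only training-time mechanisms?
    During training, the only thing that ever made both sensors fire together was the
    diamond actually being there, so the AND is only credited when the diamond stays."""
    diamond_stays, s1, s2 = outcome(levers)
    return s1 and s2 and diamond_stays

cost = sum  # each flipped lever costs $1

# Ask for the cheapest action where each sensor is on (for any reason)
# and the AND of the sensors holds for the training-time reason.
candidates = [a for a in product([0, 1], repeat=5)
              if outcome(a)[1] and outcome(a)[2] and and_is_forced(a)]
print(min(candidates, key=cost))  # (1, 1, 1, 0, 0): build the wall
print(cost((0, 0, 0, 1, 1)))      # 2: tampering is cheaper, but its AND isn't credited
```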
And suppose I am in a world where I’ve got this smart AI, it’s trying to make humans happy and flourishing, and it’s also got this, “Are humans happy and flourishing detector?”, and let’s say that detector is… There’s some camera feeds and stuff and also there are two humans, both of which are looking at the camera feeds and they’re like, “Yeah, as far as I can tell, the humans are happy and flourishing.” Right? Now, one thing the AI can do is that it can make those two humans better at their jobs. For instance, it can stop those humans from getting sick or maybe it can give those humans stimulants or make them better at focusing or better at noticing small issues, and then I wonder: that kind of action, it seems like that could change maybe some of the correlations between different sensors, or maybe it makes the humans a bit more correlated because they’re losing less info - or maybe one of the interventions is that it helps the humans talk to each other to just generally make the sensors more accurate. And so I wonder, the thing I’m wondering about is: is this going to flag benign ways of improving the sensors? Which in some sense is what we should expect if AIs are doing a really good job at making all of humanity better. **Mark Xu:** So I think this is a bit complicated and I’m not sure what the answer is, so here’s the baseline of what can happen. So you can imagine a world where all your AI does is protect the humans from malign influences and meteors and other AIs, et cetera, et cetera, and then all upgrades to sensor arrays or improving humans’ ability to interpret sensors and reason about the world, et cetera, et cetera, is only ever guided by human hands, and humans must do the hard work of thinking about how the sensors work and designing better sensors, et cetera, et cetera. And I think this is the baseline and then you can try to think about different ways in which your AIs can help with this process. So I think your humans can ask for things that they can point at using this latent structure. They can be like… “During training I sometimes had a strawberry in front of me, can you make a strawberry in front of me for the normal reason that strawberries appear in front of me?” Suppose your human wants to eat a strawberry and eating a strawberry will make them better at reasoning because food is good or whatever. Or they can be like: “sometimes during training, there were various physical resources, I had some iron: can you make there be some iron in front of me for the normal reason that there’s iron”. I think it’s kind of complicated if the humans want to make requests like, “Can you design me a better camera?” I think if you were very clever about it, you might be able to ask for a better camera, but I think it might be unclear how you would ever ask for a better camera in a way that your AI can fulfill by making sense or something, and I think it’s unclear to me that this is problematic. **Daniel Filan:** Yeah, I guess the worry might be that… I guess it depends where explanations stop, but the worry might be that almost any good thing the AI might do, might end up improving sensor readings. So the AI makes, I don’t know, electronics more reliable or the AI improves human cognition or something. It seems like by default that might improve sensors. And yeah, maybe you cut it off as saying, well, in any case, the reason the sensors are working is humans are making them work and humans decided to get these upgrades or something. 
**Mark Xu:** I think maybe the simplest answer is just as long as the changes are slow and you can still verify that the humans are alive post making the changes, then you can just do online learning and retrain your AI and be like “and now we’re defining what it means to be good relative to this new set of sensors”, and you can in fact verify that the humans are still alive. And the only worry is if you upgraded all the cameras simultaneously and you lost your ability to check whether or not the humans are still alive because you no longer have knowledge about how the cameras work or other stuff like that. **Daniel Filan:** So, should I be imagining this gets flagged as an anomaly, but we go ahead with doing it anyway? **Mark Xu:** So, if you imagine your AI starts with five cameras and then you add a sixth camera, I think by default your “definition” in terms of mechanisms of what it means for the diamond to be there or the human to be alive is just only going to be in terms of the first five cameras, and your AI is just going to ignore the sixth camera entirely when selecting actions. And then in order for it to incorporate the sixth camera, you’re going to just need to provide a new clean set where the humans are alive according to all six cameras, and then do mechanistic anomaly detection relative to that new set. And then, I don’t know, you can imagine some gradual… You just add a sixth camera and then you use the first five cameras to check if the humans are alive and then you label using that, I don’t know- **Daniel Filan:** So, it’s gradual in the subset sense, where there’s always some subset that remains unaffected by changes? **Mark Xu:** Sure, so I think that should work, although I haven’t thought about it. I don’t know, I mostly consider this in some sense out of current scope or something. ### How useful is mechanistic anomaly detection? **Daniel Filan:** So, I guess we’ve talked about mechanistic anomaly detection for a while. Maybe more broadly, how are you thinking about… on the two scales of ‘it doesn’t really solve this’, to ‘it totally solves this’, and the scale of various problems things might solve, how are you thinking of mechanistic anomaly detection: it definitely solves this one problem and maybe solves this other, or it maybe solves five problems? **Mark Xu:** So, I think there’s various versions of the hope. I think the strongest version of the hope is that it both handles [deceptive alignment](https://www.alignmentforum.org/tag/deceptive-alignment) and it handles all of [eliciting latent knowledge](https://www.lesswrong.com/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge), and I don’t know if that makes sense. I’m happy to talk about why I think mechanistic anomaly detection will tackle those two problems. And then I think there’s more restricted hopes. I guess that’s what I’m gunning for in some sense, and then I think it’s possible that we only get what I was calling the in-distribution mechanistic anomaly detection, and we don’t get the out-of-distribution mechanistic anomaly detection, in which case we’ll only get deceptive alignment and possibly only some subset of deceptive alignment, and we’ll only get this IID form of eliciting latent knowledge, which is going to be not that useful I think, and then it’s possible we get nothing.
And then I think there’s all sorts of empirical middle grounds, or there are, I think, worlds where we don’t nail the problem, but we can do some forms of ad hoc mechanistic anomaly detection by doing normal human interpretability and looking at models and being like, “Huh, that looked weird, the neurons are distributed weirdly now: perhaps that’s not good,” I think I largely consider that not what I’m aiming for, but maybe it will happen. **Daniel Filan:** Do you have a sense of which of these outcomes are more likely? Are there really big gaps in the probabilities between these things? **Mark Xu:** I think the outcome that is most likely is that we just get nothing and all of this doesn’t work for some reason or another or we’re just wrong about how everything works, and then I think if we’re right about things, we’ll probably just get both of them, and I think it would be kind of surprising to me if we only got the in-distribution anomaly detection, but we didn’t get the out-of-distribution anomaly detection, because it seems like there’s a lot of avenues there, but I think in general, it’s kind of confusing what we’re even doing and if we’re going to get anything at all or something. I think that’s the first question, do we get anything at all? And then if we get something, then we’ll probably get a lot of things is my guess. **Daniel Filan:** All right, maybe this is going back a bit, but in-distribution versus out-of-distribution anomaly detection, given that you can see a new rare thing in-distribution, I’m not even sure what the distinguishing features of those are. **Mark Xu:** I think the main question is whether or not it’s okay to get tripped up by seeing new rare things in-distribution, whether or not it’s okay to just flag all of those as anomalies. And so, perhaps it’s not accurate to call it in-distribution anomaly detection, but I think there’s a naive thing that I want to do, which is take a training set, white list all the mechanisms that happened during training, flag everything else as anomalous, and then if something is anomalous, we can inspect it carefully and check whether or not it’s actually anomalous. If it’s actually not anomalous, we add it to the white list and then we just keep going. And there’s some question of, is that okay or not? If that’s an okay process to be engaged in, then I think… I want to call that the in-distribution world, things aren’t changing too fast. Your AI doesn’t have to take novel actions that humans don’t really understand. AI can just do normal stuff and if it does anything abnormal, the humans look at it and it’s like, “Is this bad? I don’t know,” and they can take their time and decide, and the hope is that this will catch all the crazy deceptive alignment cases where the AI goes from being totally benign in actions to stabbing all the humans where you’re like, “That was not good,” and you have time to deal with that sort of stuff, and then you can retrain your AI to be like, “You know the part where you defected and stabbed all the humans? Negative reward.” **Daniel Filan:** And then going a bit more broadly, I’m wondering how you think of this general mechanistic anomaly detection program as relating to things like interpretability or explanation or stuff? **Mark Xu:** So I think the field of interpretability is empirically motivated by many things. 
I think if the entire field of interpretability, all they did was figure out a way to do mechanistic anomaly detection on powerful models, I would be like, “That was a good job.” I think interpretability techniques are often useful for things like mechanistic anomaly detection, but not oriented around that or something like that. I think I would make the tentative claim that people doing interpretability should be more motivated by various downstream applications, of which I think mechanistic anomaly detection is one, but often confounded by the fact that there currently exist pretty strong baselines for mechanistic anomaly detection that aren’t doing the kinds of things that we want to be doing, and so it’s hard to check whether or not you’re making progress on things like empirical mechanistic anomaly detection. Because you can just solve it by doing a bunch of stuff that intuitively seems unscalable or wouldn’t work in more complicated settings or just doing regularization or retraining your model or fine-tuning or whatever. I’m not very familiar with what people mean when they say ‘search for explanations’. I think naively, I mean the same thing by ‘mechanism’ as what they mean by ‘explanation’, or explanations are maybe sets of mechanisms in my head; not quite sets, but mechanisms layered on top of each other. And I think it would be reasonable to ask questions like: when your explanation of your model’s behavior has an ‘or’ structure where it could be doing A or B, we want to be able to detect whether or not it’s doing A or whether or not it’s doing B on any specific input. I think that’s a reasonable question to ask of explanations, and I would be excited if someone did something related to explanations that enabled you to do that thing or defined explanations in a way that made it possible to do that sort of thing, et cetera, et cetera. ## Formalizing the presumption of independence **Daniel Filan:** Cool, so perhaps speaking of explanations, I maybe want to move on to this paper that is called [Formalizing the Presumption of Independence](https://arxiv.org/abs/2211.06738) authored by [Paul Christiano](https://paulfchristiano.com/), [Eric Neyman](https://sites.google.com/view/ericneyman/), and yourself. I guess it was on arXiv in November last year? **Mark Xu:** Around then. **Daniel Filan:** Cool, so for this paper, can you tell us… For those who haven’t heard of it, what’s the deal with it? **Mark Xu:** So, I think ARC’s work right now is roughly divided into two camps. One camp is: given some sense of what a mechanism is or what an explanation is, how can we use that to do useful stuff for alignment, under which mechanistic anomaly detection is the main angle of attack there; and then the other half of ARC’s work is like, what do we mean by mechanism or what is a mechanism? What is an explanation? And that is largely what ‘Formalizing the presumption of independence’ is.
And so, ‘Formalizing the presumption of independence’ more specifically is about: So, suppose you have a list of quantities like A, B, C, D, et cetera, and you’re interested in the product of these quantities and you knew, for example, expectation of A, expectation of B, expectation of C, I claim that a reasonable thing to do is to assume that these quantities are independent from each other – to presume independence, if you will – and say that expectation of A x B x C x D, equals expectation of A x expectation of B x expectation of C x expectation of D, and this is “a reasonable best guess”. And then there’s some question of: but then sometimes in fact A is correlated with B and C is correlated with D, and so this guess is wrong; and so, we want to be able to make a best guess about how various quantities are related, presuming independence, but then if someone comes to us and is like, “Hey, actually if you notice that A is correlated with B”, we want to be able to revise our guess in light of this new knowledge. And so, ‘Formalizing the presumption of independence’ is about the quest to define these two objects that are related in this way. So we have a heuristic estimator, which we want to “presume independence”, and then we want a heuristic argument, which is an object that we feed to a heuristic estimator to be a list of ways in which the presumption of independence is false, and actually you should be tracking the correlation (or maybe higher order analogs of) between various quantities. So we can imagine that for example, by default given A x B, your heuristic estimator will estimate expectation of A x B as expectation of A x expectation of B, and then you can feed it a heuristic argument, which could be of the form of, “Hey, actually A and B are correlated,” or, “Hey, actually you should track the expectation of A x B,” which will make your heuristic estimator more exact and calculate expectation of A x B as maybe expectation of A x expectation of B plus the covariance between A and B, which is the correct answer. And then we want to be able to do this for everything, I guess is the simple answer there. So, we want to be able to say, given an arbitrary neural net, we have some default guess at the neural net, which is maybe that we just assume that everything is independent of everything else. So, we want a default guess for the expectation of the neural net on some compact representation of the data distribution, which we often call mean propagation, where you just take the means of everything, and then when you multiply two quantities, you assume that the expectation of the product is the product of the expectations and you just do this throughout the entire neural net, and this is going to be probably pretty good for randomly-initialized neural nets, because randomly-initialized neural nets in fact don’t have correlations that go one way or the other. And then you have to add the ReLUs, that’s a bit complicated. Let’s assume for now there are no ReLUs, and instead of ReLUs, we just square things and then it’s more clear what to do. Again, your guess will be very bad, because if you have an expectation 0 thing and then you square it, you’re going to be like, “Well, the expectation of the square is going to be the product of the expectations, and so it’s probably zero,” but actually it’s not probably zero because most things have some amount of variance. 
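[To make the mean propagation picture concrete, here is a minimal numerical sketch, not code from ARC: a toy two-layer network whose nonlinearity is squaring. The naive "presume everything is independent" pass gets the answer badly wrong at the squaring step, and additionally tracking variances, the kind of refinement described next, fixes it.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer "network" y = W2 @ (W1 @ x)^2, with squaring as the nonlinearity,
# evaluated on inputs x ~ N(0, I). (All of this is made up for illustration.)
d = 8
W1 = rng.normal(size=(d, d)) / np.sqrt(d)
W2 = rng.normal(size=(1, d)) / np.sqrt(d)

# Ground truth via Monte Carlo.
xs = rng.normal(size=(200_000, d))
mc = (((xs @ W1.T) ** 2) @ W2.T).mean()

# Naive mean propagation: push E[x] = 0 through layer by layer, presuming at every
# step that "expectation of a product = product of expectations".
# The squaring step then gives E[h]^2 = 0, so the final answer is 0 -- badly off.
mean_h = W1 @ np.zeros(d)
naive = float(W2 @ mean_h ** 2)

# Refined estimate: also track each coordinate's variance and use
# E[h^2] = E[h]^2 + Var(h) at the squaring step (still treating the coordinates
# of x as independent, which here they actually are).
var_h = (W1 ** 2).sum(axis=1)            # Var(h_i) = sum_j W1[i,j]^2 * Var(x_j)
refined = float(W2 @ (mean_h ** 2 + var_h))

print(f"Monte Carlo: {mc:.3f}   naive mean propagation: {naive:.3f}   refined: {refined:.3f}")
```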
And then hopefully someone can come in or ideally we’ll have a heuristic estimator that does this by default, and then someone will come in with a list of ways in which this presumption of independence is not very good. For example, the square of things is often positive or things often have variance, and then we’ll be able to add this to our estimate and we’ll be able to do a more refined version of mean propagation where you keep track of various covariances between things and higher-order analogs thereof. And then hopefully we’ll be able to do that. And I think there’s a bunch of follow-up questions which is like, “Why is this useful? Why do you think this is possible?” Maybe I’m happy to go in the order that you think is best. **Daniel Filan:** Sure. So first of all: my understanding is that this is related to the mechanistic anomaly detection paradigm where I think the thing is supposed to be that a heuristic argument is a mechanism, or is playing the role of a mechanism. Is that right? **Mark Xu:** I think it’s more accurate to think of heuristic arguments as lists of mechanisms. I think that is also not quite right and the entire paradigm is a bit not… Or the words are not good because I think there’s going to be no such thing as one mechanism as distinct from other mechanisms. I just think it’s going to be an entire layered mess of different mechanisms constantly interacting with each other, et cetera, et cetera, where it’s hard to point to any computer program and be like, “There are seven distinct mechanisms for why the program is behaving as it is.” There’s just one thing and then it splits into two things and then interacts with the third thing, and so you probably won’t have this list of distinct mechanisms. But the idea is that a heuristic argument is a set of mechanisms, or a set of mechanisms plus ways that they relate to each other, et cetera, et cetera, and the hope is that this is the kind of object that we’ll need to be able to do mechanistic anomaly detection. **Daniel Filan:** If the old argument for why your network is outputting ‘be nice’ is different from the new argument for why your network is outputting ‘be nice’, then you should be as worried as if there were different mechanisms, something like that? **Mark Xu:** Or the hope is to just use the heuristic estimator, plus heuristic argument in all of the schemes that I was describing previously. You have a heuristic argument for why your model is behaving the way it is, or your heuristic argument explains like 99.9% of the variance of your model during training. And then you ask whether or not there exists any heuristic argument that explains 99.9% of the variance of your model during training, but does not explain this particular data point off distribution, and then if that ever happens, then you’re like, “This data point was using a part of the model only responsible for one in 1,000 parts of its variance during training, that’s kind of weird. No good.” ### Heuristic arguments in physics **Daniel Filan:** This is actually interesting, because one thing that I was thinking of when thinking about this is: in physics you can often use these energy arguments. So, here’s a classic energy argument. You have some ramp, you have a ball, it’s resting on the top of the ramp, and for convenience’s sake, this ball, it’s actually just infinitely… It’s a point particle. It’s got zero radius.
So, you put it at the top of the ramp and then you let go, and then the bottom of the ramp is horizontal, and at the end the ball is traveling at the bottom of the ramp. And you ask, how fast is the ball going at the bottom of the ramp? And there’s this way you could argue it, which is like, “Well, the height between the top of the ramp and the bottom of the ramp is this much, and so initially the ball had this gravitational potential energy that it used up by moving down, and at the end, the ball just has kinetic energy, which is just its velocity.” And then you’re like, “Well, this is going to be equal because energy is conserved,” and then you back out the velocity of the ball. And the interesting thing about this argument is that it doesn’t actually rely on the shape of the ramp. The ball could do loop-de-loops in the middle and bounce around off a bunch of things and you’ll still get the same answer. And so, there are a bunch of mechanisms that correspond to this argument of energy conservation, but it seems like at some level, as long as you can have this argument that energy got conserved, that’s qualitatively different from arguments where I took the ball and then put it at the bottom of the ramp and then gave it the right amount of velocity. I don’t know, that’s not really a question. So, you have the choice to react to that or not. **Mark Xu:** So, I think we often consider examples like this from physics, and the hope is that maybe more abstractly, there’s a set of approximations that people often make when solving simple physics problems, like there’s no air resistance, things are point masses, there’s no friction, et cetera, et cetera. And the hope is that these all correspond to uses of the presumption of independence. This is simplest for the [ideal gas law](https://en.wikipedia.org/wiki/Ideal_gas_law), where the presumption of independence you’re making when you’re using the ideal gas law is the particles… their position is randomly distributed inside the volume. They don’t interact with each other. Particles are independent of each other, they have no electrostatic forces on each other, and they have no volume also. And it’s not immediately obvious to me what uses of the presumption of independence correspond to no friction. **Daniel Filan:** I think it’s not. So, if I think about no friction or point particle in the ramp example, if you add in those assumptions, those systematically change the answer… I don’t know, I guess those can only ever make your predicted velocity go down. They can never increase your predicted velocity because energy gets leaked to friction or because part of your energy is making you rotate instead of making you have a linear velocity, and that strikes me as different from other cases where if things can be correlated, well, they can be correlated in ways that make the number bigger or in ways that make the number smaller usually. Except for the case of, you square something, and in that case when you add the covariance between X and X that always makes the expected square bigger. **Mark Xu:** So, I think there are going to be cases in general where adding more stuff to your heuristic argument will predictably make the answer go one way or another, because you looked at the structure of the system in question before thinking about what heuristic arguments to add. I think there are ways of making… One presumption of independence is relative to the ball’s center of mass. The position of the various bits of the ball is independent or something. 
Whereas if balls are actually rotating or the velocity of the bits of the ball is independent. Or zero maybe, but actually balls rotate sometimes, or… I actually don’t know about the friction one. I guess balls don’t have that much friction, but you can assume that the bits of the ball don’t exert any force on the bits of the ramp or whatever. **Daniel Filan:** Or you could imagine maybe the ball rubbing against the ramp causes the ramp to push the ball even harder. **Mark Xu:** Maybe, I think that’s hopefully not going to be a valid use of the presumption of independence, but I’m not quite sure. **Daniel Filan:** Or I guess it could slow the ball down or it could speed the ball up, like - **Mark Xu:** If you were truly naive looking at physics, you’d be like, “Who knows what’s going to happen when you add this interaction term?” **Daniel Filan:** In fact, modeling friction is actually kind of tricky. You get told ‘ah, friction!’ but wait, what is friction? **Mark Xu:** I think these are the kinds of things where heuristic estimators are hopefully quite useful. There’s some intuitive question or problem which is: there’s a ball at the top of the ramp, what is its velocity going to be at the bottom of the ramp? And oftentimes the actual answer is, I don’t know, there’s a trillion factors at play. However, you can make this energy argument that’s a pretty good guess in a wide range of circumstances, and you’re never going to be able to prove basically that the velocity of the ball is going to be a certain thing at the bottom. I guess proof is complicated when you’re talking about the physical world, but if you suppose that you have a computer simulation with all the little particles interacting with each other, basically the only way that you’re going to prove that the ball ends up at the bottom with a certain velocity is to exhaustively run the entire simulation forwards and check at the end what its velocity actually is. However, there’s this kind of reasoning that seems intuitively very valid, which is: assume the air doesn’t matter, assume the friction doesn’t matter, just calculate the energy and do that argument, that we want to be able to capture and formalize in this concept of a heuristic estimator, and then the heuristic argument will correspond to ‘consider the energy’. Maybe if there’s lots of wind, then in fact the air resistance on the ball is non-negligible, and then you can add something to your heuristic argument, which is ‘also you’ve got to consider the fact that the wind pushes the ball a little bit’ and then your estimate will get more accurate over time, and hopefully it will be a big, good time. I don’t know, hopefully it’ll work. **Daniel Filan:** And interestingly, ‘the wind pushes the ball’ is saying that the interactions between the wind and the ball are not zero mean, or there’s some covariance or something. **Mark Xu:** One should notice these things to get a better estimate. And so, hopefully we’ll be able to do this for everything, hopefully we’ll be able to do it for all the physics examples, and there’s a set of math examples that I’m happy to talk about, and then obviously the application we truly care about is the neural net application. ### Difficult domains for heuristic arguments **Daniel Filan:** Speaking of examples: so in the paper you have a bunch of examples from number theory. 
There are also some other examples, but to be honest, it seems like the main victory of these sorts of arguments is in number theory, where you can just imagine that primes are randomly distributed and make a few [assumptions] about their distribution and just deduce a bunch of true facts. So, what I’m wondering is outside number theory… So, you give some examples where heuristic arguments do work: are there any cases where you’ve had the most difficulty in applying heuristic arguments? **Mark Xu:** Yeah, so I haven’t personally spent that much time looking for these examples. So, we are quite interested in cases where heuristic arguments give “the wrong answer”. I think this is a bit complicated, and so the reason it’s a bit complicated is: there’s this nature of a heuristic argument to be open to revision; and so, obviously you can make heuristic arguments that give the wrong answer. Suppose A and B are correlated, and I just don’t tell you to notice the correlation, then you’re just going to be really wrong. And so, it’s kind of unclear what is meant by a heuristic argument that gives the wrong answer. And so we often search… There’s two sorts of things that we’re interested in. One sort of thing is an example of a thing that’s true or an example of an improbable thing that happens way more likely than you would’ve naively expected, that seems to happen for “no reason”, such that we can’t find a heuristic argument in some sense. So, obviously you can find a heuristic argument if you just exhaustively compute everything, and so there’s some notion of length here where we can’t find a compact heuristic argument that seems to describe “why this thing happens the way it does”, where we conjecture that it’s always possible to explain a phenomenon in roughly the same length as the phenomenon itself. And then another thing that we search for often is we want to be able to use heuristic arguments for this sort of mechanism distinction problem. And so, we search for examples of cases where heuristic arguments have the structure where it’s like: A happens, therefore B happens, and sometimes C happens, therefore B happens, but you can’t distinguish between A and C efficiently. So, I think the closest example we have of something that seems kind of surprising, but we don’t quite have a heuristic argument, is… So, you have Fermat’s Last Theorem, which states that a^N + b^N = c^N has no integer solutions for N greater than two. **Daniel Filan:** Non-trivial integer solutions. **Mark Xu:** Non-trivial, yes. **Daniel Filan:** Zero and one. **Mark Xu:** Well, one doesn’t quite work because… **Daniel Filan:** 0^N + 1^N = 1^N. **Mark Xu:** Oh, sure. And so for the N = 4 and above case, the heuristic argument is quite simple where you can just assume that the fourth powers are distributed randomly within their respective intervals or whatever; there’s like N fourth powers between 1 and N^4, and then you can just assume that these fourth powers are distributed independently and then you correctly deduce that the expected number of solutions to this equation is vanishingly small.
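[A small numerical version of this counting argument. It assumes, purely for the sketch, that an integer near m behaves like an N-th power with probability about (1/N) * m^(1/N - 1), and adds up the resulting expected number of coincidences a^N + b^N = c^N.]

```python
import numpy as np

def expected_solution_count(N, max_ab):
    """Heuristic expected number of solutions to a^N + b^N = c^N with
    1 <= a <= b <= max_ab, under the (assumed, for this sketch) model that an
    integer near m is an N-th power with probability about (1/N) * m**(1/N - 1)."""
    total = 0.0
    for a in range(1, max_ab + 1):
        b = np.arange(a, max_ab + 1, dtype=float)
        m = float(a) ** N + b ** N
        total += ((1.0 / N) * m ** (1.0 / N - 1.0)).sum()
    return total

for N in (3, 4, 5):
    print(N, [round(expected_solution_count(N, M), 3) for M in (10**2, 10**3, 10**4)])

# Expected pattern: for N = 4 and N = 5 the numbers settle down to a small constant
# as the range grows (heuristically, finitely many and probably zero nontrivial
# solutions); for N = 3 they keep growing slowly, matching the "infinite expected
# solutions" situation discussed next.
```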
I’m not that familiar with the details here, but the N = 3 case is interesting, and in fact, the heuristic argument for the N = 3 case is basically the same as the proof for the N = 3 case, because the way the numerics work out for the N = 3 case is if you do this naive assumption, there’s going to be infinite expected solutions, and then there’s a complex structure in the equation itself that suggests that all the solutions are correlated with each other. So, either there are going to be a lot of solutions or there are going to be basically no solutions, and then you check and in fact, there are basically zero solutions, and the heuristic argument for this fact is basically the proof of this fact, which is quite long. But the problem is, you also heuristically expect the solutions to be very sparse, and so you’re not that surprised that you haven’t found a solution until you’ve checked a lot of examples or a lot of numbers, and so the number of things that you have to check is large, such that you can fit the heuristic argument in… So you’re not that surprised. I don’t know if that made that much sense. **Daniel Filan:** That makes some sense. So I don’t know, it’s like this case where heuristic arguments… it’s like they’re not winning that much over proof. **Mark Xu:** It almost didn’t fit, but the phenomenon you were supposed to be surprised by was barely not surprising enough such that you had enough room to fit the argument in before you got too surprised, such that you would expect there to be an argument or something. I don’t know, it’s kind of unclear whether or not that should be evidence in favor of our claim or against our claim that there’s always heuristic arguments, because in fact there is a heuristic argument in this case, but it maybe barely wasn’t a heuristic argument, and so it’s kind of confusing. ### Why not maximum entropy? **Daniel Filan:** One thing that I found myself wondering was… It seems like you have this structure where you have this list of facts, and you’re wanting to get a heuristic argument given this list of facts. And in the paper you describe this cumulant propagation thing, which basically just involves keeping track of expectations and covariances and some higher order things. You pick some level that you want to keep track of and you do that. But then you mention some failures of this approach and some ways you can fix it. One thing that struck me as the obvious first approach is just having a [maximum entropy distribution](https://en.wikipedia.org/wiki/Maximum_entropy_probability_distribution) given some facts. So for people who don’t know, this is this idea of, you have some probability distribution over some set of interest and you don’t know what your probability distribution is, but you know that it’s got to have this average and this correlation between these two things and this covariance. And basically you say, “Well, given that I only know these things, I want to maximize the entropy of my probability distribution.” So, my probability distribution has to satisfy these constraints, but I’ve got to be as uncertain as I can possibly be given those constraints. And so, this is a big thing in Bayesian reasoning, and intuitively it seems like it would fit this thing. Every time you hear a new fact, you can add a constraint to your… And now get a new maximum entropy distribution. It always gives answers.
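[For concreteness, the textbook version of the construction Daniel is describing: the maximum entropy distribution over a die's faces given only its mean. This example is standard and not from the conversation.]

```python
import numpy as np
from scipy.optimize import brentq

# Maximum entropy distribution over die faces {1,...,6}, where the only "fact"
# (constraint) we impose is the mean. The solution has the exponential-family form
# p_i proportional to exp(lam * i); we solve for lam so the mean comes out right.
values = np.arange(1, 7)
target_mean = 4.5

def mean_given(lam):
    w = np.exp(lam * values)
    return (values * w).sum() / w.sum()

lam = brentq(lambda l: mean_given(l) - target_mean, -5.0, 5.0)
p = np.exp(lam * values)
p /= p.sum()
print(np.round(p, 3), "mean =", round(float((values * p).sum()), 3))
```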
I remember in discussion of cumulant propagation, you mentioned some cases where maximum entropy would do better than it does, although I don’t remember the exact reason why. I think it was something to do with it calculating the square of some quantity to be negative in expectation. So, why not just do maximum entropy? **Mark Xu:** So, historically we’ve considered a lot of maximum entropy-type approaches and all of them were a bit unsatisfying. So, I think there’s three-ish reasons that we think maximum entropy is unsatisfying in various ways. I think the first one is perhaps most compelling, but most naive, and I’ll give that example first. So suppose that you have A and B, and then you’re taking the AND of A and B, so suppose we have C equals A and B. **Daniel Filan:** Yep. **Mark Xu:** And suppose that we know that C is 50/50, then the maximum entropy distribution on A and B given C is 50/50 both… Sorry, so if we condition on C being 50/50, the maximum entropy distribution over A and B has A and B correlated a little bit, but also it raises the probability of A and it raises the probability of B. **Daniel Filan:** Relative to… **Mark Xu:** Relative to 50/50. **Daniel Filan:** Okay. **Mark Xu:** And there’s various reasons why this is unsatisfying. The main reason, I guess, it’s unsatisfying is: suppose you have a circuit that is computing some stuff, and then you just add a bunch of auxiliary gates where you have some wires, and then you pick two wires and you’re like, “I want these wires to be correlated,” then you can just add the AND of those wires a bunch of times to your circuit, and then if you want the maximum entropy distribution over all the wires, it will want to push the AND of those two wires towards 50/50 because that’s the maximum entropy probability for a wire to be. And so, it’ll induce this fake correlation between the wires while you’re trying to do that. **Daniel Filan:** Sorry, just going back to the A, B, C which is A and B thing: is the idea that the thing you wanted was for A and B to be uncorrelated, but have high enough probability that when you combine them to get C, the probability of C being true is 50/50? **Mark Xu:** I think I actually explained the example wrong. So suppose you just have: your circuit is just two wires, it takes two inputs and it just outputs both of them, then the maximum entropy distribution is 50/50 over both wires, that seems correct. And then suppose you just randomly added an AND gate, then the maximum entropy distribution would– **Daniel Filan:** So, when you say randomly added an AND gate, you mean like you have your two inputs and then you put an AND gate that takes your two inputs, and the output of that gate is your new output? **Mark Xu:** No, you don’t connect it to the output at all. You still output A and B, but you’ve just added an AND gate as some auxiliary computation that your circuit does for no reason. **Daniel Filan:** Okay. **Mark Xu:** Then, if you want to take the maximum entropy distribution over all the things your circuit is doing, then adding this gate will artificially raise the probability of A and B and also induce this correlation between A and B. **Daniel Filan:** Why? **Mark Xu:** So naively it won’t because your maximum entropy distribution will just be like A, B, C, all 50/50 and independent, and then suppose someone came to you and was like, “Actually, you need to respect the logical structure of this circuit. C can only be on if A and B are both on. 
The probability of C being on has to be the probability of A being on times the probability of B being on plus the interaction term,” then you’ll get this induced correlation between A and B plus A and B will be raised to 0.6 or whatever. **Daniel Filan:** So, is the idea that you have this intermediate thing in your circuit and you have to be maximum entropy also over this intermediate thing, and so even though you didn’t really care about the intermediate thing, it didn’t play any role of interest in anything else, because you’re being maximum entropy over that, you’ve got to mess around with the distributions over other variables? **Mark Xu:** Yeah, that’s right, and I think of this as a special case of a more general phenomenon where there’s this computation that you’re doing throughout the circuit, and you can pretend there’s some notion of time in your circuit, or gates that only take input wires as inputs are at time 1, and then if a gate takes as inputs gates that themselves take inputs as input, then that’s time 2, et cetera, et cetera, where you’re going down your circuit and computing things. And maximum entropy distributions have this property where things that you compute in “the future” go back in time and affect the past. **Daniel Filan:** In the sense of once you know things about the future, that gives you constraints on the past? **Mark Xu:** Yeah, and if you want to be maximum entropy, it causes you to mess with your probabilities of the input wires: if you learn that you’re computing lots of ANDs then you want to raise the probability of your input wires, et cetera, et cetera. And I think this is in general going to be a kind of undesirable property, and it’s a bit complicated to talk about why it’s undesirable. I guess the simplest reason is someone can mess with you by putting a bunch of stuff at the end of your circuit and then once you get to the end of your circuit, you’re forced to go back and revise your initial guesses. Whereas it seems like intuitively, heuristic arguments want to be more deductive in nature, where you start with some premises and then you deduce facts from those premises, and nothing anyone can say after you’ve started with your premises or nothing anyone can add to your circuit after you deduce facts from your premises can cause you to go back in time and be like, “Actually, my premises were wrong because…” I don’t know, it doesn’t really make that much sense that if you assume that A is 50/50 and B is 50/50 and then you’re like, “Ah, I’m computing C, which is A and B, I guess A and B are like 60/40 now and they’re also slightly correlated.” That kind of reasoning seems intuitively invalid, and it’s the kind of reasoning that various forms of maximum entropy force you to take. And you can try to do various other stuff to eliminate this problem which we’ve explored historically, and I think none of them seem that satisfying and fix the problem that well or something. **Daniel Filan:** Right, that’s a useful response. You just end up with these useless variables that you shouldn’t have cared about the entropy of, but you do now. So, this relates to this question I have about the adversarial robustness of these heuristic estimators. **Mark Xu:** So, I said originally that there were three reasons why we didn’t use maximum entropy. I’m happy to talk about the other two also, but maybe you don’t want to. **Daniel Filan:** I think I’m sold by reason 1. If you want to talk about reason 2 and reason 3, I’m all ears. 
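[A toy numerical version of the first reason above, the auxiliary AND gate. The objective used here, maximizing the sum of per-wire entropies while respecting the circuit's logic, is an assumption made for the sketch; the exact numbers depend on the formulation, but the qualitative effect is the one described: caring about the entropy of a wire you never wanted distorts the distribution over the wires you did care about.]

```python
import numpy as np

def H(p):
    """Binary entropy, safe at 0 and 1."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Joint distribution over the two input wires (A, B); the auxiliary wire is C = A AND B.
# Grid search over (p11, p10, p01); p00 is whatever probability is left over.
step = 0.01
grid = np.arange(0.0, 1.0 + step / 2, step)
p11, p10, p01 = np.meshgrid(grid, grid, grid, indexing="ij")
valid = p11 + p10 + p01 <= 1.0 + 1e-9

pA = p11 + p10   # P(A = 1)
pB = p11 + p01   # P(B = 1)
pC = p11         # P(C = 1) = P(A = 1 and B = 1)

# Assumed objective for this sketch: maximize the sum of per-wire entropies.
# With only A and B in the circuit, the optimum is P(A) = P(B) = 0.5 with A, B independent.
score = np.where(valid, H(pA) + H(pB) + H(pC), -np.inf)

i = np.unravel_index(np.argmax(score), score.shape)
cov = p11[i] - pA[i] * pB[i]
print(f"P(A)={pA[i]:.2f}  P(B)={pB[i]:.2f}  P(C)={pC[i]:.2f}  Cov(A,B)={cov:.2f}")
# Adding the AND wire's entropy to the objective drags P(C) toward 0.5, which is only
# possible by making A and B strongly (and spuriously) correlated.
```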
**Mark Xu:** Well, so I think they are somewhat related to this more general problem of why heuristic arguments are going to be possible to begin with. So, I think reason 2 is going to be something like: we can’t actually maintain full probability distributions over things. So, there’s an important thing a probability distribution has to be, which is consistent with various facts or something. It has to be a real probability distribution. It has to describe actual things that are realizable. And I think heuristic arguments or your heuristic estimator can’t be real in that sense. Your distribution over possible states your circuit can take can’t be a distribution over actual possible states your circuit can take. So, one reason that we want this is some of our mechanistic anomaly detection applications have us believing things like A is on and B is on, but ‘A and B’ is not on, because we want to be able to drop out mechanisms, and you just straight out can’t do that if you have an actual probability distribution over actual outcomes, because A being on and B being on must imply that ‘A and B’ is on. And the second reason is in terms of just computational complexity, it’s very likely just going to be impossible to actually have a distribution over realizable or physically possible or logically possible states, because the act of determining whether or not something is logically possible is very hard. I think Eric Neyman has a couple of neat examples where it’s just various circuits where it’s [#P-hard](https://en.wikipedia.org/wiki/%E2%99%AFP) to determine or to have a distribution over actual states that are satisfied by that circuit, although I’m not familiar with the details enough to talk about them here. And then there was some third reason which I’m forgetting. ### Adversarial robustness for heuristic arguments **Daniel Filan:** Cool, so one thing that I’m wondering about is: you mention in the paper that it’s difficult to get adversarial robustness. There are various reasons you think that you should be able to be fooled by people adversarially picking bad arguments. So often in the AI realm, when I hear that something is vulnerable to adversaries, I’m really worried because I’m like, “Well, what if my AI is an adversary?” I’m wondering, do you think that this worry applies in the heuristic arguments case? **Mark Xu:** Yeah, so first of all, I guess I’ll talk about situations in which I think it’s basically impossible to be adversarially robust. So, suppose you have a hash function that gives -1 or 1, and you’re estimating the sum of hash of N over N^2, and suppose your heuristic argument can just point out the values of various hash terms. So your naive heuristic estimator, presumption of independence, is 50/50 between one and negative one, so the estimate is zero; and then your heuristic argument consists of pointing out the values of various hash terms. In this situation, there’s just always going to exist arguments that continue to drive you up or continue to drive you down. And someone’s always going to be like, “Hey, hash of 2 is 1, hash of 7 is 1, hash of 9 is 1” and you’re just going to be driven up and up. It’s possible it should be like hash of N over N^1.5 or something to make it bad. 
And so, you’re in this awkward spot where for any quantity with sufficient noise, even if you expect very strongly the noise to average out to zero, there will exist a heuristic argument that only points out the positive values of the noise and will drive you up, or there’s a heuristic argument that only points out the negative values of the noise and will drive you down. And so, I think this is naively quite a big problem for the entire heuristic argument paradigm, if you’re ever relying on something like someone doing an unrestricted search for heuristic arguments of various forms and inputting them into your heuristic estimator. So, there’s a couple ways out. First, I’ll talk about the simplest way out, that I think ends up ultimately not working, but is worth talking about anyway. I think the simplest thing you can do is just have two people searching for heuristic arguments, one person driving them up and one person driving them down. So, hopefully if you have one search process searching for heuristic arguments to drive you up and one to drive you down, then maybe we don’t systematically expect the estimate to be biased in one direction. I think there’s a problem with this, though. For example, one obvious thing to do is to give both players equal amounts of time to find heuristic arguments or something. However, if you consider the previous case with hash of N, suppose instead of being 50/50 1 and -1, hash of N is actually 1/4 probability of 1, 3/4 probability of -1, then in that case the player finding heuristic arguments to go down has a 3x advantage over the player finding heuristic arguments to go up. And so, then you’re no good because you can have these quantities where it’s far easier to find heuristic arguments to drive you up than to drive you down or vice versa. **Daniel Filan:** Although in that case you really should be below zero in expectation, right? **Mark Xu:** Yeah. **Daniel Filan:** I think there’s some example in [the paper](https://arxiv.org/abs/2211.06738) (that people can read) where it really checks out that debate is not going to work. [Examples of this sort are in Appendix E] **Mark Xu:** So, you have examples that make you very sad, and so instead of doing debate, we want to do this other thing that’s going to be kind of interesting. So, if you imagine searching for heuristic arguments in the setting where you have hash of N over N^1.5 or whatever, the way that you’re going to do this is you’re going to first check hash of 1, then you’re going to check hash of 2, and then you’re going to check hash of 3, et cetera, et cetera. And suppose that you wanted to drive the estimate of the quantity up, then you would check hash of 1, and suppose it’s positive, you’re like, “Great, let’s include Term 1” and then you check hash of 2 and it’s negative and you’re like, “Let’s not include Term 2”, and then you check hash of 3 and it’s also negative and you’re like, “Let’s not include that”, check hash of 4, it is positive, so then you include that. And you’re carefully going through these terms and eliminating the ones that are negative. However, the interesting thing is if you imagine being the person that’s doing this, there in some sense exists a broader heuristic argument that you first considered and then pruned down, which is: you know the values of all these four terms, and then you selected only the terms that were positive. 
And so, there’s some sense in which, if instead of including only the heuristic argument that you produce at the end, you included the entire search process for the heuristic argument and also I guess the heuristic argument you included at the end, then hopefully the heuristic estimator is forced to be like, “Well, we actually looked at all four of these terms and then we discarded two of them, but the discarding part isn’t relevant because I already know all four of these logical facts, so I’m forced to include all four of these terms.” And so, then the hope is not that there doesn’t exist heuristic arguments that are misleading, but there’s no way of searching for heuristic arguments that’s systematically misleading if you don’t already know the values of the heuristic arguments. And this is related to the presumption of independence being “correct on average” or something. And so if you’re presuming independence in a way that’s correct on average, then on average people can only… When searching for arguments and just looking at stuff, they can only lead you closer to the truth hopefully. **Daniel Filan:** Sure. So the idea is that if in some sense we can… Well, we should be able to have some broad search process over arguments that essentially isn’t adversarial and use that, sounds like is the upshot. **Mark Xu:** Well, it’s: even if the search is adversarial, if you’re conditioning on everything that the search process knows while it’s searching, then it can’t be adversarial, because all it can do is look at values, and it doesn’t know what the value is before it looks, and if you’ve actually presumed independence correctly, then on average everything it looks at has an equal chance of driving you up or down. **Daniel Filan:** I guess I have this concern that maybe the point of mechanistic anomaly detection was to help me know how to elicit this latent knowledge or something. And so, I might’ve been imagining using some AI to come up with arguments and tell me that the thing was bad. And if I’m using that AI to help me get mechanistic anomaly detection and I need mechanistic anomaly detection to help me get that AI be good, then that’s some sort of recursion. It might be the bad type of recursion. It might be the good type of recursion where you can bootstrap up, I guess. **Mark Xu:** I think it’s not very clear to me how this is ultimately going to shake out. I think this kind of adversarial robustness, or how we deal with adversarial robustness, is important. I think there are other ways out besides the way I described. I think that’s intuitively what we want to do: we want to just condition your heuristic estimator on everything, even the search for the arguments and also, I don’t know, just all the things. And hopefully if you don’t let any adversarial selection into your heuristic estimator, then it can’t be adversarially selected against or something. So, there’s a thing of: if you’re a proper Bayesian and someone’s adversarially selecting evidence to show you, the thing that’s supposed to happen is eventually you’re just like, “And now I think I’m being adversarially selected evidence to be shown, and I know this fact and so I’m just being a Bayesian”, and then you just update on that fact, and you just aren’t wrong on average if you’re correct about the kinds of ways in which you’re being adversarially selected against. 
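[A tiny numerical illustration of the selection effect and of the proposed fix. The SHA-256-based "hash" and the exponent are stand-ins invented for the sketch.]

```python
import hashlib
import numpy as np

def h(n):
    """A stand-in "hash" mapping n to +1 or -1 (made up for this sketch)."""
    return 1 if hashlib.sha256(str(n).encode()).digest()[0] % 2 == 0 else -1

N = 2000
terms = np.array([h(n) / n ** 1.5 for n in range(1, N + 1)])

# Presumption-of-independence estimate of sum_n hash(n)/n^1.5: each hash is 50/50, so ~0.
naive = 0.0

# Adversarial "argument": look at every term, but only point out the positive ones.
# An estimator that updates only on what it is shown gets dragged well above the truth.
adversarial = naive + terms[terms > 0].sum()

# The fix sketched above: condition on everything the search process looked at,
# not just on what it chose to report. Then the selective reporting buys nothing.
conditioned = naive + terms.sum()

print(f"true sum = {terms.sum():.3f}   shown only positives = {adversarial:.3f}   "
      f"conditioned on the whole search = {conditioned:.3f}")
```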
**Daniel Filan:** Although, it does require you to… I don’t know, I feel like this is a bit of Bayesian cope because in order to do that… When you’re a Bayesian, you’re supposed to be like, “Oh, I’m just going to get observations and update on observations.” But now you’re like, “Well, now my new observation is that I’m observing that observation and maybe the observation process is weird,” but then I feel like, you’ve got this layer and after that layer you just trust it and before that layer you worry about things being screened or whatever. And I’m like, “Where do you put the layer?” You had to enlarge your model to make this be… I don’t know, it seems a bit- **Mark Xu:** I think being a Bayesian is hard, but the thing we’re going to try to do is just look over the shoulder of the person selecting the evidence to show you and then see everything that they see, and then hopefully they’re not misled on average because they’re just looking at stuff. **Daniel Filan:** In that case, it’s better than the Bayesian case or I don’t have the same complaints as I have about the Bayesian case. ### Other approaches to defining mechanisms **Daniel Filan:** So stepping back a bit, there are a few different ways that somebody might have tried to concretize a mechanism. So, there’s this one approach where we have something called a heuristic argument, and we’re going to work to try and figure out how to formalize that. I guess we haven’t explicitly said this yet maybe, but in the paper, it’s still an open problem, right? **Mark Xu:** Yeah, or we have- **Daniel Filan:** There are avenues. **Mark Xu:** I guess I would say: we have some heuristic estimator for some quantities that are deficient in various ways, and we’re trying to figure out what’s up and get better ones for other quantities and then unify them and hopefully all the dominoes will fall over once we quest sufficiently deeply. **Daniel Filan:** Nice, so there’s this approach. There’s also some people working on various causal abstractions. So [causal scrubbing](https://www.alignmentforum.org/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-a-method-for-rigorously-testing) is work by [Redwood Research](https://www.redwoodresearch.org/), there’s [other causal abstraction work](https://arxiv.org/abs/2106.02997) by other groups. I’m wondering: out of the landscape of all the ways in which one might have concretized what a mechanism is, how promising do you think heuristic arguments is? **Mark Xu:** So, I’m tempted to say something like: if the causal abstraction stuff works out, we’ll just call that a heuristic argument and go with that sort of thing. ‘Heuristic arguments’ is intended to be this umbrella term for this sort of machine-checkable, proof-like thing that can apply to arbitrary computational objects, and I think that’s what I’m tempted to say. I think in fact, we do have various desiderata that we think are important for our particular approach to heuristic arguments, and we want them to be proof-like in various ways that e.g. causal abstractions often aren’t, or we want them to be fully mechanistic and have zero empirical parts in them. Whereas often, various causal abstraction approaches allow you to measure some stuff empirically and then do deduction with the rest of it. I very much think of us as going for the throat on this sort of heuristic argument stuff. 
There’s some question you might ask, which is like, “Why do you expect heuristic arguments to apply to neural nets?” And the answer to that is, well, neural nets are just a computational object and we want to have heuristic estimators that work for literally all computational objects, and then the reason why it applies to neural nets in particular is just because neural nets are a thing that you can formally define. Whereas there’s a lot of more restricted approaches to explanations that are like, “Well, we only actually care about the neural net thing, and so we ought to define it in particular about the neural net thing”. So maybe in summary, we’re trying to be in some sense maximally ambitious with respect to the existence of heuristic arguments. The research plan: progress and next steps ------------------------------------------ **Daniel Filan:** I got you. So, I’d like to move on a little bit to the overall plan. So I think in some of these early posts, the idea is step 1: formalize heuristic arguments, step 2: solve mechanistic anomaly detection given the formalism of heuristic arguments, and step 3: find a way of finding heuristic arguments. And really, I guess these three things could be run in parallel or not necessarily in order. I’m wondering, the last batch of publications was in late 2022. How’s the plan going since then? **Mark Xu:** So, I think Quarter 1 of 2023 was a bit rough and we didn’t get that much done. I think it’s sped up since then. So, I can talk about stuff that is… So, things that I think are good that have happened: we have a pretty good formal problem statement about what it means to be a heuristic estimator for a class of functions or a class of circuits that we call bilinear circuits, where just all your gates are squares. So, we have that desideratum and we can talk about what it means to be a good heuristic estimator with respect to that class of circuits in a way that things we think are good satisfy… Or some stuff we have satisfies and some other stuff we have doesn’t satisfy. So, hopefully that’s just a fully formal problem. We have a better class of heuristic estimators than cumulant propagation - we have an upgrade on cumulant propagation that we call reductant propagation, which is just slightly different: instead of keeping track of cumulants, you keep track of a different thing called a reductant, and it’s slightly better in various ill-defined ways, but it’s more accurate empirically on most distributions of circuits we’ve tried, et cetera, et cetera, and we feel better about it. We have a clearer sense of what it means to do mechanistic anomaly detection, and how that needs to go, and what the hard cases are going to be for doing mechanistic anomaly detection in terms of being unable to distinguish between mechanisms. I noticed I didn’t really answer your original question, which is like, “How’s it going?” I think it’s going par or something. Maybe Q1 was slightly below par and we’re doing slightly above par in Q2, and so it’s on average par or something. **Daniel Filan:** So, par in the sense of you’ve gotten a better sense of some special cases, not yet knocked totally out of the park, but- **Mark Xu:** Or if you told me that this is how much progress we would’ve made at the beginning of 2023, I would be like, “Okay, that seems good enough to be worth doing: let’s do that, this is how we should spend our time” or something. **Daniel Filan:** Cool. 
I’m wondering: given that progress - so out of the steps of formalize heuristic arguments, solve mechanistic anomaly detection given that, and find heuristic arguments, how optimistic are you about the steps? Which ones seem most difficult? **Mark Xu:** So, I think the thing that’s most difficult is formalizing heuristic arguments in a way that makes them findable. Here’s a formalization of heuristic arguments: they’re just proofs. And so, there’s some sense in which heuristic arguments being compact is intrinsic to the nature of what it is to be a heuristic argument. I think doing useful stuff with heuristic arguments is pretty unlikely to be the place where we fall down. I think it’s possible that heuristic arguments get formalized and then we’re like, “Darn, we can’t do any of the things that we thought we were going to be able to do with heuristic arguments, because it turns out that they’re very different than what we thought they were going to be like.” I think that would be pretty good though, because we would have heuristic arguments and we would in some sense be done with that part. It would be very surprising to me if they were not useful in various ways. I don’t know if that was a complete answer. **Daniel Filan:** I think that worked. I’m wondering if there’s been any experimental work on trying out mechanistic anomaly detection things beyond the… So, I guess you’ve mentioned there are various interpretability things that you think won’t scale. Is there any promising experimental work you’re aware of? **Mark Xu:** So, I am not very aware of experimental work in general, but I think Redwood is currently working on what they’re calling ELK benchmarks where they’re trying to do this sort of mechanism distinction on toy problems like function evaluation. I don’t know how that’s going because I am not up-to-date on details. **Daniel Filan:** Fair enough. **Mark Xu:** I think ARC employees often write code to check whether or not heuristic estimators do some things, or check how empirically accurate they are, or find counter examples by random search or something. Probably you don’t want to call that experimental work, because we’re just checking how accurate our heuristic estimators for permanents of matrices are, or whatever. So, I think the short answer is I think Redwood’s doing some stuff that I don’t know that much about, and I’m not really aware of other stuff being done, but that is probably mostly because I’m not that aware of other stuff and not because it’s not being done, although it probably also isn’t being done in the way that I would want it to be done. **Daniel Filan:** I got you, so it’s about time for us to be wrapping up. Before I close up, I’m wondering if there’s any question that you think I should have asked. **Mark Xu:** I think people often ask for probabilities that these sorts of things all work out, to which I often say: 1/7 that everything works out roughly the way that we think it’s going to work out and is super great within five years-ish - maybe not quite five years now because I’ve been saying 1/7 over five years for more than a few months. So, maybe it’s like four and a half years now. But other than that, I don’t think so. Following ARC’s research ------------------------ **Daniel Filan:** Fair enough. So finally, if people are interested in following your research or if we have bright minds who are perhaps interested in contributing, what should they do? 
**Mark Xu:** So ARC posts [blog posts](https://www.alignment.org/blog/) and various announcements on our website, [alignment.org](https://www.alignment.org/). And we’re also currently hiring, so you can go to [alignment.org](https://www.alignment.org/) and click the hiring button and then be directed to [our hiring page](https://www.alignment.org/hiring/). **Daniel Filan:** Great. Well, thanks for talking to me today. **Mark Xu:** You’re welcome. Thanks for having me on. **Daniel Filan:** This episode is edited by Jack Garrett, and Amber Dawn Ace helped with the transcription. The opening and closing themes are also by Jack Garrett. Financial support for this episode was provided by the [Long-Term Future Fund](https://funds.effectivealtruism.org/funds/far-future), along with [patrons](https://www.patreon.com/axrpodcast) such as Ben Weinstein-Raun, Tor Barstad, and Alexey Malafeev. To read a transcript of this episode or to learn how to [support the podcast yourself](https://axrp.net/supporting-the-podcast/), you can visit [axrp.net](https://axrp.net/). Finally, if you have any feedback about this podcast, you can email me at [feedback@axrp.net](mailto:feedback@axrp.net).
d7721b0e-a68b-438b-9ef4-cb47d9e18475
trentmkelly/LessWrong-43k
LessWrong
LINK: 'Philosophy Bites' episode on the Singularity In May 2010, a leading philosophy podcast called Philosophy Bites did a show on the Singularity.
5ad2a3e4-966b-4308-aa67-1c9613c3a40f
StampyAI/alignment-research-dataset/aisafety.info
AI Safety Info
How might AGI kill people? If we pose a serious threat, it could hack our weapons systems and turn them against us. Future militaries are much more vulnerable to this due to rapidly progressing autonomous weapons. There’s also the option of creating bioweapons and distributing them to the most unstable groups you can find, tricking nations into WW3, or dozens of other things that an agent many times smarter than any human (with the ability to develop arbitrary technology, hack things including communications, and manipulate people) could think up. More can be found [here](https://www.lesswrong.com/posts/pxGYZs2zHJNHvWY5b/request-for-concrete-ai-takeover-mechanisms). If we are not a threat, in the course of pursuing its goals it may consume vital resources that humans need (e.g. using land for solar panels instead of farm crops). This video goes into more detail: [Why Would AI Want to do Bad Things? Instrumental Convergence](https://www.youtube.com/embed/ZeecOKBus3Q)
05062786-ca63-47a3-8026-aaecbd8af5dd
trentmkelly/LessWrong-43k
LessWrong
On Tables and Happiness I'm writing this post because I want to have a succinct and shareable piece summarising my non-rigorous views on happiness. During the past decade, I entered adulthood. With this transition came more choices, as my parents and teachers no longer constrained me. More choices came with more responsibility and choice paralysis. Should I stay here or move there? Should I focus on acquiring knowledge or furthering my social life? What actions will effectively improve my happiness? The model I came up with is a four-legged table. Imagine that a flat tabletop represents your happiness. You want to maintain a flat tabletop. If the four legs are out of balance, the tabletop will become tilted, and so will you. The tabletop of happiness stands on these four pillars: Health Knowledge Wealth Relationships This model is helpful because it's easy to get addicted to some pillars and neglect others. I know many people (including myself, at points in my life) who became obsessed with only one of the four cornerstones and neglected the rest. E.g. we all know the software engineer who is making piles of cash but whose relationships are falling apart, or the socialite who is struggling to pay rent because they're addicted to sacrificing themselves for those around them. Let's now look at each of the four cornerstones and identify practical ways to improve them. There are several low hanging fruits under each pillar. However, as you advance beyond the low hanging fruits, the game becomes more challenging as returns diminish. Health * There are cheap and straightforward habits that have an outsized benefit in the long run. It's worth adopting such habits as brushing and flossing daily, exercising daily, etc. * Conversely, some habits have outsized harm in the long run. For example, drunk driving, phone use while driving, consuming harmful substances, etc. Don't do these things. * Health is complex and complicated. You can easily get lost in a rabbit hole researching any he
ca96bb54-5ab3-4cfd-936c-5db6221f364c
trentmkelly/LessWrong-43k
LessWrong
Fear or fear? (A Meteuphoric post on distinguishing feelings from considered positions.)
bb190230-d1db-4000-b08b-a3d99df26ea3
trentmkelly/LessWrong-43k
LessWrong
AI labs can boost external safety research Frontier AI labs can boost external safety researchers by * Sharing better access to powerful models (early access, fine-tuning, helpful-only,[1] filters/moderation-off, logprobs, activations)[2] * Releasing research artifacts besides models * Publishing (transparent, reproducible) safety research * Giving API credits * Mentoring ---------------------------------------- Here's what the labs have done (besides just publishing safety research[3]). Anthropic: * Releasing resources including RLHF and red-teaming datasets, an interpretability notebook, and model organisms prompts and transcripts * Supporting creation of safety-relevant evals and tools for evals * Giving free API access to some OP grantees and giving some researchers $1K (or sometimes more) in API credits * (Giving deep model access to Ryan Greenblatt) * (External mentoring, in particular via MATS) * [No fine-tuning or deep access, except for Ryan] Google DeepMind: * Publishing their model evals for dangerous capabilities and sharing resources for reproducing some of them * Releasing Gemma SAEs * Releasing Gemma weights * (External mentoring, in particular via MATS) * [No fine-tuning or deep access to frontier models] OpenAI:[4] * OpenAI Evals * Superalignment Fast Grants * Maybe giving better API access to some OP grantees * Fine-tuning GPT-3.5 (and "GPT-4 fine-tuning is in experimental access"; OpenAI shared GPT-4 fine-tuning access with academic researchers including Jacob Steinhardt and Daniel Kang in 2023) * Update: GPT-4o fine-tuning * Early access: shared GPT-4 with a few safety researchers including Rachel Freedman before release * API gives top 5 logprobs Meta AI: * Releasing Llama weights Microsoft: * [Nothing] xAI: * [Nothing] ---------------------------------------- Related papers: * Structured access for third-party research on frontier AI models (Bucknall and Trager 2023) * Black-Box Access is Insufficient for Rigorous AI Audits (Casper et al.
cf7da1da-8238-4c42-a25c-325d7c355a40
trentmkelly/LessWrong-43k
LessWrong
Meetup : Warsaw November Meetup Discussion article for the meetup : Warsaw November Meetup WHEN: 17 November 2014 06:00:00PM (+0100) WHERE: Warsaw Where? Cafe Niespodzianka, Marszałkowska 7, 00-626 Warszawa When? Vote and decide: http://doodle.com/t28anzntaczs8nwi If you use Facebook, join our local group: www.facebook.com/groups/lwwarsaw Discussion article for the meetup : Warsaw November Meetup
90bad1e9-0f96-4a03-a476-085cf30d5d36
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Why do we post our AI safety plans on the Internet? *Cross-posted from the* [*EA Forum*](https://forum.effectivealtruism.org/posts/WPEwFS5KR3LSqXd5Z/why-do-we-post-our-ai-safety-plans-on-the-internet)*.* **TL;DR**: It is plausible that AGI safety research should be assumed compromised once it is posted on the Internet, even in a purportedly private Google Doc. This is because the corporation creating the AGI will likely be training it on as much data as possible. And whenever the AGI knows in advance of our plan “If we see Sign X of misalignment from the AI, we should shut it down and retrain,” it can use this plan against us: for example, by hiding from us Sign X that it would have shown under normal circumstances. If true, this concern implies that the impact of EAs’ past and current efforts on AI safety may have been fundamentally limited by our insufficient security mindset. Thus, we EAs should vigorously investigate whether this concern is well-founded, and make any necessary reforms in our AI safety research norms as soon as possible. ----- AI capabilities are continuing to scale up. This trend seems [likely to continue](https://drive.google.com/drive/u/0/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP) given empirical scaling laws, the decreasing cost of compute, and AI corporations’ unabating desire to build increasingly powerful AI. If past trends continue, future AI systems will likely be scaled up enough to eventually be able to create sophisticated, long-term plans for whatever mysterious goals they have evolved during training.  A system that can create long-term plans at a comparable or higher capability than humans is easy to make dangerous by accident. This problem is exacerbated by the fact that so far, nobody has found a way to robustly align an AI system towards a benign goal: or indeed, [any specific goal](https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml). Many of us EAs prudently consider the societal risk posed by future unaligned AI to be one of the most pressing x-risks—if not *the* most pressing x-risk—in the world right now. [Hundreds of us EAs](https://80000hours.org/problem-profiles/artificial-intelligence/) devote our careers to AI safety, and the community as a whole donates about $50 million to AI safety every year. Our prudence, persistence, and self-sacrifice on this pressing issue deserve to be commended.  However, I have recently thought of a concern that, if true, implies that the impact of EAs’ past and current efforts on AI safety may have been fundamentally limited by our suboptimal *modus operandi* for these efforts**.**The concern is the following: *The corporation that creates the first AGI will (despite our best efforts) train it on as much data as possible, likely the whole Internet and potentially even private Google Docs. **Any AI safety research that is likely to be in the AGI’s training set should be assumed compromised.*** *This is because whenever the misaligned AGI knows in advance of our plan “If we see Sign X of misalignment from the AI, we should shut it down and retrain,” it can use this plan against us: for example, by hiding from us Sign X that it would have shown under normal circumstances. 
(Sign X could be a certain property of the AGI’s output, of its internal weights, or of its modified environment.)* *Also, a plan based on stochastic gradient descent (to increase the probability that the AGI is aligned at the point it reaches high capabilities) might be easier to exploit—by an emergent misaligned AGI aiming to protect its misaligned goal during the training process—if the AGI has prior knowledge of the plan.* *Moreover, the misaligned AGI may be able to use Internet information about itself and about the cybersecurity design of its sandbox to* [*prematurely escape the sandbox*](https://www.lesswrong.com/posts/p62bkNAciLsv6WFnR/how-do-we-align-an-agi-without-getting-socially-engineered)*.* Some AI safety researchers already refrain from posting some of their research on the Internet, out of concern that capabilities researchers may use it to increase AI capabilities (and thereby decrease the amount of time we have for solving AI safety). This is a prudent and commendable choice that I hope will continue. My additional proposition is that the risk of a misaligned AGI knowing our safety plans beforehand and using them against us should also be considered a strong reason not to post sensitive AI safety research on the Internet. If true, this concern implies that past AI safety research may have lost some or all of its potential value (as well as the potential value of future AI safety research that will build on it) by not having been kept secret. It is even possible that an AI safety plan is made net-negative by being made public, because the plan can be ostensibly impressive enough to be used by AGI corporations but still make it into their AGI’s training set. This would enable the AGI (which will likely be misaligned by default) to give alignment researchers a false sense of confidence.  Given this, we should vigorously investigate whether or not this concern is well-founded. This may run into the following practical difficulty. For the knowledgeable people who are best equipped to investigate this concern—experienced AI safety researchers—concluding that this concern is well-founded would be equivalent to concluding that they themselves may have reduced the value of their own past research by posting it on the Internet. This is an unfortunate source of potential cognitive bias. But realistically, our best option (as difficult as it is) may be to keep our cognitive biases in check as much as possible while vigorously investigating whether the aforementioned concern is well-founded. Outside perspectives from people who are new to the field of AI safety would be substantially helpful on this front, because they would not be investigating whether their own past practices were suboptimal. Below is my own first attempt at investigating whether the aforementioned concern is well-founded. **Counterpoint #1**: If your AI safety plan requires secrecy to succeed, then it wasn’t a good plan in the first place. **My rebuttal**: We should always be looking for plans that succeed regardless of the opponent’s counterplay. But our existing theoretical and empirical knowledge of real-world adversarial interactions suggests that a lot of the EV will be in plans that require randomization and secrecy to be optimal. AGI will likely be [misaligned](https://arxiv.org/abs/2206.13353) [by](https://arxiv.org/abs/1912.01683) [default](https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064), so the two-player zero-sum game can be a decent model of the by-default adversarial interaction between: 1. 
the alignment researcher (who is trying to train the AGI to be aligned and/or to convince management to stop trying to build AGI) and 2. the by-default misaligned AGI (who is trying to deceive the alignment researcher, protect its misaligned goal, and power-seek against humans at the first opportunity). In a two-player zero-sum game, the optimal strategy is called the **minimax strategy**, which maximizes the minimum gain. [Equivalently](https://mathworld.wolfram.com/MinimaxTheorem.html), the strategy is to minimize the maximum gain for the opponent. Colloquially, the minimax strategy can be thought of as the least exploitable strategy. Sometimes, the player’s minimax strategy is a deterministic “best strategy,” and guarantees the most favorable outcome assuming the worst-case scenario that the opponent will use the optimal counterplay. This is the case for sequential perfect-information games like chess and Go. But at other times, the player’s minimax strategy requires randomization and secrecy, because the other player will be able to optimally exploit any one deterministic strategy if it is known beforehand. This is the case for rock-paper-scissors, matching pennies, and other games with multiple strategies, none of which is absolutely better than any other. There are many real-world adversarial interactions that require rock-paper-scissors-type randomization and secrecy to play optimally. The [use of divination](https://www.jstor.org/stable/666530), which essentially acts as a randomization device, has consistently emerged in hunting and in warfare, likely because of the strategic benefits of randomization in these settings. Rock-paper-scissors cycles in strategies are central to adversarial games that are either [real](https://kskdkarate.com/2021/02/16/best-fighting-style-rock-paper-scissors/)-[time](https://www.youtube.com/watch?v=ox2rUF8W_oE), [imperfect](https://www.pokernews.com/strategy/make-your-poker-game-less-exploitable-rock-paper-scissors-24419.htm)-[information](https://www.youtube.com/watch?v=0ELzL9a3XpM), or [both](https://ojs.aaai.org/index.php/AIIDE/article/view/12857). A real-world example of the strategic benefits of secrecy is provided by General Douglas MacArthur. He is consistently ranked by historians as one of America’s [most overrated](https://time.com/5724009/douglas-macarthur-is-one-of-americas-most-famous-generals-hes-also-the-most-overrated/) and [least competent](https://foreignpolicy.com/2010/06/02/the-worst-general-in-american-history-2/) generals, because of his poor decisions in the Korean War. Instead of prioritizing accurate intelligence and planning rationally, “[the general was focused on limiting and controlling the sources of intelligence, not allowing contrary or dissenting opinions, and simultaneously surrounding himself with yes-men](https://thediplomat.com/2019/01/is-this-the-worst-intelligence-chief-in-the-us-armys-history/).” But even the incompetent General MacArthur decisively won the Japanese front of World War 2 when he had prior knowledge of the enemy’s plans, thanks to Allied codebreakers having broken the Japanese Empire’s encryption.  Secrecy is likely of pivotal significance in real-world struggles that occur between intelligent adversaries. Whether or not one's plans are secret can even be the key difference-maker.   
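To make the minimax point concrete, here is a minimal sketch (my own illustration, not from the original post; it assumes numpy and scipy are available) that computes the least-exploitable mixed strategy for rock-paper-scissors by solving the corresponding linear program. The output is the uniform 1/3–1/3–1/3 mix with a guaranteed value of 0, which is unexploitable precisely because the individual plays cannot be predicted in advance.

```python
# Sketch: solve for the minimax (least-exploitable) mixed strategy in
# rock-paper-scissors via linear programming.
import numpy as np
from scipy.optimize import linprog

# Row player's payoff matrix over (rock, paper, scissors): +1 win, 0 tie, -1 loss.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])

n = A.shape[0]
# Decision variables: mixing probabilities x_1..x_n and the guaranteed value v.
# Maximize v  <=>  minimize -v.
c = np.zeros(n + 1)
c[-1] = -1.0

# For every opponent pure strategy j:  sum_i x_i * A[i, j] >= v,
# rewritten as  -A[:, j] . x + v <= 0  for the solver.
A_ub = np.hstack([-A.T, np.ones((n, 1))])
b_ub = np.zeros(n)

# Mixing probabilities must sum to 1.
A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
b_eq = np.array([1.0])

bounds = [(0, 1)] * n + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

print("minimax mixed strategy:", np.round(res.x[:n], 3))  # ~[0.333, 0.333, 0.333]
print("guaranteed value:", round(res.x[-1], 3))           # 0.0
```

The same construction applies to any finite two-player zero-sum game: whenever no single pure strategy dominates, the solver returns a randomized mix, and that mix only retains its guarantee if the opponent cannot see the realized plays in advance.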
**Counterpoint #2**: A misaligned AGI whose capabilities are high enough to use our safety plans against us will succeed with an equal probability (e.g., close to 100%) regardless of whether it can access these plans. **My rebuttal**: It is possible that a misaligned AGI whose capabilities are undergoing a fast takeoff will benefit from both (1) a fast convergence to game-theoretically optimal play and (2) an expanding strategy space that allows for an increasingly deterministic victory against humanity’s smaller space of options.  However, [there is currently no strong case](https://forum.effectivealtruism.org/posts/zoWypGfXLmYsDFivk/counterarguments-to-the-basic-ai-risk-case#Speed_of_intelligence_growth_is_ambiguous) for why we should expect AGI’s capabilities takeoff to be fast. And there is at least indirect evidence from evolutionary biology and neuroscience that suggests that AGI’s capabilities takeoff will be slow, as argued by Jacob Cannell in his [insightful](https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-efficiency-much-more-than-you-wanted-to-know) [posts](https://www.lesswrong.com/posts/9Yc7Pp7szcjPgPsjf/the-brain-as-a-universal-learning-machine). If true, his argument has the [following implication](https://www.lesswrong.com/posts/p62bkNAciLsv6WFnR/?commentId=22msTXuyYeSQyjpHp): > “Fast takeoff traditionally implies time from AGI to singularity measured in hours or days, which you just don't get with merely mundane improvements like copying or mild algorithmic advances. [Eliezer Yudkowsky] (and perhaps Bostrom to some extent) anticipated fast takeoff explicitly enabled by many OOM brain inefficiency, such that the equivalent of many decades of Moore's Law could be compressed into mere moments. The key rate limiter in these scenarios ends up being the ability to physically move raw materials through complex supply chains processes to produce more computing substrate, which is bypassed through the use of hard drexlerian nanotech. > > But it turns out that biology is already near optimal-ish (cells in particular already are essentially optimal nanoscale robots; thus drexlerian nanotech is probably a pipe dream), so that just isn't the world we live in. > > … In a nutshell [Eliezer Yudkowsky/LessWrong] folks got much of their brain model from the heuristics and biases, [evolutionary psychology] literature which is based on the evolved modularity hypothesis, which turned out to be near completely wrong. So just by merely reading the sequences and associated lit [LessWrong] folks have unfortunately picked up a fairly inaccurate default view of the brain. > > In a nutshell the brain is a very generic/universal learning system built mostly out of a few different complimentary types of neural computronium (cortex, cerebellum, etc) and an actual practical recursive self improvement learning system that rapidly learns efficient circuit architecture from lifetime experience. The general meta-architecture is not specific to humans, primates, or even mammals, and in fact is highly convergent and conserved - evolution found and preserved it again and again across wildly divergent lineages. So there isn't so much room for improvement in architecture, most of the improvement comes solely from scaling.” > > Whether the takeoff in AGI’s capabilities will be fast or slow is of central importance to EAs’ macrostrategy for AI safety. Thus, this scientific debate should vigorously continue, ideally with perspectives from as many different relevant fields of science as possible. 
**[Edit:** Linda Linsefors has pointed out the following crucial point. Even if most of the AGI’s capabilities takeoff will come from scaling rather than architecture, as Jacob Cannell has argued, this leaves open the possibility of a takeoff by the AGI quickly getting more compute, such as by hacking other data centers. The probability of this occurring should be vigorously investigated as well.]   **Counterpoint #3**: We may be able to prevent the AGI from being misaligned on our first try. **My rebuttal**: Currently, we cannot. The default outcome is likely a misaligned AGI.  We should keep trying to find a way to reliably create an aligned AGI on our first try, because if this is possible in the timeframe we have, it would clearly be the best path forward.  But there is reason to believe that this is an extremely difficult problem. Parents cannot align their children or their pets on their first try. They must rely on reinforcement-learning-based approaches that heavily rely on trial and error, both from their own attempts and from others’ past attempts in similar situations. Even in the more optimistic reference case of gene-editing, [trial-and-error is still likely crucial](https://www.science.org/doi/10.1126/sciadv.aax2941) for reliably achieving the desired properties in the edited code, especially given that the code will be subject to inherently unpredictable evolutionary pressures (natural selection and epigenetics, the analogues of stochastic gradient descent and adaptive learning) once it is deployed. If we will probably need multiple trials to align AGI in the timeframe we have, then we should work to preemptively make the risk of any one trial causing a catastrophe as small as possible. And keeping our AI safety plans secret is likely to robustly help on this front.   **Counterpoint #4**: If AI safety researchers adopted a security mindset to the point of keeping their sensitive research off of the Internet, then research would slow down too much. **My rebuttal**: This is a real tradeoff. A lot of us AI safety researchers (myself included) became interested in the problem by reading about it on Internet forums like the EA Forum and LessWrong. And the quick sharing of ideas on the Internet can in theory be helpful for innovation in general. I do think that in practice, the net innovation benefits of the Internet are overrated. Nobel laureate economist Robert Solow famously said “[You can see the computer age everywhere but in the productivity statistics](http://www.standupeconomist.com/pdf/misc/solow-computer-productivity.pdf).” The trend of advances in information technology like the Internet not leading to a significant increase in economy-wide innovation has been robust; if anything, [advances in information technology have coincided with times of economic stagnation](https://en.wikipedia.org/wiki/Productivity_paradox). Multiple hypotheses have been proposed for this *productivity paradox*, one of which (set forth by economist Robert Gordon and embraced by EA economist Tyler Cowen) is that “[perhaps the Internet is just not as revolutionary as we think it is](https://slate.com/business/2011/03/the-productivity-paradox-why-hasn-t-the-internet-helped-the-american-economy-grow-more.html).” The success case to emulate in methodology is the Manhattan project. It adopted a strong security mindset among all participating researchers, and made preemptive efforts to patch the vulnerabilities resulting from information technology during a time of war. 
(The research environment would likely have been airgapped if the Manhattan project had occurred in the Internet age.) And yet, very quick and successful innovation was produced. To be fair, achieving a Manhattan-project-esque research collaboration among AI safety researchers would be a significant undertaking that would be difficult to bring about, at least in a decentralized manner. This kind of undertaking will probably require extensive discussion, planning, and execution by a collaboration between the AI safety research community and EAs of many different skillsets. But even partial success at achieving a security mindset can be helpful on the margin, and may plausibly tap into both the Internet’s theoretical upside of facilitating communal innovation and the game-theoretic benefits of secrecy. For example, there is a change we EAs can start making right now without too much effort. The proposed motto is that “for AI safety, the Internet should be used primarily for recruitment and advocacy, while research should be kept off of the Internet if it has a chance of being sensitive.” The security mindset of the field of biosecurity should be used as a role model for this. The challenge, of course, will be to achieve this while also achieving a high rate of communal innovation in AI safety research.   **Counterpoint #5**: Each AI safety researcher is already optimally weighing the costs and benefits of posting their research online in an idiosyncratic manner, and a community-wide social norm that constrains them to act in a non-situation-specific manner can be counterproductive. **My rebuttal**: [My model](https://www.sciencedirect.com/science/article/abs/pii/S0022519322000297) is that it is difficult to accurately estimate *a priori* the benefit-risk tradeoff of posting a given AI safety plan online. So, human cognition probably defaults to a heuristic-based estimate. And such a heuristic-based estimate will likely be insufficiently careful, given that the threat model is unprecedented: a superintelligent, by-default-misaligned AGI who can use the plan for its own purposes. Thus, even a blunt social norm of “don’t post AI safety plans on the Internet if there is even a chance of them being sensitive” can be robustly helpful compared to the counterfactual. Even an AI safety idea that does not constitute a specific plan can be sensitive to post on the Internet. This is because future, more specific AI safety plans can be based on the earlier preparadigmatic idea. In essence, whenever an AI safety idea is posted online, this can compromise its “descendants” on the “tech tree” of AI safety research, even the descendant ideas whose existence could not have been predicted beforehand at human-level capabilities. Moreover, even if posting an AI safety idea online doesn’t increase AGI catastrophe risk by itself, it can do so indirectly by increasing AI safety researchers’ vulnerability to undeserved blame in unpredictable ways. A realistic assessment of the EA movement’s AI safety research efforts and governance efforts so far is that at our current rate of progress, we will (despite our best efforts) likely fail to prevent the first AGI disaster. In the plausible event that humanity is not extinct after the first AGI disaster, there will be a blame game. Whichever corporation caused the AGI catastrophe will desperately try to hide its responsibility and blame anyone else (including its competitors, and potentially even its own safety team). 
Its potentially more safety-minded competitors will try their best to prove to the public that their hands are clean. And in this blame game, AI safety researchers (and the EA community as a whole) may find themselves at the receiving end of the blame in unfair and unpredictable ways.  Even though the blame will be unfair, it is still realistic to plan for this possibility. One way to protect ourselves from this unfair blame is to have the bulk of our efforts (with the exception of things like recruiting and advocacy) off of the insecure Internet, and therefore out of the AGI’s training set and out of the crosshairs of the AGI corporations. Then, there would be nothing concrete for potential blamers to point towards to suggest that AI safety researchers or EAs should be blamed. Yes, this is unfair. And we shouldn’t have to deal with this unfairness. But relying on others’ sense of fairness (especially that of desperate for-profit corporations) is probably not an EV-maximizing strategy. Today is the first day of the rest of our lives. We should always seek to maximize EV, even when this may entail doing things that are contrary to our intuitions: like making use of secrecy and randomness for high-stakes decision-making, communicating via secure channels, and treating AGI as an intelligent adversary rather than a standard object of scientific study. ----- **Epistemic status (speculative)**: Does posting an AI safety plan on the Internet reduce its value? I am reasonably confident (at least 80% probability) that the effect exists. I am highly uncertain about the magnitude of the effect, which would depend situation-specifically on the given AI safety plan. My current, highly speculative estimate is that there is at least a 5% probability that a Manhattan-Project-esque shift in AI safety research norms—by a collaboration between AI safety researchers and the EA community—will result in an x-risk reduction that is, on a per-dollar level, maximal among past and current EA projects on x-risk reduction (with the exception of field-building). ----- **Acknowledgements**: I am grateful to Michael Aird, Pablo Antonio Moreno Casares, Adam Gleave, Marius Hobbhahn, Alex Lawson, Patrick Levermore, Eli Lifland, Richard Mallah, Joe O'Brien, Ethan Perez, Javier Prieto, and Madhu Sriram for very helpful conversations about the topic. I am also grateful to Michael Chen, Michael Einhorn, Linda Linsefors, and Gabe Mukobi for providing very valuable feedback on this draft.
fc13f590-23f1-489c-94f1-10e6b210a300
trentmkelly/LessWrong-43k
LessWrong
Ex-OpenAI employee amici leave to file denied in Musk v OpenAI case? Several ex-employees of OpenAI filed an amicus brief in the Musk v OpenAI[1] case. This proposed brief argues that OpenAI should not be allowed to move from its existing nonprofit structure to a for-profit structure. We now have an order on the motions to dismiss in the case. This order also seems to deny the motion for leave to file the amicus on behalf of the ex-OpenAI employees. The order in footnote 1 has this to say on the amicus brief: > The Court also received requests to file an amicus brief by former employees of OpenAI. (Dkt. Nos. 152 & 154.) The proposed brief contains factual assertions and does not assist the Court’s understanding of the legal issues needed to resolve the instant motions. The requests for leave to file are therefore DENIED Moreover, at Dkt. No. 104 the OpenAI defendants request that the Court take judicial notice of certain documents relating to litigation between the parties in state court. The Court does not find the documents helpful to its analysis of this order and the request is DENIED. In a previous post on this case, I noted at that time that the court had accepted amicus briefs from advocacy organization Encode Justice as well as the Delaware Attorney General. I think the DE AG is a different ball-game, you can see why a judge would be interested in accepting that brief, but it's interesting to me that the court accepted the Encode brief but not this one. The Encode brief came in earlier and was addressing the motion for preliminary injunction, where the question of whether the nonprofit to for-profit move should be enjoined was up for consideration. That question was central to the Encode brief, similar to the ex-OpenAI employee brief. The different timing (before vs after consideration of the preliminary injunction) could be relevant here, since arguably the ex-OpenAI employee brief is coming in when that question isn't as relevant to what the court has to rule on in the immediate. I'm also curious what the court means b
068c920c-3785-4960-a71e-6cf62c602b31
trentmkelly/LessWrong-43k
LessWrong
Not yet gods You probably don't feel guilty for failing to snap your fingers in just such a way as to produce a cure for Alzheimer's disease. Yet, many people do feel guilty for failing to work until they drop every single day (which is a psychological impossibility). They feel guilty for failing to magically abandon behavioral patterns they dislike, without practice or retraining (which is a cognitive impossibility). What gives? The difference, I think, is that people think they "couldn't have" snapped their fingers and cured Alzheimer's, but they think they "could have" used better cognitive patterns. This is where a lot of the damage lies, I think: Most people's "coulds" are broken. People think that they "could have" avoided anxiety at that one party. They think they "could have" stopped playing Civilization at a reasonable hour and gone to bed. They think they "could have" stopped watching House of Cards between episodes. I'm not making a point about the illusion of free will, here — I think there is a sense in which we "could" do certain things that we do not in fact do. Rather, my point is that most people have a miscalibrated idea of what they could or couldn't do. People berate themselves whenever their brain fails to be engraved with the cognitive patterns that they wish it was engraved with, as if they had complete dominion over their own thoughts, over the patterns laid down in their heads. As if they weren't a network of neurons. As if they could choose their preferred choice in spite of their cognitive patterns, rather than recognizing that choice is a cognitive pattern. As if they were supposed to choose their mind, rather than being their mind. As if they were already gods. We aren't gods. Not yet. We're still monkeys. ---------------------------------------- Almost everybody is a total mess internally, as best as I can tell. Almost everybody struggles to act as they wish to act. Almost everybody is psychologically fragile, and can be put into situatio
c66e5946-0673-4c10-bd42-915a52664f02
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
[AN #144]: How language models can also be finetuned for non-language tasks Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter **[resources here](http://rohinshah.com/alignment-newsletter/)**. In particular, you can look through **[this spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing)** of all summaries that have ever been in the newsletter. Audio version **[here](http://alignment-newsletter.libsyn.com/alignment-newsletter-144)** (may not be up yet). Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer. HIGHLIGHTS =========== **[Pretrained Transformers as Universal Computation Engines](https://bair.berkeley.edu/blog/2021/03/23/universal-computation/)** *(Kevin Lu et al)* (summarized by Rohin): We’ve seen some very impressive few-shot learning results from **[GPT-3](https://arxiv.org/abs/2005.14165)** (**[AN #102](https://mailchi.mp/2485e6b42012/an-102-meta-learning-by-gpt-3-and-a-list-of-full-proposals-for-ai-alignment)**) and **[CLIP](https://openai.com/blog/clip/)**. These work by training a large Transformer model on a giant pile of data in a particular modality (such as language or images), and then we express tasks within that modality (e.g. summarization for a language model). This paper asks the question: could such models also help with tasks in a *different* modality? Surprisingly, the answer seems to be yes! Specifically, the authors take the pretrained GPT-2 models and finetune on very different tasks, changing only the following parameters (which make up just ~0.1% of the model): 1. Input layer: This is a linear layer that transforms the input tokens before they go through the attention layers. 2. Output layer: This is a linear layer that uses the final representations to solve some downstream tasks. 3. Layer norm: These parameters are meant to mimic the statistics of the data distribution, and so need to be finetuned. 4. Positional embeddings. (They say that it only makes a slight difference to finetune these.) For downstream tasks, they consider tasks like memorizing bit sequences, computing XORs, MNIST and CIFAR (where each image is represented as a sequence of 64 tokens, and each token is a 4x4 patch of the image), and protein folding. None of these tasks involve any use of natural language -- the input modality is completely different. The headline result: these sorts of models tend to achieve similar performance as Transformer models trained from scratch on the same tasks, and better performance than models initialized with random weights and then finetuned using the method above. This suggests that *even for new data modalities* the GPT-2 pretraining helps, suggesting that the model has learned some “universal computation” in its attention layers (hence the title). Note though that the differences from the random initialization are not that large (2-6 percentage points, except 25 percentage points in Bit Memory), suggesting that a lot of this might be the inductive bias of the Transformer architecture itself. The rest of the paper delves into this more, running several experiments to learn more empirical facts. For example: 1. If the Transformers are pretrained on images instead of language, you do better on image tasks like CIFAR, but not as well on the other tasks. 2. Transformers do a *lot* better than LSTMs. 3. 
Pretrained Transformers also learn significantly faster than randomly initialized Transformers. **Read more:** **[Paper: Pretrained Transformers as Universal Computation Engines](https://arxiv.org/abs/2103.05247)** **Rohin's opinion:** This is a pretty cool result. I’m not sure what I would have predicted ahead of time -- the gains are small enough that I could believe I might have predicted them on a general basis of “probably training on realistic data gives you slightly better patterns of thought, so probably if you try hard enough you can find a small set of parameters to finetune that would work well”. However, another possible line of reasoning would be “the attention heuristics learned for language would probably throw away lots of information if we applied them directly to the input tokens, and the input linear layer may not be enough to handle this issue, so probably this just destroys any good performance of the model”. I could see myself being convinced by that too. TECHNICAL AI ALIGNMENT ======================= TECHNICAL AGENDAS AND PRIORITIZATION ------------------------------------- **[Alignment of Language Agents](https://medium.com/@deepmindsafetyresearch/alignment-of-language-agents-9fbc7dd52c6c)** *(Zachary Kenton et al)* (summarized by Rohin): This paper analyzes the various problems we consider in AI alignment from the perspective of language agents. Problems covered include **[specification gaming](https://vkrakovna.wordpress.com/2018/04/02/specification-gaming-examples-in-ai/)** (**[AN #1](https://mailchi.mp/ff6340049bd0/alignment-newsletter-1)**), **[whom and what to align to](https://arxiv.org/abs/2001.09768)** (**[AN #85](https://mailchi.mp/84b4235cfa34/an-85-the-normative-questions-we-should-be-asking-for-ai-alignment-and-a-surprisingly-good-chatbot)**), **[intent alignment](https://www.alignmentforum.org/posts/ZeE7EKHTFMBs8eMxn/clarifying-ai-alignment)** (**[AN #33](https://mailchi.mp/b6dc636f6a1b/alignment-newsletter-33)**), **[removing tampering incentives](https://arxiv.org/abs/2011.08827)** (**[AN #126](https://mailchi.mp/77121cab0cff/an-126-avoiding-wireheading-by-decoupling-action-feedback-from-action-effects)**), and **[inner alignment](https://arxiv.org/abs/1906.01820)** (**[AN #58](https://mailchi.mp/92b3a9458c2d/an-58-mesa-optimization-what-it-is-and-why-we-should-care)**). These can be categorized as different kinds of misspecification, namely misspecification in the *training data*, the *training process*, and the *behavior under distributional shift*. While the conceptual problems are similar to the ones already considered for embodied RL agents, the ways they manifest are different. In particular, the authors highlight the possibility that language agents will *deceive us*, *manipulate us*, or *produce harmful content*. The authors review some existing definitions of deception and manipulation that are purely behavioral (that is, the definitions do not require an *intent* to deceive or manipulate). A signaller **deceives** a receiver if the signaller transmits (or suggestively doesn’t transmit) a signal that causes the receiver to believe some false claim that benefits the signaller. **Manipulation** is similar, except rather than causing the receiver to believe a false claim, it causes the receiver to take some action that benefits the signaller, that in some sense the receiver “shouldn’t” have taken. 
We could cash out “the receiver ‘shouldn’t’ have taken the action” just as "the action is harmful to the receiver", but from a safety / security mindset, the authors prefer a broader definition that aims to identify bad *means* of influencing the receiver, instead of only focusing on whether the *ends* were bad. Some other miscellaneous points: - Since the “action space” is just language, it seems like it should be easier (though still requires work) to prevent language agents from causing physical harm. - It will hopefully be easier to train language agents to be explainable, since they have native fluency in natural language with which they can explain their behavior. **Read more:** **[Paper: Alignment of Language Agents](https://arxiv.org/abs/2103.14659)** FORECASTING ------------ **[Measuring Mathematical Problem Solving With the MATH Dataset](https://arxiv.org/abs/2103.03874)** *(Dan Hendrycks et al)* (summarized by Rohin): We’ve seen **[GPT-3](https://arxiv.org/abs/2005.14165)** (**[AN #102](https://mailchi.mp/2485e6b42012/an-102-meta-learning-by-gpt-3-and-a-list-of-full-proposals-for-ai-alignment)**) perform well on lots of downstream tasks. What about challenging high school math problems that require intuition to solve? The authors create the MATH dataset and demonstrate that this is in fact challenging for models: models currently get around 5-7%, even when pretraining on a dataset of math-relevant text and finetuning on the MATH training dataset. Note that the models have to get the answer exactly right: there is no partial credit. Not only are current models not very good at the task, but also they scale poorly -- while there isn’t much data to extrapolate from yet, a simple extrapolation suggests that models would need 10^35 parameters to achieve just 40% accuracy. In contrast, in a simple study with university students, performance ranged between 40% and 90%, with the best human only making minor arithmetic errors. This suggests we’ll need additional algorithmic improvements for better performance. The authors also consider allowing language models to have “scratch space” to work on the problem: the models are prompted to generate a solution where they explain their work. They find that this actually *decreases* accuracy, presumably because the poor generations at the beginning end up confusing the model. **Rohin's opinion:** While reading this paper, I kept stopping to do the math problems because, well, I’m just easily distracted by math problems. But it did demonstrate one thing -- when the model gets it right, it can be really impressively right (at least in this one presumably cherry picked example). In one example from the paper (search for “ab5”), the ground-truth solution is horribly hacky, my housemate and I each separately got significantly more elegant solutions, but the model-generated solution was more elegant than either of our solutions. It’s a good example of how AI capabilities can be really lopsided -- no human would generate this good of an explanation if they were getting 6% accuracy overall. MISCELLANEOUS (ALIGNMENT) -------------------------- **[My AGI Threat Model: Misaligned Model-Based RL Agent](https://www.alignmentforum.org/posts/zzXawbXDwCZobwF9D/my-agi-threat-model-misaligned-model-based-rl-agent)** *(Steve Byrnes)* (summarized by Rohin): This post lays out a pathway by which an AI-induced existential catastrophe could occur. 
The author suggests that AGI will be built via model-based reinforcement learning: given a reward function, we will learn a world model, a value function, and a planner / actor. These will learn online, that is, even after being deployed these learned models will continue to be updated by our learning algorithm (gradient descent, or whatever replaces it). Most research effort will be focused on learning these models, with relatively less effort applied to choosing the right reward function. There are then two alignment problems: the *outer* alignment problem is whether the reward function correctly reflects the designer's intent, and the *inner* alignment problem is whether the value function accurately represents the expected reward obtained by the agent over the long term. On the inner alignment side, the value function may not accurately capture the reward for several reasons, including ambiguity in the reward signals (since you only train the value function in some situations, and many reward functions can then produce the same value function), manipulation of the reward signal, failures of credit assignment, ontological crises, and having mutually contradictory "parts" of the value function (similarly to humans). On the outer alignment side, we have the standard problem that the reward function may not reflect what we actually want (i.e. specification gaming or Goodhart's Law). In addition, it seems likely that many capability enhancements will be implemented through the reward function, e.g. giving the agent a curiosity reward, which increases outer misalignment. **Rohin's opinion:** While I disagree on some of the details, I think this is a good threat model to be thinking about. Its main virtue is that it has a relatively concrete model for what AGI looks like, and it provides a plausible story for both how that type of AGI could be developed (the development model) and how that type of AGI would lead to problems (the risk model). Of course, it is still worth clarifying the plausibility of the scenario, as updates to the story can have significant implications on what research we do. (Some of this discussion is happening in **[this post](https://www.alignmentforum.org/posts/pz7Mxyr7Ac43tWMaC/against-evolution-as-an-analogy-for-how-humans-will-create)**.) OTHER PROGRESS IN AI ===================== MISCELLANEOUS (AI) ------------------- **[2021 AI Index Report](https://aiindex.stanford.edu/report/)** *(Daniel Zhang et al)* (summarized by Zach): The AI Index Report is a project to track and distill data related to artificial intelligence. One central theme the report focuses on is the effects of COVID on AI research direction. The report highlights significant increases in spending on drug development, 4.5 times that in 2019. The report also focuses a spotlight on the relative lack of AI ethics benchmarks. This could pose a significant problem as surveillance technologies become an increasingly mature technology. Beyond these broad themes, there's data on publication trends, politics, diversity, and more in the 222-page report. Additionally, a significant amount of data is publicly available or interactive. **Read more:** **[Full report PDF](https://aiindex.stanford.edu/wp-content/uploads/2021/03/2021-AI-Index-Report_Master.pdf)** **Zach's opinion:** This is well presented and you can glean a lot from looking at the introductory sections. 
If you choose to dive into a particular topic, charts and methodology are presented in a clear manner with nice hyperlinking to make navigation relatively painless. There is also an **[interactive](https://aiindex.stanford.edu/vibrancy/)** visualization that allows for cross-country comparison according to user-defined metrics. Once again, very well presented. NEWS ===== **[Stanford Existential Risks Conference](https://www.sericonference.org/)** *(SERI)* (summarized by Rohin): This conference on existential risks will run April 17-18. Applications to attend close April 12. There will be no charge to attend the conference. **[Research Engineer, Safety (OpenAI)](https://jobs.lever.co/openai/2cbafe18-54f7-43c1-b306-9877b36efb44)** (summarized by Rohin): The Applied Safety team at OpenAI is looking to hire a research engineer, and explicitly states that the job is about safety of general-purpose AI systems (as opposed to narrow AI systems like autonomous vehicles). #### **FEEDBACK** I'm always happy to hear feedback; you can send it to me, **[Rohin Shah](https://rohinshah.com/)**, by **replying to this email**. #### **PODCAST** An audio podcast version of the **Alignment Newsletter** is available. This podcast is an audio version of the newsletter, recorded by **[Robert Miles](http://robertskmiles.com/)**.
5274dd01-9ea8-4c9b-8b79-46c64055b2a5
trentmkelly/LessWrong-43k
LessWrong
Babble Challenge: 50 Ways to Overcome Impostor Syndrome Here we go again. Time to become stronger. This week’s challenge: Imagine you are a bright young PhD student with interesting articles to write. You have picked your journals, developed the thesis, and are hard at work drafting publications. But there's one problem. You have impostor syndrome. You often feel like your ideas are bad, or that you are unqualified, or should only write about one very narrow area. When you examine each doubt, it is clearly unfounded. Impostor syndrome slows down your progress during day-long mood swings. Give me 50 ideas to solve impostor syndrome! Rules Rules from JacobJacob's babble challenges * 50 answers please * Post answers inside spoiler tags * Not all your ideas have to work ("feel better" or "go for a walk" are great answers) * When you're stuck, say something stupid. > If you spend 5 min agonising over not having anything to say, you’re doing it wrong. You’re being too critical. Just lower your standards and say something, anything. Soon enough you’ll be back on track. Go get them tiger! I will post my attempt at 5:30 EST today
eb4c1efb-4fb4-4774-b22f-40424d421ce9
trentmkelly/LessWrong-43k
LessWrong
The current state of RSPs This is a reference post. It contains no novel facts and almost no novel analysis. The idea of responsible scaling policies is now over a year old. Anthropic, OpenAI, and DeepMind each have something like an RSP, and several other relevant companies have committed to publish RSPs by February. The core of an RSP is a risk assessment plan plus a plan for safety practices as a function of risk assessment results. RSPs are appealing because safety practices should be a function of warning signs, and people who disagree about when warning signs are likely to appear may still be able to agree on appropriate responses to particular warning signs. And preparing to notice warning signs, and planning responses, is good to do in advance. Unfortunately, even given which high-level capabilities are dangerous, it turns out that it's hard to design great tests for those capabilities in advance. And it's hard to determine what safety practices are necessary and sufficient to avert risks. So RSPs have high-level capability thresholds but those thresholds aren't operationalized. Nobody knows how to write an RSP that's not extremely conservative that passes the LeCun test: > the LeCun Test: Imagine another frontier AI developer adopts a copy of our RSP as binding policy and entrusts someone who thinks that AGI safety concerns are mostly bullshit to implement it. If the RSP is well-written, we should still be reassured that the developer will behave safely—or, at least, if they fail, we should be confident that they’ll fail in a very visible and accountable way. Maybe third-party evaluation of models or auditing of an RSP and its implementation could help external observers notice if an AI company is behaving unsafely. Strong versions of this have not yet appeared.[1] Anthropic Responsible Scaling Policy Basic structure: do evals for CBRN, AI R&D, and cyber capabilities at least every 6 months. Once evals show that a model might be above a CBRN capability threshold, implement t
f0958f50-1a88-40eb-90e6-a591dbfcb894
trentmkelly/LessWrong-43k
LessWrong
What do you actually do to replenish your willpower? After a bit of research, I still have no idea how willpower depletion actually works, or if I'm mistaking depletion for distraction, etc., etc.  I get the impression that there isn't much of a consensus in the field on this subject.   What I'd like to know is this: what do you actually do to replenish willpower?   In other words, after working for several days in a row and being tired and not wanting to work on project x that has a large delay and little expectation (from the Procrastination Equation), how do you then work on project x?  Do you eat something sugary, recite a mantra, meditate, sleep?   I've read (I can't remember where) that completing difficult tasks gives a boost to willpower, but then how do you convince yourself to start that difficult task?  And what difficult task do you use?   Thanks.
601c717e-d905-468b-963b-5a11c76e1291
trentmkelly/LessWrong-43k
LessWrong
Learning human preferences: optimistic and pessimistic scenarios In this post and the next, I try to clarify - for myself and for others - the precise practical implications of the "Occam's razor is insufficient to infer the preferences of irrational agents" paper. Time and again, I've had trouble getting others to understand what that paper implies, and what it doesn't. It's neither irrelevant (like many no-free-lunch theorems), nor is it a radical skepticism/nothing is real/we can't really know anything paper. I've been having productive conversations with Rebecca Gorman, whom I want to thank for her help (and who phrased things well in terms of latent variables)! A simple biased agent Consider the following simple model of an agent: The agent's actions can be explained by their beliefs and preferences[1], and by their biases: by this, we mean the way in which the action selector differs from an unboundedly rational expected preference maximiser. The results of the Occam's razor paper imply that preferences (and beliefs, and biases) cannot be deduced separately from knowing the agent's policy (and hence, a fortiori, from any observations of the agent's behaviour). Latent and "natural" variables Let x be a latent variable of the policy π - or some variable that can be deduced from π in some simple or natural way. A consequence of the Occam's razor result is that any such x will typically be a mixture of preferences, beliefs, and biases. For if the x tended to be restricted to one of these three components, that would mean that separating them would be possible via latent or simple variables. So, for example, if we conducted a principal component analysis on π, we would expect that the components would all be mixes of preferences/beliefs/biases. The optimistic scenario To get around the impossibility result, we need "normative assumptions": assumptions about the preferences (or beliefs, or biases) of the agent that cannot be deduced fully from observations. Under the optimistic scenario, we don't need many of t
66917e32-5472-4bb1-b931-736202c73cfb
trentmkelly/LessWrong-43k
LessWrong
Variational Bayesian methods I've noticed a lack of clear explanations of the fundamental idea behind the use of variational Bayesian methods, so I thought it would be worth writing something here on this topic. Intractability problems A problem that comes up routinely in Bayesian inference is the following: suppose that we have a model of an observed variable x in terms of a latent variable z. Our model tells us P(z) and P(x∣z) for any values of x,z. We want to know P(x), which our model does not explicitly give us. The trivial idea is to use basic probability to express this as P(x) = ∫_z P(x,z) dz = ∫_z P(x∣z)·P(z) dz, where we interpret the probabilities as corresponding to probability densities when appropriate. While this is correct and in principle we can compute P(x) this way, in practice the latent space in which z takes values can be high dimensional and this makes the integral intractable to compute. Even if the set is finite, its size often grows exponentially with the problem we wish to study. For instance, suppose we're trying to solve a clustering problem[1] with a fixed number n=2 of clusters. The latent variable z is a discrete variable taking values in the set {1,2}, so you might think the integral (which in this case will just be a sum) will be easy to compute. However, in fact if we have N points that we wish to cluster into two groups, there are 2^N possible latent variable assignments over the entire set of points, so we'd have to compute a sum with 2^N terms! This is obviously a problematic situation. After some thinking about what's going wrong, though, it's easy to come up with an idea of how to make this process more efficient. The problem is that we're integrating over all possible values of z, but in fact most of the values of z are unlikely to have produced x and so will contribute virtually nothing to the integral. So most of our time during the computation is spent on evaluating the integrand at points that contribute virtually nothing to the final answer. Intuitively w
7effc807-292a-4bf0-b314-14747b47e198
trentmkelly/LessWrong-43k
LessWrong
The New Age of Social Engineering Why have so many online social networks failed to form healthy communities, and instead gained notoriety as hostile spaces? I argue that the reason these platforms have failed is because they didn’t learn the lessons taught by the High Moderns when humans were first faced with the challenge of engineering alongside systems that were built through millennia of natural evolution. In a chaotic environment such as human social relations, a different engineering approach is necessary to ensure that more good is done than harm. To gain the skills necessary to make these projects a success we need to learn from the history of social environments themselves, and of human engineering strategies. What follows is the story of social evolution becoming social engineering, how the meaning of both has changed radically in the last 20 years, and what this means for designers in the new Information Era. Part 1 — Ten millenniums of social engineering A key part of my thesis is that the way our social environment is formed has changed over the course of human history, and more rapidly in recent years. How do we know that to be true? Much of the work I’m building on comes out of the accounts provided by The Secret of Our Success by Joseph Henrich, as well as Seeing Like a State by James C. Scott. There are many things that I disagree with in these works, but I think they both get to the core idea that there exist two main ways in which human society develops. One of those ways is via an evolutionary process, where some societies develop some technique that aids in survival and flourishing, pass it on, and end up growing and outcompeting other societies. The people practicing these traditions often don’t have concrete knowledge as to why they work, but they become enshrined as tradition because they help the group succeed. This goes from knowledge about what plants are edible, to complex ideas like how the group should be structured. On the other hand, there is social engineering. In
92835256-9614-4202-8b9d-86a5e43cea92
trentmkelly/LessWrong-43k
LessWrong
Good intuitions Sometimes people have ‘good intuitions’. Which is to say something like, across a range of questions, they tend to be unusually correct for reasons that are hard to explain explicitly. How do people come to have good intuitions? My first guess is that new intuitions are born from looking at the world, and naturally interpreting it using a bunch of existing intuitions. For instance, suppose I watch people talking for a while, and I have some intuitions about how humans behave, what they want, what their body language means, and how strategic people tend to be. Then I might come to have an intuition for how large a part status plays in human interactions, which I could then go on to use in other cases. If I had had different intuitions about those other things, or watched different people talking, I might have developed a different intuition about the relevance of status. On this model, when a person has consistently unusually good intuitions, it could be that: A) Their innate intuition forming machinery is good: perhaps they form hypotheses easily, or they avoid forming hypotheses too easily. Or they absorb others’ useful words into their intuitions easily. B) They had a small number of particularly useful early intuitions, that tend to produce good further intuitions in the presence of the outside world. C) They have observed more or higher quality empirical data across the areas where they have superior intuitions. D) They got lucky, and randomly happen to have a lot of good intuitions instead of bad intuitions. Which of these plays the biggest part seems important, for: * Judging intuitions in hard or unusual areas: If A), then good intuitions are fairly general. So good intuitions about math (testable) suggest good intuitions about how to avoid existential risk (harder to test). This is decreasingly the case as we move down the alphabet. * Spreading good intuitions: If B), then it might be possible to distill the small number of core intuitions a person
53666b71-1845-4035-8aa1-ab47e5e653e1
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Pascal's Mugging: Tiny Probabilities of Vast Utilities Today's post, Pascal's Mugging: Tiny Probabilities of Vast Utilities was originally published on 19 October 2007. A summary (taken from the LW wiki):   > An Artificial Intelligence coded using Solomonoff Induction would be vulnerable to Pascal's Mugging. How should we, or an AI, handle situations in which it is very unlikely that a proposition is true, but if the proposition is true, it has more moral weight than anything else we can imagine? Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was "Can't Say No" Spending, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
decd87d1-b6bd-4c0c-9ac8-2cd9cb468651
trentmkelly/LessWrong-43k
LessWrong
Statistical error in half of neuroscience papers The statistical error that just keeps on coming, Ben Goldacre, Guardian, Friday 9 September 2011 20.59 BST We all like to laugh at quacks when they misuse basic statistics. But what if academics, en masse, deploy errors that are equally foolish? This week Sander Nieuwenhuis and colleagues publish a mighty torpedo in the journal Nature Neuroscience. They've identified one direct, stark statistical error so widespread it appears in about half of all the published papers surveyed from the academic psychology research literature. [...] How often? Nieuwenhuis looked at 513 papers published in five prestigious neuroscience journals over two years. In half the 157 studies where this error could have been made, it was. They broadened their search to 120 cellular and molecular articles in Nature Neuroscience, during 2009 and 2010: they found 25 studies committing this fallacy, and not one single paper analysed differences in effect sizes correctly. These errors are appearing throughout the most prestigious journals for the field of neuroscience. Update: Erroneous analyses of interactions in neuroscience: a problem of significance (PDF)
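The excerpt elides the error itself, but the title of the linked paper ("Erroneous analyses of interactions") suggests it is the familiar one: reporting that effect A is significant while effect B is not, and treating that contrast as evidence that A and B differ, without ever testing the difference directly. Below is a minimal sketch of how this goes wrong (my own illustration, assuming numpy and scipy; not taken from the article).

```python
# Sketch: two experiments drawn from the SAME true effect can easily land on
# opposite sides of the p = 0.05 line, while the direct comparison shows nothing.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20
# Both "conditions" have the same true effect size (0.5 SD above zero).
effect_a = rng.normal(0.5, 1.0, n)   # e.g. drug effect in wild-type animals
effect_b = rng.normal(0.5, 1.0, n)   # the same drug effect in mutant animals

p_a = stats.ttest_1samp(effect_a, 0.0).pvalue
p_b = stats.ttest_1samp(effect_b, 0.0).pvalue
p_diff = stats.ttest_ind(effect_a, effect_b).pvalue

print(f"effect A vs 0:   p = {p_a:.3f}")    # often below 0.05
print(f"effect B vs 0:   p = {p_b:.3f}")    # often above 0.05, purely by sampling noise
print(f"A vs B directly: p = {p_diff:.3f}") # typically far from significant
# Concluding "A works but B doesn't" from the first two p-values alone is the
# fallacy: the difference between 'significant' and 'not significant' is not
# itself statistically significant.
```

With modest sample sizes it is common for one of two identical true effects to cross the 0.05 threshold while the other does not, even though the direct comparison shows no difference, which is why the correct analysis tests the difference (the interaction) rather than comparing significance levels.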