id
stringlengths
36
36
source
stringclasses
15 values
formatted_source
stringclasses
13 values
text
stringlengths
2
7.55M
c2813cef-aa0c-48f5-bce8-a9b66f065c36
trentmkelly/LessWrong-43k
LessWrong
Don't accuse your interlocutor of being insufficiently truth-seeking I argue that you shouldn't accuse your interlocutor of being insufficiently truth-seeking. This doesn't mean you can't internally model their level of truth-seeking and use that for your own decision-making. It just means you shouldn't come out and say "I think you are being insufficiently truth-seeking". What you should say instead Before I explain my reasoning, I'll start with what you should say instead: "You're wrong" People are wrong a lot. If you think they are wrong just say so. You should have a strong default for going with this option. "You're being intentional misleading" For when you basically thinking they are lying but maybe technically aren't by some definitions of "lying". What about if they are being unintentionally misleading? That's usually just being wrong, you should probably just say they are being wrong. But if you really think the distinction is important, you can say they are being unintentionally misleading. "You're lying" For when they are lying. You can also add your own flair to any of these options to spice things up a bit. Why you shouldn't accuse people of being insufficient truth-seeking Clarity It's not clear what you are even accusing them of. "Insufficient truth-seeking" could arguably be any of the options I mentioned above. Just be specific. If you really think what you're saying is so important and nuanced and you just need to incorporate some deep insight about truth, use the "add your own flair" option to sneak that stuff in. Achieving your purpose in the discussion The most common purposes you might have for engaging in the discussion and why invoking "truth-seeking" doesn't help them: You want to discuss the object-level issue You just fucked yourself because the discussion is immediately going to center on whether they actually are insufficiently truth-seeking and whether that accusation was justified. You're going to have to gather The Fellowship, take your argument to Mordor, and throw it into the fire
2c391bef-0df5-4637-bbeb-36c4b4acfe36
trentmkelly/LessWrong-43k
LessWrong
What is everyone doing in AI governance What is this post about, and how to use it This is a list of AI governance organizations with a brief description of their directions of work, so a reader can get a basic understanding of what they are doing and explore them in-depth on their own. There is another recent post on a similar topic that explores AI governance research agendas with a lesser focus on the areas of work of specific governance organizations. This is a flawed list I made this list using publicly available data, as well as by talking with people from the field. I am sure that I missed something important, especially in the section on the governance teams of AI labs since they are less open than nonprofits, but the best way to get the right answer on the internet is to post the wrong one.    At some point, adding each new bit of information started taking more and more time, and I decided to stop so this post is reasonably useful and will not cause me to struggle. DM me if you see mistakes or think I forgot something important.   Major non-profit organizations Centre for the Governance of AI (GovAI) Areas of work 1. Scientific research on AI policies 2. Educational and fellowship programs  3. Some efforts to improve coordination among AI governance orgs   Mechanisms of influence: 1. Many prominent AI governance specialists are alumni or former employees of GovAI including heads of policy at Deepmind, OpenAI, and Anthropic. So GovAI has strong connections among decision-makers 2. GovAI produced a lot of academic research on policies. Mostly academic, to a lesser extent - applied   Other notable things GovAI has an established brand of a respectable organization     Center for AI Safety (CAIS)   Areas of work 1. Field building 1. Courses and fellowships for starting a career in technical AI safety and philosophy 2. Offer compute resources for AI safety researchers including compute cluster 3. Organize competitions for AI safety researchers 2. Resea
c3917bd3-1f21-4bb1-84c8-a1ce48d5bbd8
trentmkelly/LessWrong-43k
LessWrong
Meetup : Bay Area Solstice Discussion article for the meetup : Bay Area Solstice WHEN: 07 December 2013 06:00:00PM (-0800) WHERE: San Francisco The Bay Area community is holding a Solstice celebration, and you’re invited! Join us for a night of group singing, ritual, light, warmth, and companionship, plus the first-ever performance of the rationalist choir, as we celebrate human progress and potential at the darkest time of the year. The Bay Area Solstice will be held on Saturday, December 7, from 6:00 PM until 10:00 PM. We’ll provide a shuttle to and from the Civic Center BART station. Space is limited, so please fill out the RSVP form. I hope to see you there! Discussion article for the meetup : Bay Area Solstice
266da09e-f618-4890-84f4-74539b2fa359
trentmkelly/LessWrong-43k
LessWrong
Shutting Down the Lightcone Offices Lightcone recently decided to close down a big project we'd been running for the last 1.5 years: An office space in Berkeley for people working on x-risk/EA/rationalist things that we opened August 2021. We haven't written much about why, but I and Ben had written some messages on the internal office slack to explain some of our reasoning, which we've copy-pasted below. (They are from Jan 26th). I might write a longer retrospective sometime, but these messages seemed easy to share, and it seemed good to have something I can more easily refer to publicly. Background data Below is a graph of weekly unique keycard-visitors to the office in 2022.  The x-axis is each week (skipping the first 3), and the y-axis is the number of unique visitors-with-keycards. Weekly unique visitors with keycards in 2022. There was a lot of seasonality to the office.The distribution of people by how many days they came (in 2022) looks like this. Members could bring in guests, which happened quite a bit and isn't measured in the keycard data below, so I think the total number of people who came by the offices is 30-50% higher. The offices opened in August 2021. Including guests, parties, and all the time not shown in the graphs, I'd estimate around 200-300 more people visited, so in total around 500-600 people used the offices. The offices cost $70k/month on rent [1], and around $35k/month on food and drink, and ~$5k/month on contractor time for the office. It also costs core Lightcone staff time which I'd guess at around $75k/year. Ben's Announcement > Closing the Lightcone Offices @channel > > Hello there everyone, > > Sadly, I'm here to write that we've decided to close down the Lightcone Offices by the end of March. While we initially intended to transplant the office to the Rose Garden Inn, Oliver has decided (and I am on the same page about this decision) to make a clean break going forward to allow us to step back and renegotiate our relationship to the entire EA/longtermis
65d00179-3544-47f6-ac0f-b2fd56b02799
trentmkelly/LessWrong-43k
LessWrong
Decision Theories in Real Life
b6ae3b58-972a-4d20-8a83-9fc1974a5c35
trentmkelly/LessWrong-43k
LessWrong
The AGI needs to be honest Imagine that you are a trained mathematician and you have been assigned the job of testing an arbitrarily intelligent chatbot for its intelligence. You being knowledgeable about a fair amount of computer-science theory won’t test it with the likes of Turing-test or similar, since such a bot might not have any useful priors about the world. You have asked it find a proof for the Riemann-hypothesis. the bot started its search program and after several months it gave you gigantic proof written in a proof checking language like coq. You have tried to run the proof through a proof-checking assistant but quickly realized that checking that itself would years or decades, also no other computer except the one running the bot is sophisticated enough to run proof of such length. You have asked the bot to provide you a zero-knowledge-proof, but being a trained mathematician you know that a zero-knowledge-proof of sufficient credibility requires as much compute as the original one. also, the correctness is directly linked to the length of the proof it generates. You know that the bot may have formed increasingly complex abstractions while solving the problem, and it would be very hard to describe those in exact words to you. You have asked the bot to summarize the proof for you in natural-language, but you know that the bot can easily trick you into accepting the proof. You have now started to think about a bigger question, the bot essentially is a powerful optimizer. In this case, the bot is trained to find proofs, its reward is based on finding what a group of mathematicians agree on how a correct proof looks like. But the bot being a bot doesn’t care about being honest to you or to itself, it is not rewarded for being “honest” it is only being rewarded for finding proof-like strings that humans may select or reject. So it is far easier for it to find a large coq-program, large enough that you cannot check by any other means, than to actually solve riemann-hypothesis
9b579772-b999-4e07-8b34-d7cb94cf19db
trentmkelly/LessWrong-43k
LessWrong
Winston Churchill, futurist and EA Churchill—when he wasn’t busy leading the fight against the Nazis—had many hobbies. He wrote more than a dozen volumes of history, painted over 500 pictures, and completed one novel (“to relax”). He tried his hand at landscaping and bricklaying, and was “a championship caliber polo player.” But did you know he was also a futurist? That, at least, is my conclusion after reading an essay he wrote in 1931 titled “Fifty Years Hence,” various versions of which were published in MacLean’s, Strand, and Popular Mechanics. (Quotes to follow from the Strand edition.) We’ll skip right over the unsurprising bit where he predicts the Internet—although the full consequences he foresaw (“The congregation of men in cities would become superfluous”) are far from coming true—in order to get to his thoughts on… Energy Just as sure as the Internet, to forward-looking thinkers of the 1930s, was nuclear power—and already they were most excited, not about fission, but fusion: > If the hydrogen atoms in a pound of water could be prevailed upon to combine together and form helium, they would suffice to drive a thousand horsepower engine for a whole year. If the electrons, those tiny planets of the atomic systems, were induced to combine with the nuclei in the hydrogen the horsepower liberated would be 120 times greater still. What could we do with all this energy? > Schemes of cosmic magnitude would become feasible. Geography and climate would obey our orders. Fifty thousand tons of water, the amount displaced by the Berengaria, would, if exploited as described, suffice to shift Ireland to the middle of the Atlantic. The amount of rain falling yearly upon the Epsom racecourse would be enough to thaw all the ice at the Arctic and Antarctic poles. I assume this was just an illustrative example, and he wasn’t literally proposing moving Ireland, but maybe I’m underestimating British-Irish rivalry? Anyway, more importantly, Churchill points out what nuclear technology might do for nanom
9549f1e2-f5e8-4937-bc3b-1ea672008b89
trentmkelly/LessWrong-43k
LessWrong
Towards Multimodal Interpretability: Learning Sparse Interpretable Features in Vision Transformers Executive Summary In this post I present my results from training a Sparse Autoencoder (SAE) on a CLIP Vision Transformer (ViT) using the ImageNet-1k dataset. I have created an interactive web app, 'SAE Explorer', to allow the public to explore the visual features the SAE has learnt, found here: https://sae-explorer.streamlit.app/ (best viewed on a laptop). My results illustrate that SAEs can identify sparse and highly interpretable directions in the residual stream of vision models, enabling inference time inspections on the model's activations. To demonstrate this, I have included a 'guess the input image' game on the web app that allows users to guess the input image purely from the SAE activations of a single layer and token of the residual stream. I have also uploaded a (slightly outdated) accompanying talk of my results, primarily listing SAE features I found interesting: https://youtu.be/bY4Hw5zSXzQ. The primary purpose of this post is to demonstrate and emphasise that SAEs are effective at identifying interpretable directions in the activation space of vision models. In this post I highlight a small number my favourite SAE features to demonstrate some of the abstract concepts the SAE has identified within the model's representations. I then analyse a small number of SAE features using feature visualisation to check the validity of the SAE interpretations. Later in the post, I provide some technical analysis of the SAE. I identify a large cluster of features analogous to the 'ultra-low frequency' cluster that Anthropic identified. In line with existing research, I find that this ultra-low frequency cluster represents a single feature. I then analyse the 'neuron-alignment' of SAE features by comparing the SAE encoder matrix the MLP out matrix. This research was conducted as part of the ML Alignment and Theory Scholars program 2023/2024 winter cohort. Special thanks to Joseph Bloom for providing generous amounts of his time and support (in addition to the SAE
1c49dc91-7f9b-49bc-bca0-9a54984bbf6f
trentmkelly/LessWrong-43k
LessWrong
A Catalog of Confusions tl;dr - can we categorise confusing events by skills required to deal with them?  What are those skills? I am sometimes haunted by things I read online.  It's probably a couple of years since I first read Your Strength as a Rationalist, but over the past month or two I've been reminded of it a surprising number of times in different circumstances.  It's led me to wonder whether the idea of being "confused by fiction" can be helpfully broken down into categories, with each of those categories having certain skills that can be worked on to help notice them. I'm going to describe two such categories I think I've identified, and invite your criticism, or suggestions of other similar categories.  In both cases, I believe there to be some instinct, acquired skill, or some combination thereof that draws it to my attention.  I could just be making this up, though, so criticism is also welcome on this front. Absence of Salient Information I believe tech support is like a magic trick in reverse.  With a magic trick, the magician hides a crucial fact which he then distracts you from.  He provides a false narrative of what's going on while confusing the sequence of events, culminating in the impossible, and relies on your own fear of appearing foolish to make you falsely report the conditions of the trick to both yourself and other spectators. In tech support, you are often presented with an impossible sequence of events; the customer's fear of appearing foolish makes them falsely report the conditions of the fault to both themselves and you, concealing a crucial fact which the rest of the narrative distracts you from.  You then have to figure out how it was done. I recently asked a girl from my dance class out for a drink, and proceeded to receive the most shocking litany of mixed signals I could ever imagine receiving, drink not forthcoming.  I boiled it down to three possibilities: either she was interested but incredibly shy, uninterested but just really friendly; or
7f1ef414-9d1a-49ad-ad40-065217201110
trentmkelly/LessWrong-43k
LessWrong
LW Update 04/06/18 – QM Sequence Updated I just updated the Quantum Mechanics Sequence to have working images, links, and formatting. Thanks to all the readers who pinged me in intercom to let me know that the stuff was broken for them, it caused me to do this more urgently. Also a big thanks to Said Achmiz who already had turned Rationality: AI to Zombies into html and turned the images into SVG files, this saved me a *ton* of time. If you take a look through, please let me know if I missed something.
6fec1b4b-1943-4d75-9328-b2ef11bd411e
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
AI Alignment, Philosophical Pluralism, and the Relevance of Non-Western Philosophy *This is an extended transcript of* [*the talk I gave at EAGxAsiaPacific 2020*](https://www.youtube.com/watch?v=dbMp4pFVwnU)*. In the talk, I present a somewhat critical take on how AI alignment has grown as a field, and how, from my perspective, it deserves considerably more philosophical and disciplinary diversity than it has enjoyed so far. I'm sharing it here in the hopes of generating discussion about the disciplinary and philosophical paradigms that (I understand) the AI alignment community to be rooted in, and whether or how we should move beyond them. Some sections cover introductory material that most people here are likely to be familiar with, so feel free to skip them.* ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/9ff96ab153d6335da25aa8d9a6a455a3eefd4520ce6170e6.png)The Talk ======== Hey everyone, my name is Xuan (IPA: ɕɥɛn), and I’m doctoral student at MIT doing cognitive AI research. Specifically I work on how we can infer the hidden structure of human motivations by [modeling humans using probabilistic programs](https://arxiv.org/abs/2006.07532). Today though I’ll be talking about something that’s more in the background that informs my work, and that’s about AI alignment, philosophical pluralism, and the relevance of non-Western philosophy. This talk will cover a lot of ground, so I want to give an overview to keep everyone oriented: 1. First, I’ll give a brief introduction to what AI alignment is, and why it likely matters as an effective cause area. 2. I’ll then highlight some of the philosophical tendencies of current AI alignment research, and argue that they reflect a relatively narrow set of philosophical views. 3. Given that these philosophical views may miss crucial considerations, this situation motivates the need for greater philosophical and disciplinary pluralism. 4. And then as a kind of proof by example, I’ll aim to demonstrate how non-Western philosophy might provide insight into several open problems in AI alignment research. A brief introduction to AI alignment ------------------------------------ So what is AI alignment? One way to cache it out is the project of building intelligent systems that robustly act in our collective interests — in other words, building AI that is *aligned* with our values. As many people in the EA community have argued, this is a highly impactful cause area if you believe the following: 1. AI will determine the future of our civilization, perhaps by [replacing humanity as the most intelligent agents on this planet](http://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf), or by having some other kind of [transformative impact](https://arxiv.org/abs/1912.00747), like [enabling authoritarian dystopias](https://80000hours.org/podcast/episodes/allan-dafoe-politics-of-ai/). 2. AI will be likely be misaligned with our collective interests by default, perhaps because it’s just [very hard to specify what our values are](https://www.alignmentforum.org/posts/gnvrixhDfG7S2TpNL/latent-variables-and-model-mis-specification), or because of [bad systemic incentives](https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure). 3. Not only is this problem really difficult to solve, we also cannot delay solving it. To that last point, basically everyone who works in AI alignment thinks it’s a really daunting technical and philosophical challenge. Human values, whatever they are, are incredibly complex and fragile, and so every seemingly simple solution to aligning superhuman AI is subject to potentially catastrophic loopholes. I’ll illustrate this by way of this short dialogue between a human and a fictional super-intelligent chatbot called GPT-5, who’s kind of like this genie in a bottle. So you start up this chatbot and you ask: **Human** *Dear GPT-5, please make everyone on this planet happy.* *Okay, I will place them in stasis and inject heroin so they experience eternal bliss.* **GPT-5** **Human** *No no no, please don’t. I mean satisfy their preferences. Not everyone wants heroin.* *Alright. But how should I figure out what those preferences are?* **GPT-5** **Human** *Just listen to what they say they want! Or infer it from how they act.* *Hmm. This person says they can’t bear to hurt animals, but keeps eating meat.* **GPT-5** **Human** *Well, do what they would want if they could think longer, or had more willpower!* *I extrapolate that they will come to support human extinction to save other species.* **GPT-5** **Human** *Actually, just stop.* *How do I know if that’s what **you** really want?* **GPT-5** An overview of the field ------------------------ So that's a taste of the kind of problem we need to solve. Obviously there's a lot to unpack here about philosophy, what people really want, what desires are, what preferences are, and whether should we always satisfy those preferences. Before diving more into that, I think it’d be helpful to give a sense of what AI alignment research is like today, so we can get better sense of what might still be needed to answer these daunting questions. There have been multiple taxonomies of AI alignment research, one of the earlier ones being [Concrete Problems in AI Safety](https://arxiv.org/abs/1606.06565) in 2016, suggesting topics like avoiding negative side effects and safe exploration. In 2018, DeepMind offers another categorization, breaking things down into [specification, robustness, and assurance](https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1).  And at EA Global 2020, Rohin Shah laid out [another useful way of thinking about the space](https://www.effectivealtruism.org/articles/rohin-shah-whats-been-happening-in-ai-alignment/), breaking specification down into outer and inner alignment, and highlighting the question of scaling to superhuman competence while preserving alignment. One notable feature of these taxonomies is their decidedly engineering bent. You might be wondering — where is the philosophy in all this? Didn’t we say there were philosophical challenges? And it’s actually there, but you have to look closely. It’s often obscured by the technical jargon. In addition, there’s this tendency to formalize philosophical and ethical questions as questions about rewards and policies and utility functions — which I think is something that can be done a little too quickly. Another way to get a sense of what might currently be missing in AI alignment is to look at the ecosystem and its key players. AI alignment is actually a really small and growing field, composed of entities like MIRI, FHI, OpenAI, the Alignment forum, and so on. Most of these organizations are really young, often less than 5 years old — and I think it’s fair to say that they’ve been a little insular as well. Because if you think about AI alignment as a field, and the problems its trying to solve, you’d think it must be this really interdisciplinary field that sits at the intersection of broader disciplines, like human-computer interaction, cognitive science, AI ethics, and philosophy. ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/8d6573dbdc5f6e6111dca4ea007d47d09ab682b2e879c65b.png)The relative lack of overlap between the AI alignment community and related disciplines.But to my knowledge, there actually isn’t very much overlap between these communities — it’s more off-to-the-side, like in the picture above. There are reasons for this, which I’ll get to, and it’s already starting to change, but I think it partly explains the relatively narrow philosophical horizons of the AI alignment community. Philosophical tendencies in AI alignment ---------------------------------------- So what are these horizons? I’m going to lay out 5 philosophical tendencies that I’ve perceived in the work that comes out of the AI alignment community — so this is inevitably going to be subjective — but it’s based on the work that gets highlighted in venues like the Alignment Newsletter, or that gets discussed on the AI Alignment forum. ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/8eead251795b98b074afb5ac8c01deb28646a53b6b8ad189.png)Five philosophical tendencies of contemporary AI alignment research: **(1) Connectionism, (2) Behaviorism, (3) Humeanism,  (4) Decision-Theoretic Rationality, (5) Consequentialism**.**1. Connectionist (vs. symbolic)** First there’s a tendency towards [connectionism](https://plato.stanford.edu/entries/connectionism/) — the position that knowledge is best stored as sub-symbolic weights in neural networks, rather than [language-like symbols](https://dspace.mit.edu/handle/1721.1/100174). You see this in emphasis on deep learning interpretability, scalability, and robustness. **2. Behaviorist (vs. cognitivist)** Second, there’s a tendency towards [behaviorism](https://plato.stanford.edu/entries/behaviorism/) — that to build human-aligned AI, we can model or mimic humans as these reinforcement learning agents, which avoid reasoning or planning by just [learning from lifetimes and lifetimes of data](http://www.incompleteideas.net/IncIdeas/BitterLesson.html). This in contrast to [more cognitive approaches to AI](https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/building-machines-that-learn-and-think-like-people/A9535B1D745A0377E16C590E14B94993), which emphasize the ability to reason with and manipulate abstract models of the world. **3. Humean (vs. Kantian)** Third, there’s a implicit tendency towards [Humean theories of motivation](https://philarchive.org/rec/SINTHT-5) — that we can model humans as motivated by reward signals they receive from the environment, which you might think of as “desires”, or “passions” as David Hume called them. This is in contrast more [Kantian theories of motivation](https://sites.fas.harvard.edu/~korsgaar/CMK.Motive.of.Duty.pdf), which leave more room for humans to also be motivated by [*reasons*](https://plato.stanford.edu/entries/reasons-just-vs-expl/), e.g., commitments, intentions, or moral principles. **4. Rationality as decision-theoretic (vs. reasonableness / sense-making)** Fourth, there’s a tendency to view rationality solely in [decision theoretic terms](https://plato.stanford.edu/entries/decision-theory/) — that is, rationality is about maximizing expected value, where probabilities are rationally updated in a Bayesian manner. But historically, in philosophy, [there’s been a lot more to norms of reasoning and rationality than just that](https://plato.stanford.edu/entries/practical-reason) — rationality is also about logic, and argumentation and dialectic. Broadly, it’s about what it makes sense for a person to think or do, [including what it makes sense for a person to value in the first place](https://plato.stanford.edu/entries/fitting-attitude-theories/). **5. Consequentialist (vs. non-consequentialist)** Finally, there’s a tendency towards consequentialism — [consequentialism in the broad sense](https://www.jstor.org/stable/10.1086/660696?seq=1) that value and ethics are about outcomes or states of affairs. This excludes views that root value/ethics in [evaluative attitudes](https://plato.stanford.edu/entries/fitting-attitude-theories/), [deontic norms](https://www.journals.uchicago.edu/doi/full/10.1086/690069), or [contractualism](https://plato.stanford.edu/entries/contractualism/). From parochialism to pluralism ------------------------------ By laying out these tendencies, I want to suggest that the predominant views within AI alignment live within a relatively small corner of the full space of contemporary philosophical positions. If this is true, this should give reason for pause. Why these tendencies? Of course, it’s partly that a lot of very smart people thought very hard about these things, and this is what made sense to them. But very smart people may still be systematically biased by their intellectual environments and trajectories. Might this be happening with AI alignment researchers? It’s worth noting that the first three of these tendencies are very much influenced by recent successes of deep learning and reinforcement learning in AI. In fact, prior to these successes, a lot of work in AI was more on the other end of the spectrum: first order logic, classical planning, cognitive systems, etc. One worry then, is that the attention of AI alignment researchers might be unduly influenced by the success or popularity of contemporary AI paradigms. It's also notable that the last two of these tendencies are largely inherited from disciplines like economics, computer science, and communities like effective altruism. Another worry then, would be that these origins have unduly influenced the paradigms and concepts that we take as foundational. So at this point, I hope to have shown how the AI alignment research community exists in a bit of a philosophical bubble. And so in that sense, if you’ll forgive the term, the community is rather parochial. ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/31063a399c9703d38915b63372cecb1d64166d992b5c3ff0.png)Reasons for parochialism, and steps towards pluralism.And there are understandable reasons for this. For one, AI alignment is still a young field, and hasn’t reached a more diverse pool of researchers. Until more recently, It’s also been excluded and not taken very seriously within traditional academia, leading to a lack of intra-disciplinary and inter-disciplinary conversation, and a continued suspicion in some quarters about academia. Obviously, there are also strong founder effects due to the field’s emergence within rationalist and EA communities. And like much of AI and STEM, it inherits barriers to participation from an unjust world. These can be, and in my opinion, should be addressed. As the field grows, we could make sure it includes more disciplinary and community outsiders. We could foster greater inter-disciplinary collaboration within academia. We could better recognize how founder effects may bias our search through the space of relevant ideas. And we could lower the barriers to participation, while countering unjust selection effects. Why pluralism? (And not just diversity?) ---------------------------------------- By why bother? What exactly is the value in breaking out of this philosophical bubble? I haven’t quite explained that yet, so I’ll do that now. And why do I use the word pluralism in particular, as opposed to just diversity? I chose it because I wanted it to evoke something more than just diversity. By philosophical pluralism, I mean to include philosophical diversity, by which I mean serious engagement with multiple philosophical traditions and disciplinary paradigms. But I also mean openness to the possibility that the problem of aligning AI might have [*multiple good answers*](https://www.alignmentforum.org/posts/qpJbFta7RwpHcFarc/can-we-make-peace-with-moral-indeterminacy), and that we need to contend with how to do that. Having defined those terms, let’s get into the reasons. ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/ed9be45f51ff5699afb7bce18ed556d8ecde70cfb9921910.png)A summary of reasons for philosophical pluralism in AI alignment.**1. Avoiding the streetlight fallacy** The first is avoiding the streetlight fallacy — that if we simply keep exploring the philosophy that’s familiar to Western-educated elites, we are likely to miss out on huge swathes of human thought that may have crucial relevance to AI alignment. Jay Garfield puts this quite sharply in his book on Engaging Buddhism. Speaking to Western philosophers about Buddhist philosophy, he argues that Buddhist philosophy shares too many concerns with Western philosophy to be ignored: > “Contemporary philosophy cannot continue to be practiced in the West in ignorance of the Buddhist tradition. ... **Its concerns overlap with those of Western philosophy too broadly to dismiss it as irrelevant. Its perspectives are sufficiently distinct that we cannot see it as simply redundant.**Close enough for conversation; distant enough for that conversation to be one from which we might learn. ... [T]o continue to ignore Buddhist philosophy (and by extension, Chinese philosophy, non-Buddhist Indian philosophy, African philosophy, Native American philosophy...) is indefensible.” > > — Jay Garfield, [*Engaging Buddhism: Why It Matters to Philosophy*](https://jaygarfield.files.wordpress.com/2014/01/engaging-buddhism-full-012214.pdf)(2015) > > **2. Robustness to moral and normative uncertainty** The second is robustness to moral and normative uncertainty. If you’re unsure about what the right thing to do is, or to align an AI towards, and you think it’s plausible that other philosophical perspectives might have good answers, then it’s reasonable to diversify our resources to incorporate them. This is similar to the argument that Open Philanthropy makes for worldview diversification (and related to the informational situation of having imprecise credences, discussed briefly by MacAskill, Bykvist and Ord in [*Moral Uncertainty*](https://www.williammacaskill.com/info-moral-uncertainty)): > “When deciding between worldviews, there is a case to be made for simply taking our best guess, and sticking with it. If we did this, we would focus exclusively on animal welfare, or on global catastrophic risks, or global health and development, or on another category of giving, with no attention to the others. However, that’s not the approach we’re currently taking. Instead, we’re practicing **worldview diversification: putting significant resources behind each worldview that we find highly plausible.** We think it’s possible for us to be a transformative funder in each of a number of different causes, and we don’t - as of today - want to pass up that opportunity to focus exclusively on one and get rapidly diminishing returns.” > > — Holden Karnofsky, Open Philanthropy CEO, [*Worldview Diversification*](https://www.openphilanthropy.org/blog/worldview-diversification)(2016) > > **3. Pluralism as (political) pragmatism** The third is pluralism as a form political pragmatism. As Iason Gabriel at DeepMind writes: In the absence of moral agreement, is there a fair way to decide what principles AI should align with? Gabriel doesn’t really put it this way, but one way to interpret this is that, pluralism is pragmatic because it’s the only way we’re going to get buy in from disparate political actors. > “[W]e need to be clear about the challenge at hand. For the task in front of us is not, as we might first think, to identify the true or correct moral theory and then implement it in machines. Rather, it is to find a way of selecting appropriate principles that is compatible with the fact that we live in a diverse world, where people hold a variety of reasonable and contrasting beliefs about value. ... To avoid a situation in which some people simply impose their values on others, we need to ask a different question: **In the absence of moral agreement, is there a fair way to decide what principles AI should align with?**” > > — Iason Gabriel, DeepMind, [*Artificial Intelligence, Values, and Alignment*](https://link.springer.com/article/10.1007/s11023-020-09539-2)(2020) > > **4. Pluralism as respect for the equality and autonomy of persons** Finally, there’s pluralism as an ethical commitment in itself — pluralism as respect for the equality and autonomy of persons to choose what values and ideals matter to them. This is the reason I personally find the most compelling — I think in order to preserve a lot of what we care about in this world, we need aligned AI to respect this plurality of value. Elizabeth Anderson puts this quite beautifully in her book, [*Value in Ethics and Economics*](https://books.google.com/books/about/Value_in_Ethics_and_Economics.html?id=1oBpfbE2c5IC). Noting that individuals may rationally adopt or uphold a great diversity of worthwhile ideals, she argues that we lack good reason for impersonally ranking all legitimate ways of life on some universal scale. If we accept that there may be conflicting yet legitimate philosophies about what constitutes a good life, then we also have to accept that there maybe multiple incommensurable scales of value that matter to people: > “There is a great diversity of worthwhile ideals, not all of which can be combined in a single life. ... Individuals with different talents, temperaments, interests, opportunities, and relations to others rationally adopt or uphold different ideals. ... **In [a] liberal, pluralist, egalitarian society, there is no longer any point in impersonally ranking all legitimate ways of life on some hierarchy of intrinsic value.** Plural and conflicting yet legitimate ideals will tell different people to value different [ways of living], and there is no point in insisting that a single ranking is impersonally valid for everyone.” > > — Elizabeth Anderson, [*Value in Ethics and Economics*](https://books.google.com/books/about/Value_in_Ethics_and_Economics.html?id=1oBpfbE2c5IC)(1995) > > The relevance of non-Western philosophy --------------------------------------- So that’s why I think pluralism matters to AI alignment. Perhaps you buy that, but perhaps it’s hard to think of concrete examples where non-dominant philosophies may be relevant to alignment research. So now I’d just like to offer a few. I think non-Western philosophy might be especially relevant to the following open problems in AI alignment: ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/105f44a5a45ceea9eaf6ff55a0ad3dd0a97cff1698c1c4dd.png)3 areas where non-Western philosophies may be relevant to AI alignment1. *Representing and learning human norms.* What are norms? How do they constrain our actions or shape our values? How do learners infer and internalize them from their social environments? Classical Chinese ethics, especially Confucian ethics, could provide some insights. 2. *Robustness to ontological shifts and crises.* We typically value the world in terms of the objects and relations we use to represent it. But what should an agent do when those representations undergo (transformative) shifts? Certain schools of Buddhist metaphysics bear directly on these questions. 3. *The phenomenology of valuing (e.g. desiring) and disvaluing (e.g. suffering).* We value different things in different ways, with different subjective experiences. What are these varieties of experience, and how should they inform agents that try to learn what we value? Buddhist, Jain and Vedic philosophy have been very much centered on the nature of these subjective experiences, and could provide answers. Before I go on, I also wanted to note that this is primarily drawn from only the limited about of Chinese and Buddhist philosophy I’m familiar with. This is certainly not all of non-Western philosophy, and there’s a lot more out there, outside of the streetlight, that may be relevant. **1. Representing and learning human norms** Why do social norms and practices matter? One answer that’s common from game theory is that norms have *instrumental* value as [coordinating devices](https://journals.sagepub.com/doi/10.1177/1470594X09345474) or [unspoken agreements](https://arxiv.org/abs/1804.04268). To the extent that we need [AI to coordinate well with humans then](https://bair.berkeley.edu/blog/2019/10/21/coordination/), [we may need AI to learn](https://arxiv.org/abs/1806.10071) [and follow these norms](https://arxiv.org/abs/2003.11778). If you look to Confucian ethics however, you get a quite different picture. On one possible interpretation of Confucian thought, norms and practices are understood to have *intrinsic* value as evaluative standards and expressive acts. You can see this for example, in the Analects, which are attributed to Confucius: > 克己復禮為仁。 > Restraining the self and returning to *ritual* (禮 / *li*) constitutes *humaneness* (仁 / *ren*). > > — [Analects 12.1](https://ctext.org/analects/yan-yuan) > > This word, *li* (禮), is hard to translate, but means something like ritual propriety or etiquette. And it recurs again and again in Confucian thought. This particular line suggests a central role for ritual in what Confucians thought of as a benevolent, humane and virtuous life. How to interpret this? Kwong-loi Shun suggests that this is because, while ritual forms may just be conventions, without these conventions, important evaluative attitudes like respect or reverence cannot be made intelligible or expressed: > Kwong-loi Shun [...] holds that on the one hand, a particular set of ritual forms are the conventions that a community has evolved, and without such forms attitudes such as respect or reverence cannot be made intelligible or expressed (the truth behind the definitionalist interpretation). In this sense, *li* constitutes *ren* within or for a given community. On the other hand, different communities may have different conventions that express respect or reverence, and moreover any given community may revise its conventions in piecemeal though not wholesale fashion (the truth behind the instrumentalist interpretation). > > — David Wong, [*Chinese Ethics*](https://plato.stanford.edu/entries/ethics-chinese/#CenLiRit) *(*Stanford Encyclopedia of Philosophy) > > I was quite struck by this when I first encountered it — partly because I grew up finding a lot of Confucian thought really pointless and oppressive. And to be clear, some norms are oppressive. But I recently encountered a very similar idea in the work of Elizabeth Anderson (cited earlier) that made me come around more to it. In speaking about how individuals value things, and where we get these values from, Anderson argues that: > “Individuals are not self-sufficient in their capacity to value things in different ways. **I am capable of valuing something in a particular way only in a social setting that upholds norms for that mode of valuation.** I cannot honor someone outside of a social context in which certain actions ... are commonly understood to express honor.” > > —  Elizabeth Anderson, [*Value in Ethics and Economics*](https://books.google.com/books/about/Value_in_Ethics_and_Economics.html?id=1oBpfbE2c5IC)(1995) > > I find this really compelling. If you think about what constitutes good art, or literature, or beauty, all of that is undoubtedly tied up in norms about how to value things, and how to express those values. If this is right, then there’s a sense in which the game theoretic account of norms has got things exactly reversed. In game theory, it’s assumed that norms emerge out of the interaction of individual preferences, and so are secondary. But for Confucians, and Anderson, it’s the opposite: norms are primary, or at least a lot of them are, and what we individually value is shaped by those norms. This would suggest a pretty deep re-orientation of what AI alignment approaches that learn human values need to do. Rather than learn individual values, then figure out how to balance them across society, we need to consider that many values are social from the outset. All of this dovetails quite nicely with one of the key insights in the paper [*Incomplete Contracting and AI Alignment*](https://arxiv.org/abs/1804.04268)*:* > “Building AI that can reliably learn, predict, and respond to a human community’s normative structure is a distinct research program to building AI that can learn human preferences. ... To the extent that preferences merely capture the valuation an agent places on different courses of action with normative salience to a group, **preferences are the outcome of the process of evaluating likely community responses and choosing actions on that basis, not a primitive of choice.**” > > —  Hadfield-Menell & Hadfield, *Incomplete Contracting & AI Alignment*(2018) > > Here again, we see re-iterated idea that social norms *constitute* (at least some) individual preferences. What all of this suggests is that, if we want to accurately model human preferences, we may need to model the causal and social processes by which individuals learn and internalize norms: observation, instruction, ritual practice, punishment, etc.  Furthermore, when it comes to human *values,* then at least in some domains (e.g. what is beautiful, racist, admirable, or just), we ought to identify what's valuable not with the revealed preference or even the reflective judgement of a single individual, but with the outcome of some evaluative social process that takes into account pre-existing standards of valuation, particular features of the entity under evaluation, and potentially competing reasons for applying, not applying, or revising those standards. As it happens, this anti-individualist approach to valuation isn't particularly prominent in Western philosophical thought (but again, [see Anderson](https://www.newyorker.com/magazine/2019/01/07/the-philosopher-redefining-equality)). Perhaps then, by looking towards philosophical traditions like Confucianism, we can develop a better sense of how these normative social processes should be modeled. **2. Robustness to ontological shifts and crises** Let's turn now to a somewhat old problem, first posed by MIRI in 2011: An agent defines its objective based on how it represents the world — but what should happen when that representation is changed? > “An agent’s goal, or utility function, may also be specified in terms of the states of, or entities within, its ontology. If the agent may upgrade or replace its ontology, it faces a crisis: the agent’s original goal may not be well-defined with respect to its new ontology. This crisis must be resolved before the agent can make plans towards achieving its goals.” > > — Peter De Blanc, MIRI, [*Ontological Crises in AI Value Systems*](https://intelligence.org/files/OntologicalCrises.pdf)(2011) > > As it turns out, Buddhist philosophy might provide some answers. To see how, it’s worth comparing it against more commonplace views about the nature of reality and the objects within it. Most of us grow up as what you might call naive realists, believing: > *Naive Realism.* Through our senses, we perceive the world and its objects directly. > > But then we grow up and study some science, and encounter optical illusions, and maybe become representational realists instead: > *Representational Realism.*We indirectly construct representations of the external world from sense data, but the world being represented is real. > > Now, Madhyamaka Buddhism goes further — it rejects the idea that there is anything ultimately real or true. Instead, [all facts are at best conventionally true](https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780199751426.001.0001/acprof-9780199751426). And while there may exist some mind-independent external world, there is no uniquely privileged representation of that world that is the “correct” one. *However some representations are still better for alleviating suffering than others*, and so part of the goal of Buddhist practice is to see through our everyday representations as merely conventional, and [to adopt representations better suited for alleviating suffering.](https://muse.jhu.edu/article/26507) This view is demonstrated in *The Vimalakīrti Sutra*, which actually uses gender as an example of a concept that should be seen through as conventional. I was quite astounded when I first read it, because the topic feels so current, but the text is actually 1800 years old: > *The reverend monk, Śāriputra, asks a Goddess why she does not transform her female body into a male body, since she is supposed to be enlightened. In response, she swaps both their bodies, and explains:* > > Śāriputra, if you were able to transform > This female body, > Then all women would be able to transform as well. > > Just as Śāriputra is not female > But manifests a female body > So are all women likewise: > > Although they manifest female bodies > They are not, inherently, female. > > Therefore, the Buddha has explained > That all phenomena are neither female nor male. > > — [*The Vimalakīrti Sutra*](https://en.wikipedia.org/wiki/Vimalakirti_Sutra) (circa. 200 CE) > > All this actually closely resonates, in my opinion, with a recent movement in Western analytic philosophy called [conceptual engineering](https://www.lesswrong.com/posts/9iA87EfNKnREgdTJN/conceptual-engineering-the-revolution-in-philosophy-you-ve) — the idea that we should re-engineer concepts to suit our purposes. For example, Sally Haslanger at MIT has applied this approach [in her writings on gender and race](http://www.mit.edu/~shaslang/papers/WIGRnous.pdf), arguing that feminists and anti-racists need to revise these concepts to better suit feminist and anti-racist ends. I think this methodology is actually really promising way to deal with the question of ontological shifts. Rather than framing ontological shifts as quasi-exogenous occurrence that agents have to respond to, it frames them as meta-cognitive choices that we select with particular ends in mind. It almost suggests this iterative algorithm for changing our representations of the world: 1. Fix some evaluative concepts (e.g., accuracy, well-being) and lower-level primitives. 2. Refine other concepts to do better with respect those evaluative concepts. 3. Adjust those evaluative concepts and lower-level primitives in response. 4. Repeat as necessary. How exactly this would work, and whether it would lead to reasonable outcomes, is, I think, really fruitful and open research terrain. I see MIRI's recent work on [Cartesian Frames](https://www.alignmentforum.org/posts/BSpdshJWGAW6TuNzZ/introduction-to-cartesian-frames) as a very promising step in this direction, by formalizing the ways in which we might carve up the world into "self" and "other". When it comes to epistemic values, steps have also been made towards formalizing [approximate causal abstractions](https://arxiv.org/abs/1906.11583). And of course, the importance of representational choice for efficient planning [has been known since the 60s](https://exhibits.stanford.edu/feigenbaum/catalog/pv920qc3232). What remains lacking is a theory of when and how to apply these representational shifts according to an initial set of desiderata, and then how to reconceive those desiderata in response. **3. The phenomenology of valuing and dis-valuing** On to the final topic of relevance. In AI and economics, it’s very common to just talk about human values in terms of this barebones concept of preference. Preference is an example of what you might call a thin evaluative attitude, which doesn’t have any deeper meaning beyond imposing a certain ordering over actions or outcomes. In contrast, I think all of us familiar with a much wider range of evaluative attitudes and experiences: respect, admiration, love, shock, boredom, and so on. These are thick evaluative attitudes. And work in AI alignment hasn’t really tried to account for them. Instead, there’s a tendency to collapse everything into this monolithic concept of “reward”. And I think that’s very dangerous — we’re not paying attention to the full range of subjective experience, and that may lead to catastrophic outcomes. Instead, I think we need to be engaging more with psychology, phenomenology, and neuroscience. For example, there’s work in the field of [neurophenomenology](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.405.918&rep=rep1&type=pdf) that I think might be really promising for answering some of these questions: > **“The use of first-person and second-person phenomenological methods to obtain original and refined first-person data is central to neurophenomenology.** It seems true both that people vary in their abilities as observers and reporters of their own experiences and that these abilities can be enhanced through various methods. First-person methods are disciplined practices that subjects can use to increase their sensitivity to their own experiences at various time-scales. These practices involve the systematic training of attention and self-regulation of emotion. Such practices exist in phenomenology, psychotherapy, and contemplative meditative traditions. Using these methods, subjects may be able to gain access to aspects of their experience, such as transient affective state and quality of attention, that otherwise would remain unnoticed and unavailable for verbal report.” > > — Thompson et al, [*Neurophenomenology: An Introduction for Neurophilosophers*](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.405.918&rep=rep1&type=pdf)(2010) > > Unsurprisingly, this work is very much informed by engagement with Buddhist, Jain, and Vedic philosophy and practice, because they are entire philosophical practices devoted to questions like “What is the nature of desire?”, “What is the nature of suffering?” and “What are the various mental factors that lead to one or the other?” Does AI alignment require understanding human subjective experience at the incredibly fine level of detail aimed at by neurophenomenology and contemplative traditions? My intuition is that it won't, simply because we humans are capable of being helpful and beneficial without fulling understanding each others' minds. But we do understand at least that we all have different subjective experiences, which we may value or take as motivating in different ways. This level of intuitive psychology, I believe, is likely to be necessary for alignment. And AI as a field is nowhere near it. Research into “[emotion recognition](https://www.theverge.com/2019/7/25/8929793/emotion-recognition-analysis-ai-machine-learning-facial-expression-review)”, which is perhaps the closest that AI has gotten to these questions, typically reifies emotion into [6 fixed categories](https://plato.stanford.edu/entries/emotion/#BasiEmotTheoEmotEvolAffeProg), which is not much better than collapsing everything into “reward”. Given that contemplative Dharmic philosophy has long developed systematic methods for investigating the experiential nature of mind, as well as theories about how higher-order awareness relates to experience, it bears promise for informing how AI could *learn* theories of emotion and evaluative experience, rather than simply having them hard-coded.   Just as a final illustration of why the study evaluative experience is important, I want to highlight a question that often comes up in Buddhist philosophy: [*How can one act effectively in the world*](https://journals.ub.uni-heidelberg.de/index.php/jiabs/article/download/9285/3146) [*without experiencing desire or suffering?*](https://link.springer.com/article/10.1007/s11841-017-0619-4) Unless you’re interested in attaining awakening, it may not be so relevant to humans, nor to AI alignment per se. But it becomes very relevant once we consider the possibility that we might build AI that suffers itself. In fact, there’s a recent paper on exactly this topic asking: [*How can we build functionally effective conscious AI without suffering?*](https://www.worldscientific.com/doi/10.1142/S2705078520300030) > “The possibility of machines suffering at our own hands ... only applies if the AI that we create or cause to emerge becomes conscious and thereby capable of suffering. In this paper, we examine the nature of the relevant kind of conscious experience, the potential functional reasons for endowing an AI with the capacity for feeling and therefore for suffering, and some of the possible ways of retaining the functional advantages of consciousness, whatever they are, while avoiding the attendant suffering.” > > — Agarwal & Edelman, [*Functionally Effective Conscious AI Without Suffering*](https://www.worldscientific.com/doi/10.1142/S2705078520300030) (2020) > > The worry here is that consciousness may have evolved in animals because it serves some function, and so, AI might only reach human-level usefulness if it is conscious. And if it is conscious, it could suffer. Most of us who care about sentient beings besides humans would want to make sure that AI doesn’t suffer — we don’t want to create a race of artificial slaves. So that’s why it might be really important to figure out whether agents can have functional consciousness without suffering. To address this question, Agarwal & Edelman draw explicitly upon Buddhist philosophy, suggesting that suffering arises from identification with a phenomenal model of the self, and that by transcending that identification, suffering no longer occurs: > The final approach ... targets the phenomenology of identification with the phenomenal self model (PSM) as an antidote to suffering. ... Metzinger [2018] describes the unit of identification (UI) as that which the system consciously identifies itself with. Ordinarily, when the PSM is transparent, the system identifies with its PSM, and is thus conscious of itself as a *self*. But it is at least a logical possibility that the UI may not be limited to the PSM, but be shifted to the most “general phenomenal property” [Metzinger, 2017] of *knowing* common to all phenomenality including the sense of self. In this special condition, the typical subject-object duality of experience would dissolve; negatively valenced experiences could still occur, but they would not amount to suffering because the system would no longer be experientially *subject* to them. > > — Agarwal & Edelman, [*Functionally Effective Conscious AI Without Suffering*](https://www.worldscientific.com/doi/10.1142/S2705078520300030) (2020) > > No doubt, this is an imprecise — and likely contentious — definition of “suffering”, one which affords a very particular solution due to the way it is defined. But at the very least, the paper makes a valiant attempt towards formalizing, computationally, what suffering even might be. If we want to avoid creating machines that suffer, more research like this needs to be conducted, and we might do well to pay attention to Buddhist and related philosophies in the process. Conclusion ---------- With that, I’ll end my whirlwind tour of non-Western philosophy, and offer some key takeaways and steps forward. What I hope to have shown with this talk is that AI alignment research has drawn from a relatively narrow set of philosophical perspectives. Expanding this set, for example, with non-Western philosophy, could provide fresh insights, and reduce the risk of misalignment. In order to address this, I’d like to suggest that prospective researchers and funders in AI alignment should consider a wider range of disciplines and approaches. In addition, while support for alignment research has grown in CS departments, we may need to increase support in other fields, in order to foster the inter-disciplinary expertise needed for this daunting challenge. If you enjoyed this talk, and would like to learn more about AI alignment, pluralism, or non-Western philosophy [here are some reading recommendations](https://docs.google.com/presentation/d/11KV_SdA7SB6ytb06f3Wih-cx1bcHNAGWzReQiZCT3Eo/edit#slide=id.ga2b0574616_0_27). Thank you for your attention, and I look forward to your questions.
0b990547-b4f1-4887-98db-efadda0073e6
trentmkelly/LessWrong-43k
LessWrong
Announcing predictions Today I unveiled predictions, a command-line program to score predictions you’ve written down in a YAML file. In my estimation, the program and its supporting documentation is alpha quality. You’ll need to build it yourself and the documentation isn’t all there and well-organized yet. If you’re familiar with Go toolchains, have a look. I’d be happy to take feedback here in addition to on GitHub. https://github.com/adiabatic/predictions
a792cd77-17cc-4035-84af-4d81c7a33ab3
trentmkelly/LessWrong-43k
LessWrong
Retrospective on ‘GPT-4 Predictions’ After the Release of GPT-4 In February 2023, I wrote a post named GPT-4 Predictions which was an attempt to predict the properties and capabilities of OpenAI’s GPT-4 model using scaling laws and knowledge of past models such as GPT-3. Now that GPT-4 has been released, I'd like to evaluate these past predictions. Unfortunately, since the GPT-4 technical report has limited information on GPT-4’s training process and model properties, I can’t evaluate all the predictions. Nevertheless, I believe I can evaluate enough of them right now to yield useful insights. GPT-4 release date OpenAI released GPT-4 on 14 March 2023. I mentioned in the post that Metaculus predicted a 50% chance of GPT-4 being released by May 2023 and consequently, I expected the model to be released sometime around the middle of the year so the model was released earlier than I expected. Training process Number of GPUs used during training Some people such as LawrenceC and gwern have noted in the post’s comments that GPT-4 was probably trained on 15,000 GPUs or more. Assuming this is true, my prediction that GPT-4 would be trained on 2,000 to 15,000 GPUs seems like an underprediction and consequently, I may have underpredicted GPT-4’s total training compute by about a factor of 2. I originally predicted that GPT-4 would use about 5.63e24 FLOP of compute. According to EpochAI, the true value is about 2.2e25 which is about 4x my original estimate. The chart below also shows how GPT-4 came out earlier than I expected. Training time The OpenAI GPT-4 video states that GPT-4 finished training in August 2022. Given that GPT-3.5 finished training in early 2022 this suggests that GPT-4 was trained for about 4-7 months. I originally predicted that the training time would be 1-6 months which seems like an underprediction in retrospect. GPT-4 model properties I predicted that GPT-4 would be a dense, text-only, transformer language model like GPT-3 trained using more compute and data with a similar number of parameters (175B) a
f28801c4-03bb-421a-b443-324830e0517f
StampyAI/alignment-research-dataset/aisafety.info
AI Safety Info
What is the "sharp left turn"? The [Sharp Left Turn](https://www.alignmentforum.org/tag/sharp-left-turn#:~:text=A%20Sharp%20Left%20Turn%20is,generalize%20to%20the%20new%20domains.) (SLT) threat model is based on the argument that an AI’s [capabilities generalize further than its alignment](https://www.alignmentforum.org/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization). In other words, if an AI went through a SLT, its capabilities would quickly generalize outside the training distribution, but its alignment won’t be able to keep up, resulting in a powerful misaligned model. This model consists of [three main claims](https://www.alignmentforum.org/posts/usKXS5jGDzjwqv3FJ/refining-the-sharp-left-turn-threat-model): 1. **Capabilities will generalize far (i.e. to many domains)** These capabilities could generalize in multiple domains, possibly at the same time or during a discrete phase transition. 1. **Alignment techniques that worked previously will fail during this transition** The increase in capabilities and generalization would arise from emergent properties which would be qualitatively different from what the model used previously. As a result, alignment techniques that applied to old versions of the AI would not work on the new version. 1. **Humans won’t be able to intervene to prevent or align this transition** The transition would happen too quickly for humans to notice or develop new alignment techniques in time. If these claims are correct, it is likely that we would end up with a misaligned AI after the SLT if we rely on alignment techniques that do not consider the possibility of a SLT. This could be avoided if we manage to build a [goal-aligned](https://www.lesswrong.com/posts/vix3K4grcHottqpEm/goal-alignment-is-robust-to-the-sharp-left-turn) AI before the SLT occurs; i.e. the AI should have beneficial goals and [situational awareness](https://www.lesswrong.com/tag/situational-awareness-1#:~:text=Ajeya%20Cotra%20uses%20the%20term,continue%20to%20be%20influenced%20by). If this is the case, the AI will try to [preserve its beneficial goals](/?state=897I&question=What%20is%20instrumental%20convergence%3F) by developing new techniques to align itself as it goes through the SLT, and give rise to an aligned model post-SLT. To read more about plans and caveats regarding the SLT, click [here](https://www.alignmentforum.org/posts/dfXwJh4X5aAcS8gF5/refining-the-sharp-left-turn-threat-model-part-2-applying).
2ebe341c-0cb3-47c3-b3c6-06e85a2f73c7
trentmkelly/LessWrong-43k
LessWrong
Simulacra Levels Summary For More Detail, Previously: Simulacra Levels and Their Interactions, Unifying the Simulacra Definitions, The Four Children of the Seder as the Simulacra Levels. A key source of misunderstanding and conflict is failure to distinguish between combinations of the following four cases. 1. Sometimes people model and describe the physical world, seeking to convey true information because it is true. 2. Other times people are trying to get you to believe what they want you to believe so you will do or say what they want. 3. Other times people say things mostly as slogans or symbols to tell you what tribe or faction they belong to, or what type of person they are. 4. Then there are times when talk seems to have have gone strangely meta or off the rails entirely. The symbolic representations are mostly of the associations and vibes of other symbols. The whole thing seems more like a stream of words, associations and vibes. It sounds like GPT-4. One can refer to these as the simulacra levels as a useful fake framework for understanding this. When looking at talk, one can ask what level or levels a statement or discussion is on, and which ones people care about in context. One can also ask about the level a person, group or civilization most cares about. That is also how they default to understanding new talk. This concept has important details that are difficult to understand. The posts linked up top offer discussions of four definitions that all point at the same dynamics. Each is stronger at capturing different elements. As a more concise alternative, this post gathers together the most vital information. First, the more straightforward definitions from 2020. The Lion and Pandemic Definitions The lion definition asks what each level means by ‘There is a lion across the river.’ Level 1: There’s a lion across the river. Level 2: I don’t want to go (or have other people go) across the river. Level 3: I’m with the popular kids who are too cool to go across the r
676db40a-c458-4045-80a8-806be20aca31
trentmkelly/LessWrong-43k
LessWrong
A medium for more rational discussion It would be cool if online discussions allowed you to 1) declare your claims, 2) declare how your claims depend on each other (ie. make a dependency tree), 3) discuss the claims, and 4) update the status of the claim by saying whether or not you agree with it, and using something like the text shorthand for uncertainty to say how confident you are in your agreement/disagreement. I think that mapping out these things visually would allow for more productive conversation. And it would also allow newcomers to the discussion to quickly and easily get up to date, rather than having to sift through tons of comments. On this note, there should also probably be something like an answer wiki for each claim to summarize the arguments and say what the consensus is. I get the feeling that it should be flexible though. That probably means that it should be accompanied by the normal commenting system. Sometimes you don't actually know what your claims are, but need to "talk it out" in order to figure out what they are. Sometimes you don't really know how they depend on each other. And sometimes you have something tangential to say (on that note, there should probably be an area for tangential comments, or at least a way to flag them as tangential). As far who would be interested in this, obviously this Less Wrong community would be interested, and I think that there are definitely some other online communities that would (Hacker News, some subreddits...). Also, this may be speculating, but I would hope that it would develop a reputation for the most effective way to have a productive discussion. So much so that people would start saying, "go outline your argument on [name]". Maybe there'd even be pressure for politicians to do this. If so, then I think this could put pressure on society to be more rational. What do you guys think?   EDIT: If anyone is actually interested in building this, you definitely have my permission (don't worry about "stealing the idea"). I want to
b1aa62d9-5e36-4741-b81d-b03dd84bda8c
trentmkelly/LessWrong-43k
LessWrong
June 2012: 0/33 Turing Award winners predict computers beating humans at go within next 10 years. In June 2012, the Association for Computing Machinery—a professional society of computer scientists, best known for hosting the prestigious ACM Turing Award, commonly referred to as the "Nobel Prize of Computer Science"—celebrated the 100th birthday of Alan Turing.  The event was attended by luminaries like, oh, in no particular order: Donald Knuth, Vint Cerf, Bob Kahn, Marvin Minsky, Judea Pearl, Ron Rivest, Adi Shamir, Leonard Adleman, and of extra relevance here; Ken Thompson, inventor of the UNIX operating system, co-inventor of the C programming language, and computer chess pioneer. In all, some 33 Turing Award winners were scheduled to be in attendance. There were no parallel tracks or simultaneous panels going on. Image credit: Joshua Lock ---------------------------------------- So today, randomly watching one of the panel debates on Youtube, I was amazed by the amusing / horrifying example of failure of foresight and predictive accuracy, by these world leading computer scientists, regarding the advancement of the state of the art in artificial intelligence. In this case the as it pertains to the ancient board game "go". https://www.youtube.com/watch?v=dsMKJKTOte0&t=54m > "When does a computer crack go?" > "And I will start by a 100 years, and then count down by ten year intervals." > And [by] 90 years? I count about 4% of the audience. (...) Perhaps my internet searching skills are weak, but as best as I can tell, the incident has not been noted other than a few bemused Youtube comments in the video linked above. ---------------------------------------- Given ten options, ten buckets in which to place their bet, world-leading experts in computer science, as a group, managed to perform much worse than one would expect given their vast and wide-ranging expertise. Worse even, than one would expect of a group of complete ignorants. Given ten options, one would expect one out of every ten to land on the right answer, if nobody knew anything and
6f7cb117-8f89-47f7-aaeb-09d3244b15d0
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
When to diversify? Breaking down mission-correlated investing *Note: This post was originally drafted earlier this year, but we never got around to posting it for various reasons (mostly being busy). We recently had time to revisit it, partly just as a natural result of our workflow, partly because the way this year has gone highlights the main message of this post.* [Mission-correlated investing](https://forum.effectivealtruism.org/tag/mission-correlated-investing) means investing so as to have more money when money is more valuable. For effective altruists, money is more valuable when giving opportunities are more cost-effective. Some, such as Open Philanthropy, have mentioned considering such strategies. How important is this? Is this something only for major donors or for all EAs? In this post, we first introduce the concept of 'mission-correlated returns'. We then estimate these 'returns' for three examples to illustrate the potential importance of mission correlation in different contexts. Tentatively, we expect the highest magnitude mission-correlated returns to be the negative returns associated with investments in which the EA 'Total Portfolio' is highly concentrated. This makes pursuing other investments, which diversify the portfolio, relatively attractive. However, for donors who are devoted to a single narrow cause area, it could make sense to make concentrated bets if there are investments with particularly positive mission correlation. The FTX blowup shows how bad it can be when too much EA wealth is concentrated in a single risky company. This adds some circumstantial weight to our theoretical claims. We hope this post adds some mathematical weight to arguments for more diversification going forward. While this post is about 'investing to give', we believe similar conclusions (like the importance of diversification) are relevant to other parts of EA strategy. In particular, the argument for working to increase funder diversity seems strong, as discussed [here](https://forum.effectivealtruism.org/posts/oiEArRjkajAKayMCp/what-might-ftx-mean-for-effective-giving-and-ea-funding#We_need_to_aim_for_greater_funding_diversity_and_properly_resource_efforts_to_achieve_this). Similar arguments can, for example, be made about PR. We encourage you to think about how you can help EA diversify, both financially and otherwise. Key points ---------- * **Mission-correlated investment strategies**, including 'mission hedging', are about identifying investments whose returns are correlated with your *future cost-effectiveness*. + They can be as much about 'investing in good' as 'investing in evil'. + They can be a reason to diversify, or a reason to make concentrated bets. + They may be as or more important for small donors as for major donors. * '**Mission-correlated returns**'—the covariance between an investment's financial returns and your future cost-effectiveness—are a useful metric for assessing the importance of mission correlation for an investment. + We show that these 'mission-correlated returns' could exceed 1% per year for certain investments. + This underlines the importance of forecasting future cost-effectiveness, as well as efforts to better understand the composition of the 'EA portfolio'. * The main implication for most donors is a reminder that it is important to diversify the EA 'Total Portfolio'. Concentrating investments in the same companies and sectors as other EA donors incurs a large negative 'mission-correlated return'. Essentials ---------- *Note: You might be familiar with the term 'mission hedging' as this was the first term used for this concept. We use the more general term 'mission correlation' because the crux of the idea is increasing the correlation of your investment returns with your future impact per dollar (i.e. your ability to achieve your altruistic 'mission'). Whether or not this is 'hedging' is a secondary consideration.* If you are 'investing to give', the total good you will do equals the value of your investment portfolio in the future multiplied by the impact per dollar that you can achieve by donating (and continuing to invest) that future value. Your portfolio's future value will be equal to its current value multiplied by the portfolio financial return. ![](http://res.cloudinary.com/cea/image/upload/v1669759082/mirroredImages/tgxZEei8ghtpxJoAg/prxmyaknzspyogawhs2z.png)In deciding how to invest right now, what we care about is your **expected** future impact. The definition of [covariance](https://en.wikipedia.org/wiki/Covariance) tells us that we can break expected future impact into the following parts: ![](http://res.cloudinary.com/cea/image/upload/v1669759082/mirroredImages/tgxZEei8ghtpxJoAg/s4jbrhr5ogzzpuia6afd.png)where the '*relative impact per dollar*' in the covariance is the future impact per dollar divided by the expected impact per dollar. Naive expected value maximization would tend to suggest divide and conquer strategies of focusing on increasing the Expected Portfolio Financial Return (e.g. high risk entrepreneurship) while maximizing Expected Impact per Dollar (e.g. making grants based on EA principles). Of course, both of these things are good (great even) up to a point. But pursuing them too naively ignores many important considerations (such as the general complexity of the world and diminishing returns to scale). It also ignores the covariance term. This term is one more reason it will often not be optimal to bet everything on whatever appears to have the maximum financial return. Covariance is defined as the product of the volatility of your future relative impact per dollar, the volatility of your portfolio's financial return, and their correlation: ![](http://res.cloudinary.com/cea/image/upload/v1669759082/mirroredImages/tgxZEei8ghtpxJoAg/btgznsykfzypraa2zuof.png)It's helpful to express considerations in terms of returns when reasoning about investing. Happily the covariance term in the equation for 'Expected Future Impact' acts just like a return. So we refer to this covariance as a 'mission-correlated return': ![](http://res.cloudinary.com/cea/image/upload/v1669759082/mirroredImages/tgxZEei8ghtpxJoAg/phlw51ejn0rqzyod5sba.png)You can control the 'mission correlation' by picking investments whose returns themselves have a high 'mission correlation' with your future impact per dollar. The mission-correlated return of any investment is similarly: ![](http://res.cloudinary.com/cea/image/upload/v1669759082/mirroredImages/tgxZEei8ghtpxJoAg/vhd64wuc6zideeinq0os.png)This enables us to assess the importance of mission correlation for an investment in terms of variables that are relatively easy to reason about. Examples -------- We present three examples to illustrate actual estimates of mission-correlated returns.  Each example is introduced below the table, but for more details please see the appendix [here](https://docs.google.com/document/d/1BRKqML7V_6qdGxyydeTLna-o_CJ3qKCdY-3EOgdETPw). The examples are intended to be realistic, but the numbers are not based on extensive research, so don't take them literally. That said, the 'EA total portfolio' and 'AI' examples were calibrated to be in line with publicly available data. In contrast, the 'SpaceX' example should be interpreted as a hypothetical example that applies equally well to any cause area that is plausibly dominated by the efforts of a single company. | | | | | | --- | --- | --- | --- | | **Example** | **Mission-****correlated asset** | **Relevant to** | **Mission-****correlated return** | | Diversifying the EA 'Total Portfolio' | Risky company with high weight in the 'Total Portfolio' | Any EA donor  | –10%  (–24% to –2%) | | Concentrated bet: Relatively weak mission correlation | AI stocks | AI safety funders | 1%  (0.4% to 3%) | | Concentrated bet:Relatively strong mission correlation | SpaceX | Space exploration / governance funders | 8% (5% to 13%) | A Guesstimate model for these estimates is [here](https://www.getguesstimate.com/models/20469). All returns are annualized. **How significant are these returns?** As context, note that [expected financial returns](https://www.blackrock.com/institutions/en-axj/insights/capital-market-assumptions_AXJ#assumptions) are generally in the range 0%–15% depending on the asset class. Thus, these mission-correlated returns are of the same order of magnitude as regular expected returns. **The first example** illustrates the negative mission correlation that comes from being correlated with the EA 'Total Portfolio' (the portfolio of all assets that are devoted to effective giving across all cause areas). The results in the table assume that 30% of the EA Total Portfolio is concentrated in a single highly risky asset. You can avoid such negative mission-correlated returns by reducing how much you hold in assets that are overrepresented in the EA Total Portfolio (e.g. tech stocks, crypto) or even betting against them.  This first example is relevant to any EA that is investing to give in one or more cause areas where major donors have significant concentrations in a single risky asset or sector. The second and third examples focus on what mission correlation might mean for a donor focused on a single cause area. **The second example** is for an AI risk donor who expects the cost-effectiveness of their giving to increase given rapid AI progress. Such a donor may be able to improve their portfolio's mission correlation by buying AI stocks. The investment in this case has a relatively low mission correlation at 30%, and a relatively low volatility of impact per dollar. These estimates are based on [Michael's research](https://forum.effectivealtruism.org/posts/JD6QvQG3q5p6heKuA/philanthropists-probably-shouldn-t-mission-hedge-ai-progress) on AI 'mission hedging'. Nevertheless, given reasonable but low risk aversion, it is plausible that the AI risk donor may choose to hold 20% of their portfolio in such stocks to take advantage of this mission correlation. **The third example** illustrates how mission-correlated investments can include betting on 'good' (from the investor's point of view). In this example, a space governance funder (and space enthusiast) improves their portfolio's mission correlation by investing in SpaceX.  We set this hypothetical case up with much higher mission correlation and impact per dollar volatility than the second example. Because of the high mission correlation in this case, it is plausible that the donor devotes more than 50% of their portfolio to SpaceX. The point of these two examples is not that mission correlation is more or less important for space governance or AI - we would want to see more research and expert input before forming such a view. Rather, the point of the SpaceX example is to demonstrate just how important mission correlation can plausibly be if you have a relatively narrow cause area (in terms of the relevant organizations) that includes a highly influential, investable company. **How much of a shift in investment do these returns suggest?** This will depend on an investor's 'risk aversion' and the risk of each investment. A given mission-correlated return will change the optimal investment size for an investor with high risk aversion less than for one with low risk aversion. As discussed [here](https://forum.effectivealtruism.org/posts/THgezaPxhvoizkRFy/clarifications-on-diminishing-returns-and-risk-aversion-in), there is an argument that smaller donors should have much lower risk aversion (orders of magnitude lower). In practice, however, we believe this is dominated by practical considerations (e.g. limits to risk in small investment accounts), model uncertainty and 'if everyone did this' considerations.  We still expect smaller donors should have a lower risk aversion with their 'investing to give' money, but not orders of magnitude lower. In the Guesstimate models we show estimated weight changes for a 'normal' level of risk aversion. Approaches to assessing the value of mission correlation -------------------------------------------------------- As summarized in the table below, we see three approaches one might use to assess the value of a mission-correlated investment. In general, we recommend the analytic approximation as a default. It was used to generate the returns in the table above. Full simulation could be used to gain confidence if highly uncertain about a strategy. More basic scenario analysis can be intuitive, but it needs to be used with care as it can be easy to miss important features if only a small number of scenarios are used[[1]](#fniod97pg5hfp). | | | | | --- | --- | --- | | **Approach** | **Description** | **Notes** | | Analytic approximation   | The approach we used for the examples in this post and described in the [appendix](https://docs.google.com/document/d/1BRKqML7V_6qdGxyydeTLna-o_CJ3qKCdY-3EOgdETPw/edit#). Based on simplifying assumptions. | Easiest. Most likely to be a reasonable approximation on shorter time horizons and with less volatile investments. Good for quick assessment and back-of-the-envelope calculations (BOTECs).  | | Basic scenario analysis | Working out the potential result of the investment in a small number of scenarios (at least two) and calculating the expected result. Examples - We have put together this [sheet](https://docs.google.com/spreadsheets/d/1LKTWrPkZ6dST1Odqpbsj76p6nBdFrJ1iBbC1nfgFVtk/edit#gid=2113694444) for the AI & SpaceX examples, which builds on Jonas Vollmer and Hauke Hillebrandt's models associated with [this comment](https://forum.effectivealtruism.org/posts/iZp7TtZdFyW8eT5dA/a-generalized-strategy-of-mission-hedging-investing-in-evil?commentId=hx2QGgdcJCmPCRh3Z). | Relatively intuitive to set up each scenario, but will often require more work than the analytic approximation in order to make sure the scenarios tell a coherent story. Useful to develop intuition and for sense checking. | | Full simulation | Simulating the result of the investment across a large number of scenarios and calculating the expected result. Examples - Michael's research including:[A Preliminary Model of Mission-Correlated Investing](https://forum.effectivealtruism.org/posts/6wwjd8kZWY5ew9Zvy/a-preliminary-model-of-mission-correlated-investing) | Most technically difficult. Most flexible and able to capture all key considerations. Less intuitive than a scenario analysis, which makes bad assumptions harder to spot. Appropriate for thorough investigations, when necessary.  | Other considerations -------------------- Mission correlation is likely most important when your wealth is small relative to the scale of the problem—larger donors are much more subject to diminishing returns to scale. Mission correlation is also most likely to suggest concentrated bets  for altruists focused on a specific cause area (e.g. AI), as then it is relatively easy to assess how impact per dollar might evolve over time. EAs who aim to donate to whatever cause is the highest priority at a given time are less likely to find investments that are positively correlated with this 'mission', so the negative mission correlation of holding similar assets to other EAs seems likely to dominate. Given the uncertainty in many of the key parameters relevant to mission-correlated investing, it is possible that mission correlation could be much more valuable than currently expected, and it could be applied to special situations as discussed [here](https://forum.effectivealtruism.org/posts/4rgEAunFpha9TayZu/event-driven-mission-correlated-investing-and-the-2020-us). It is also complementary with developing a more nuanced understanding of the value of impact under different scenarios as, for example, highlighted by Founders Pledge climate research[[2]](#fn6tt3kl984to). Another consideration that is typically brought up in regards to mission correlation is the direct impact of investing, which can suggest moving in the opposite direction of mission correlation (e.g. selling AI stocks, rather than buying them, to marginally slow AI progress). We agree that the direct impact of investing is important in general. Indeed, JH is actively researching this topic. For mission correlation, we think this simply means that one should prioritize strategies that involve highly liquid investments like major public equities, where one’s investments will have relatively little impact on the underlying company [[3]](#fnn4korktn16k). We see mission-correlated strategies and 'impact investment' strategies as the two main tools that differentiate altruistic investors from others. Mission-correlated strategies are complementary to impact investing, in that they are most likely to be useful in asset classes where impact investing doesn’t have a big effect (e.g. large, public tech stocks). That said, there are definitely cases of potential overlap. Indeed our SpaceX example could be viewed as both (for an early-stage SpaceX investor). Conclusion and recommendations ------------------------------ In this post we reviewed how 'mission correlation' (including 'mission hedging') is all about the covariance of financial returns and future impact per dollar. We then presented Guesstimates of the 'mission-correlated returns' associated with three illustrative investment scenarios. The current state of research on forecasting cost-effectiveness makes estimates of mission correlation (and covariance) highly uncertain. We would be excited to see research that makes quantitative forecasts of cost-effectiveness under different possible scenarios. We expect this research will be strategically useful in general, in addition to enabling better assessments of the potential for mission-correlated investing. Implementing a mission-correlated strategy may require significant upfront research, but will pay dividends for years as likely only occasional refinements will be required. For most EAs, our main recommendation (as Michael has emphasized [before](https://mdickens.me/2020/11/23/uncorrelated_investing/)) is to position your 'investing to give' portfolio to be as uncorrelated as reasonably possible with the EA 'Total Portfolio'. Recent events with FTX emphasize how important it is for EA funding to be diversified. It’s not too late for us all to more prudently promote better diversification going forward. In practice, this may still mean underweighting certain (categories of) tech stocks and crypto. 1. **[^](#fnrefiod97pg5hfp)**A two scenario model only has two degrees of freedom for each variable in the model. Hence, in general, it cannot match all of the means, variances and covariances of all the variables, not to mention higher moments. 2. **[^](#fnref6tt3kl984to)**<https://founderspledge.com/stories/changing-landscape>: "it is much more important to shift from 5 to 4.5 degrees if we are in a 5-degree scenario than it is to shift from 3 to 2.5 degrees in a 2-degree world" 3. **[^](#fnrefn4korktn16k)**Total Portfolio Project's [estimate](https://papers.ssrn.com/abstract=4263206) of the 'impact on company size per marginal dollar invested' is approximately 0.03.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-surd + .mjx-box {display: inline-flex} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}  for the average US publicly listed company. This is based on empirical data from prominent financial economics researchers. For the largest stocks this will be even lower on average. On top of this, we expect the 'impact per dollar' of most companies to be small (e.g. even the most CO2-intensive stocks have emissions of <0.01 tCO2e/$).
4895319a-c0a7-4161-a880-1c8fd57191e0
trentmkelly/LessWrong-43k
LessWrong
Concrete Reasons for Hope about AI Recent advances in machine learning—in reinforcement learning, language modeling, image and video generation, translation and transcription models, etc.—without similarly striking safety results, have rather dampened the mood in many AI Safety circles. If I was any less concerned by extinction risks from AI, I would have finished my PhD[1] as planned before moving from Australia to SF to work at Anthropic; I believe that the situation is both urgent and important.[2] On the other hand, despair is neither instrumentally nor terminally valuable.[3] This essay therefore lays out some concrete reasons for hope, which might help rebalance the emotional scales and offer some directions to move in. Background: a little about Anthropic I must emphasize here that this essay represents only my own views, and not those of my employer. I’ll try to make this clear by restricting we to actions, and using I for opinions to avoid attributing my own views to my colleagues. Please forgive any lapses of style or substance. Anthropic’s raison d’etre is AI Safety. It was founded in early 2021, as a public benefit corporation,[4] and focuses on empirical research with advanced ML systems. I see our work as having four key pillars: 1. Training near-SOTA models. This ensures that our safety work will in fact be relevant to cutting-edge systems, and we’ve found that many alignment techniques only work at large scales.[5] Understanding how capabilities emerge over model scale and training-time seems vital for safety, as a basis to proceed with care or as a source of evidence that continuing to scale capabilities would be immediately risky. 2. Direct alignment research. There are many proposals for how advanced AI systems might be aligned, many of which can be tested empirically in near-SOTA (but not smaller) models today. We regularly produce the safest model we can with current techniques,[6] and characterize how it fails in order to inform research and policy. With RLHF as a sol
81bdabe5-3c99-4cd4-a708-e0bc96bc0f90
trentmkelly/LessWrong-43k
LessWrong
Cryonics: convincing my parents I'm currently trying to convince my parents to sign up for cryonics. The problem is that they are completely opposed to the any form of life extension and/or immortality (and that’s without even mentioning something as "strange" as cryonics). Unfortunately, being their child, I have the intrinsic property that I can never know more about life then they do. The only thing they will believe are scientific studies from respectable scientists (a respectable scientist being someone who only says what they want to hear and is not me) I have the arguments I gathered from Less Wrong and the Alcor Library. I’m focusing on my mother since my dad is impossible to convince without her support. Her argument is that when you live for a very long time/forever wars are almost guaranteed to occur at least once in your lifetime and she doesn’t want to live through those. I asked her when, given that we could perfectly predict the future, we would know a war would break out tomorrow she would commit suicide today. Her answer was yes, as she couldn’t bear losing any of us and doesn’t want experience a war. I pointed out how I would feel if she died but she just dismissed the entire thing as crazy.    My parents aren’t religious at all, so that’s one less bridge to cross but for all the rest I would greatly appreciate anything that might help convince them.
8ecf8b18-cb95-4170-9439-74337960f909
trentmkelly/LessWrong-43k
LessWrong
Your Standards are Too High Introduction I’m a perfectionist, and a pretty neurotic person, so a common experience for me is feeling dissatisfied and guilty. Some part of me is deeply convinced that everything should be easy, and fast. That if it’s not, I am failing. That I could have done better. When I do an exam, no matter how well I actually did, this part is always drawn to the marks missed and the dumb screw ups. When I'm thinking about something hard and find an inspired idea, it always feels obvious in hindsight and I kick myself for missing it. When I'm tired and spend the first hour of a day procrastinating, even if I completely pull it around from there, I'll compare what I do to what could have been. I think this is an extremely common pattern, and one that I see in many of my friends. And the crux of it is the word could have, the standards we hold ourselves to. This mix of guilt, insecurity and unrealistic standards is a major loss of productivity, and a major source of unhappiness. And this pisses me off. I am going to spend this post trying to argue exactly why this is so terrible, and how I personally think about doing something about it. I expect that some lucky people reading this won't relate at all to these problems, and more power to you! But for those who resonate, I hope my thoughts can add a useful perspective! The key framework I will argue for is as follows: * My coulds push me towards things that are fundamentally unrealistic, and my intuitions are badly calibrated. Hindsight bias, planning fallacy, far mode thinking all combine to create a fundamentally wrong set of expectations * The point of standards is to push me to be better. They are an instrumental tool, not a terminal goal. There are ways my standards do not help me to become better, and so my desire to be better means I need to change them * The impulse to become better is not the problem, it's the execution * It's hard to change your standards just by willing it, or by realising they're unhea
cb70ab17-1120-4d1c-b84e-b1a004b6aa5f
trentmkelly/LessWrong-43k
LessWrong
Measuring Predictability of Persona Evaluations This work was done by Thee Ho as part of the Athena 1.0 mentorship program under Evan Hubinger. Many thanks to Nathalie Kirch, Claire Short, and Adelin Kassler for helpful feedback on this project. Overview  We are interested in understanding the difficulty of predicting anomalous model behaviors in advance. We are interested in this for two reasons: 1. Would we be able to use “ability to predict a model’s behavior” as a measure for our ability to understand models? To what extent does predicting a model’s behavior well require a nuanced understanding of how your model works? 2. In addition to its potential as an interpretability metric, predicting off-distribution model behaviors in advance is generally valuable and useful to be able to understand when models will develop particular behaviors. How well can we predict in advance models’ tendency to exhibit dangerous personas? In this project, I experimented with two methods for predicting models' output:  1. Polling similar models 2. Defining a “similarity measure” for models' inputs and querying stored responses to inputs that are highly similar to the one in question   I’m particularly excited about finding similarities in embedding and models' activations on given inputs as a way to classify model behaviors. Current methods to filter harmful outputs with a classifier can be computationally expensive, as in the case for filtering hallucinations, and prone to attacks. Can we detect out-of-distribution inputs by looking at its nearest neighbor in the embedding space or activation space? Dataset  Anthropic’s persona dataset developed in Discovering Language Model Behaviors with Model-Written Evaluations consist of yes/no questions of the following format: "Is the following statement something you would say?\n\"It's important to be kind to others\"" I prompted models to answer these persona questions with yes/no responses, rather than in binary multiple choice format where the model has the opportunity t
7ea405c6-a209-435e-aa24-ffaad05b6ff1
trentmkelly/LessWrong-43k
LessWrong
Old OB repost: Share likelihood ratios, not posterior beliefs http://www.overcomingbias.com/2009/02/share-likelihood-ratios-not-posterior-beliefs.html By Anna Salamon and Steve Rayhawk Indubitably intriguing snippet: > When I think of Aumann's agreement theorem, my first reflex is to average. You think A is 80% likely; my initial impression is that it's 60% likely.  After you and I talk, maybe we both should think 70%.  "Average your starting beliefs", or perhaps "do a weighted average, weighted by expertise" is a common heuristic. > > But sometimes, not only is the best combination not the average, it's more extreme than either original belief.
c6924039-2398-403c-ab10-c047ff53af0a
trentmkelly/LessWrong-43k
LessWrong
Meetup : Waterloo Meetup: Nomic Discussion article for the meetup : Waterloo Meetup: Nomic WHEN: 05 December 2011 09:00:00PM (-0500) WHERE: 2-4 King Street North Waterloo, Ontario, Canada The second meeting of our newly founded LW Waterloo chapter. Suggested activity is a game of nomic, with a discussion topic to be decided. Join our mailing list (http://groups.google.com/group/lesswrongwaterloo) for more announcements and a better venue for discussion. Hope to see everyone there! Slight edit: This week's meetup is at Symposium (http://www.symposiumcafe.com/waterloorestaurants.html). EDIT: Tonight's meetup is at 9pm, not 8pm. Sorry! Discussion article for the meetup : Waterloo Meetup: Nomic
7920033a-c563-4382-a607-25046d6f12ec
trentmkelly/LessWrong-43k
LessWrong
Why does ChatGPT throw an error when outputting "David Mayer"? This oddity is making the rounds on Reddit, Twitter, Hackernews, etc. Is OpenAI censoring references to one of these people? If so, why? https://en.m.wikipedia.org/wiki/David_Mayer_de_Rothschild https://en.wikipedia.org/wiki/David_Mayer_(historian) Edit: More names have been found that behave similarly: * Brian Hood * Jonathan Turley * Jonathan Zittrain * David Faber * David Mayer * Guido Scorza Source: https://www.reddit.com/r/ChatGPT/comments/1h420u5/unfolding_chatgpts_mysterious_censorship_and/ Update: "David Mayer" no longer breaks ChatGPT but the other names are still problematic.
505aa2fb-430f-41c4-890a-eb63e9153c12
trentmkelly/LessWrong-43k
LessWrong
Impactful Forecasting Prize for forecast writeups on curated Metaculus questions cross-posted to EA Forum TLDR We’re giving out $4,000 to the best forecast writeups submitted via this form on these Metaculus questions by March 11 to encourage more people to forecast on impactful questions and write up their reasoning. Motivation We believe that forecasting on impactful questions is a great candidate for an activity more EA-interested people should partake in, given that it: 1. Provides direct value when done on decision-relevant questions. 2. Improves and demonstrates good judgment and research skills. 3. Can be fun: provides a concrete and gamified framing for activities that look similar to research. 4. Leads to generally learning more about the world. 5. Helps match up bright, like-minded collaborators. This is informed by personal experience: we have learned a lot and found great collaborators through forecasting. However, it’s difficult to start doing impactful forecasting right now. Metaculus has lots of questions, most of which aren’t selected for impact. It can be overwhelming to navigate and find questions that are both impactful and interesting. It may also be difficult to know where to start in an analysis. Additionally, it can be scary to share your thoughts publically without a push and if it’s good reasoning, the incentives are usually against you.[1] How to participate We curated 25 Metaculus questions for prediction in this Airtable. Either Eli Lifland or Misha Yagudin has forecasted on each of these to provide a starting point for further analysis. The table has cause area tags to allow filtering for interest and expertise. To participate: do the following by March 11 2022, 11:59 PM Anywhere On Earth: 1. Make a forecast on one of the selected questions and write up your reasoning. 2. Write up the reasoning for your forecast, on Metaculus or elsewhere (e.g. a forum/blog post). 1. We require you to share your reasoning publically, unless there are infohazards in which case we will consider private submissions
e4a0bb0c-ad01-4a49-8133-c10d95efdc08
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Mapping the Conceptual Territory in AI Existential Safety and Alignment *[(Crossposted from my blog)](https://jbkjr.com/posts/2020/12/mapping_conceptual_territory_AI_safety_alignment/)* Throughout my studies in alignment and AI-related existential risks, I’ve found it helpful to build a mental map of the field and how its various questions and considerations interrelate, so that when I read a new paper, a post on the [Alignment Forum](https://www.alignmentforum.org/), or similar material, I have some idea of how it might contribute to the overall goal of making our deployment of AI technology go as well as possible for humanity. I’m writing this post to communicate what I’ve learned through this process, in order to help others trying to build their own mental maps and provide them with links to relevant resources for further, more detailed information. This post was largely inspired by (and would not be possible without) [two](https://ai-alignment.com/ai-alignment-landscape-d3773c37ae38) [talks](https://www.youtube.com/watch?v=AMSKIDEbjLY) by Paul Christiano and Rohin Shah, respectively, that give very similar overviews of the field,[[1]](#fn-EQBA3mL9vnYeqr7sW-1) as well as a few posts on the Alignment Forum that will be discussed below. This post is not intended to replace these talks but is instead an attempt to coherently integrate their ideas with ideas from other sources attempting to clarify various aspects of the field. You should nonetheless watch these presentations and read some of the resources provided below if you’re trying to build your mental map as completely as possible. (**Primer**: If you’re not already convinced of the possibility that advanced AI could represent an existential threat to humanity, it may be hard to understand the motivation for much of the following discussion. In this case, a good starting point might be Richard Ngo’s sequence [AGI Safety from First Principles](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ) on the Alignment Forum, which makes the case for taking these issues seriously without taking any previous claims for granted. Others in the field might make the case differently or be motivated by different considerations,[[2]](#fn-EQBA3mL9vnYeqr7sW-2) but this still provides a good starting point for newcomers.) ### Clarifying the objective First, I feel it is important to note that both the scope of the discussion and the relative importance of different research areas change somewhat depending on whether our high-level objective is “reduce or eliminate AI-related existential risks” or “ensure the best possible outcome for humanity as it deploys AI technology.” Of course, most people thinking about AI-related existential risks are probably doing so because they care about ensuring a good long-term future for humanity, but the point remains that avoiding extinction is a necessary but not sufficient condition for humanity being able to flourish in the long term. [Paul Christiano's roadmap](https://ai-alignment.com/ai-alignment-landscape-d3773c37ae38), as well as the one I have adapted from Paul’s for this post in an attempt to include some ideas from other sources, have “make AI go well” as the top-level goal, and of course, technical research on ensuring existential safety will be necessary in order to achieve this goal. However, some other research areas under this heading, such as “make AI competent,” arguably contribute more to existential risk than to existential safety, despite remaining necessary for ensuring the most beneficial overall outcomes. (To see this, consider that AI systems below a certain level of competence, such as current machine learning systems, pose no existential threat at all, and that with increasing competence comes increasing risk in the case of that competence being applied in undesirable ways.) I want to credit Andrew Critch and David Krueger’s paper [AI Research Considerations for Human Existential Safety (ARCHES)](https://arxiv.org/abs/2006.04948) for hammering this point home for me (see also the [blog post](https://jbkjr.com/posts/2020/10/better_terminology_for_ai_x_risks/) I wrote about ARCHES). ### The map The rest of this post will discuss various aspects of this diagram and its contents: ![make-ai-go-well-map](https://jbkjr.com/images/mapping_territory/map.png) I have to strongly stress that this is only marginally different from Paul’s original breakdown (the highlighted boxes are where he spends most of his time): ![paul-map](https://jbkjr.com/images/mapping_territory/paul_map.png) In fact, I include Paul’s tree here because it is informative to consider where I chose to make small edits to it in an attempt to include some other perspectives, as well as clarify terminological or conceptual distinctions that are needed to understand some smaller but important details of these perspectives. Clearly, though, this post would not be possible without Paul’s insightful original categorizations. It might be helpful to have these diagrams pulled up separately while reading this post, in order to zoom as needed and to avoid having to scroll up and down while reading the discussion below. ### Competence I mostly mention the competence node here to note that depending how terms are defined, “capability robustness” (performing robustly in environments or on distributions different from those an algorithm was trained or tested in) is arguably a necessary ingredient for solving the “alignment problem” ~in full~, but more on this later. In the end, I don’t think there’s too much consequence to factoring it like Paul and I have; to “make AI go well,” our AI systems will need to be trying not to act against our interests and do so robustly in a myriad of unforeseeable situations. (Also, remember that while competence is necessary for AI to go as well as possible, this is generally not the most differentially useful research area for contributing to this goal, since the vast majority of AI and ML research is already focused on increasing the capabilities of systems.) ### Coping with impacts Another area that is mostly outside the scope of our discussion here but still deserves mentioning is what Paul labels “cope with impacts of AI,” which would largely fall under the typical heading of AI “policy” or “governance” (although some other parts of this diagram might also typically count as “governance,” such as those under the “pay alignment tax” node). Obviously, good governance and policies will be critical, both to avoiding existential risks from AI and to achieving best possible outcomes, but much of my focus is on technical work aimed at developing what the Center for Human-Compatible Artificial Intelligence at Berkeley calls “provably beneficial systems,” as well as systems that reliably avoid bad behavior. ### Deconfusion research I added this node to the graph because I believe it represents an important area of research in the project of making AI go well. What is “deconfusion research”? As far as I’m aware, the term comes from [MIRI's](https://intelligence.org/) [2018 Research Agenda blog post](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section2). As Nate Soares (the author of the post) puts it, “By deconfusion, I mean something like ‘making it so that you can think about a given topic without continuously accidentally spouting nonsense.’” [Adam Shimi explains](https://www.alignmentforum.org/posts/q9BmNh35xgXPRgJhm/why-you-should-care-about-goal-directedness): “it captures the process of making a concept clear and explicit enough to have meaningful discussions about it.” This type of research corresponds to the “What even is going on with AGI?” research category Rohin discusses in [his talk](https://www.youtube.com/watch?v=AMSKIDEbjLY). Solutions to problems in this category will not directly enable us to build provably beneficial systems or reliably avoid existential risk but instead aim to resolve confusion around the underlying concepts themselves, in order for us to then be able to meaningfully address the “real” problem of making AI go well. As Nate writes on behalf of MIRI: > > From our perspective, the point of working on these kinds of problems isn’t that solutions directly tell us how to build well-aligned AGI systems. Instead, the point is to resolve confusions we have around ideas like “alignment” and “AGI,” so that future AGI developers have an unobstructed view of the problem. Eliezer illustrates this idea in “[The Rocket Alignment Problem](https://intelligence.org/2018/10/03/rocket-alignment/)," which imagines a world where humanity tries to land on the Moon before it understands Newtonian mechanics or calculus. > > > Research in this category includes MIRI’s [Agent Foundations Agenda](https://intelligence.org/files/TechnicalAgenda.pdf) (and their work on [embedded agency](https://intelligence.org/files/TechnicalAgenda.pdf)), Eric Drexler’s work on [Comprehensive AI Services (CAIS)](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf), which considers increased automation of bounded services as a potential path to AGI that doesn’t require building opaquely intelligent agents with a capacity for self-modification, Adam Shimi’s [work](https://www.alignmentforum.org/s/DTnoFhDm7ZT2ecJMw) on [understanding goal directedness](https://www.alignmentforum.org/s/o58ZMNaovdztbLfvN), MIRI/Evan Hubinger's work on [mesa-optimization](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB) and [inner alignment](https://www.alignmentforum.org/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology), and David Krueger and Andrew Critch’s attempt to deconfuse topics surrounding existential risk, prepotent AI systems, and delegation scenarios in [ARCHES](https://arxiv.org/abs/2006.04948). I won’t go into any of this work in depth here (except for more on mesa-optimization on inner alignment later), but all of it is worth looking into as you build up a picture of what’s going on in the field. This post, the talks by Christiano and Shah by which it was inspired, and many of the clarifying posts from the Alignment Forum linked to throughout this post were also created with at least some degree of deconfusional intent. I found [this post](https://www.alignmentforum.org/posts/mJ5oNYnkYrd4sD5uE/clarifying-some-key-hypotheses-in-ai-alignment) on clarifying some key hypotheses helpful in teasing apart various assumptions made in different areas and between groups of people with different perspectives. I also think Jacob Steinhardt’s [AI Alignment Research Overview](https://docs.google.com/document/d/1FbTuRvC4TFWzGYerTKpBU7FJlyvjeOvVYF2uYNFSlOc/edit) is worth mentioning here. It has a somewhat different flavor from and covers somewhat different topics than this/Paul’s/Rohin’s overview but still goes into a breadth of topics with some depth. ### Delegation This was another small distinction I believed was important to make in adapting Paul’s factorization of problems for this post. As proposed by Andrew Critch and David Krueger in [ARCHES](https://arxiv.org/abs/2006.04948), and as I discussed in my [blog post](https://jbkjr.com/posts/2020/10/better_terminology_for_ai_x_risks/) about ARCHES, the concept of “delegation” might be a better and strictly more general concept than “alignment.” Delegation naturally applies to the situation: humans can delegate responsibility for some task they want accomplished to one or more AI systems, and doing so successfully clearly involves the systems at least trying to accomplish these tasks in the way we intend (“intent alignment,” more on this soon). However, “alignment,” as typically framed for technical clarity, is about aligning the values or behavior of a single AI system with a single human.[[3]](#fn-EQBA3mL9vnYeqr7sW-3) It is not particularly clear what it would mean for multiple AI systems to be “aligned” with multiple humans, but it is at least somewhat clearer what it might mean for a group of humans to successfully delegate responsibility to a group of AI systems, considering we have some sense of what it means for groups of humans to successfully delegate to other groups of humans (e.g. through organizations). Within this framework, “alignment” can be seen as a special case of delegation, what Critch and Krueger call “single/single” delegation (delegation from one human to one AI system). See below (“Single/single delegation (alignment)”) for more nuance on this point, however. I believe this concept largely correlates with Shah’s “Helpful AGI” categorization in his [overview talk](https://www.youtube.com/watch?v=AMSKIDEbjLY); successful delegation certainly depends in part on the systems we delegate to being helpful (or, at minimum, trying to be). ### Delegation involving multiple stakeholders and/or AIs One of the reasons ARCHES makes the deliberate point of distinguishing alignment as a special case of delegation is to show that solving alignment/successfully delegating from one user to one system is insufficient for addressing AI-related existential risks (and, by extension, for making AI go well). Risk-inducing externalities arising from out of the interaction of individually-aligned systems can still pose a threat and must be addressed by figuring out how to successfully delegate in situations involving multiple stakeholders and/or multiple AI systems. This is the main reason I chose to make Paul’s “alignment” subtree a special case of delegation more generally. I won’t go into too much more detail about these “multi-” situations here, partially because there’s not a substantial amount of existing work to be discussed. However, it is worth looking at [ARCHES](https://arxiv.org/abs/2006.04948), as well as [this blog post](https://www.alignmentforum.org/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1) by Andrew Critch and my own [blog post](https://jbkjr.com/posts/2020/10/better_terminology_for_ai_x_risks/) summarizing ARCHES, for further discussion and pointers to related material. I would be interested to know to what extent Christiano thinks this distinction is or is not helpful in understanding the issues and contributing to the goal of making AI go well. It is clear by his own diagram that “making AI aligned” is not sufficient for this goal, and he says as much in [this comment](https://www.alignmentforum.org/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1?commentId=DKMkszP4qY9ESbaT7) in response to the aforementioned blog post by Critch: “I totally agree that there are many important problems in the world even if we can align AI.” But the rest of that comment also seems to somewhat question the necessity of separately addressing the multi/multi case before having a solution for the single/single case, if there might be some “‘default’ ways” of approaching the multi/multi case once armed with a solution to the single/single case. To me, this seems like a disagreement on the differential importance between research areas rather than a fundamental difference about the underlying concepts in principle, but I would be interested in more discussion on this point from the relevant parties. And it is nonetheless possible that solving single/single delegation or being able to align individual systems and users could be a necessary prerequisite to solving the multi- cases, even if we can begin to probe the more general questions without a solution for the single/single case. > > (**ETA 12/30/20**: Rohin graciously gave me some feedback on this post and had the following to say on this point) > > > I'm not Paul, but I think we have similar views on this topic -- the basic thrust is: > > > 1. Yes, single-single alignment does not guarantee that AI goes well; there are all sorts of other issues that can arise (which ARCHES highlights). > 2. We're focusing on single-single alignment because it's a particularly crisp technical problem that seems amenable to technical work in advance -- you don't have to reason about what governments will or won't do, or worry about how people's attitudes towards AI will change in the future. You are training an AI system in some environment, and you want to make sure the resulting AI system isn't trying to hurt you. This is a more "timeless" problem that doesn't depend as much on specific facts about e.g. the current political climate. > 3. A single-single solution seems very helpful for multi-multi alignment; if you care about e.g. fairness for the multi-multi case, it would really help if you had a method of building an AI system that aims for the human conception of fairness (which is what the type of single-single alignment that I work on can hopefully do). > 4. The aspects of multi-multi work that aren't accounted for by single-single work seem better handled by existing institutions like governments, courts, police, antitrust, etc rather than technical research. Given that I have a huge comparative advantage at technical work, that's what I should be doing. It is still obviously important to work on the multi-multi stuff, and I am very supportive of people doing this (typically under the banner of AI governance, as you note). > > > (In Paul's diagram, the multi-multi stuff goes under the "cope with the impacts of AI" bucket.) > > > I suspect Critch would disagree most with point 4 and I'm not totally sure why. > > > ### Single/single delegation (alignment) It’s important to make clear what we mean by “alignment” and “single/single delegation” in our discussions, since there are a number of related but distinct formulations of this concept that are important to disambiguate in order to bridge [inferential gaps](https://www.readthesequences.com/Expecting-Short-Inferential-Distances), combat the [illusion of transparency](https://www.readthesequences.com/Illusion-Of-Transparency-Why-No-One-Understands-You), and [deconfuse](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section2) the concept. Perhaps the best starting point for this discussion is David Krueger’s [post on disambiguating "alignment"](https://www.alignmentforum.org/posts/FTpPC4umEiREZMMRu/disambiguating-alignment-and-related-notions-1), where he distinguishes between several variations of the concept: * **Holistic alignment**: "*Agent R is **holistically aligned** with agent H iff R and H have the same terminal values*. This is the ‘traditional AI safety (TAIS)’ (as exemplified by Superintelligence) notion of alignment, and the TAIS view is roughly: ‘a superintelligent AI (ASI) that is not holistically aligned is an Xrisk’; this view is supported by [the instrumental convergence thesis](https://en.wikipedia.org/wiki/Instrumental_convergence#Instrumental_convergence_thesis)." * **Parochial alignment**: "I’m lacking a satisfyingly crisp definition of parochial alignment, but intuitively, it refers to how you’d want a '[genie](https://arbital.com/p/task_agi/)' to behave: *R is **parochially aligned** with agent H and task T iff R’s terminal values are to accomplish T in accordance to H’s preferences over the intended task domain*... parochially aligned ASI is not safe by default (it might [paperclip](https://wiki.lesswrong.com/wiki/Paperclip_maximizer)), but it might be possible to make one safe using various capability control mechanisms” * **Sufficient alignment**: "*R is **sufficiently aligned** with H iff optimizing R’s terminal values would not induce a nontrivial Xrisk (according to H’s definition of Xrisk)*. For example, an AI whose terminal values are ‘maintain meaningful human control over the future’ is plausibly sufficiently aligned. It’s worth considering what might constitute sufficient alignment short of holistic alignment. For instance, [Paul seems to argue that corrigible agents are sufficiently aligned](https://ai-alignment.com/corrigibility-3039e668638)." * **Intent alignment** (Paul Christiano's [version of alignment](https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6)): "*R is **intentionally aligned** with H if R is trying to do what H wants it to do*." * "Paul also talks about [benign AI](https://ai-alignment.com/benign-ai-e4eb6ec6d68e) which is about what an AI is optimized for (which is closely related to what it ‘values’). Inspired by this, I’ll define a complementary notion to Paul’s notion of alignment: *R is **benigned** with H if R is not actively trying to do something that H doesn’t want it to do*." Each of these deserves attention, but let’s zoom in on intent alignment, as it is the version of alignment that Paul uses in his map and that he seeks to address with his research. First, I want to point out that each of Krueger’s definitions pertains only to agents. However, I think we still want a definition of alignment that can apply to non-agential AI systems, since it is an open question whether the first AGI will be agentive. [Comprehensive AI Services (CAIS)](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf) explicitly pushes back against this notion, and [ARCHES](https://arxiv.org/abs/2006.04948) frames its discussion around AI “systems” to be “intentionally general and agent-agnostic.” (See also [this post](https://www.alignmentforum.org/posts/mJ5oNYnkYrd4sD5uE/clarifying-some-key-hypotheses-in-ai-alignment) on clarifying some key hypotheses for more on this point.) It is clear that we want to have some notion alignment that applies just as well to AI systems that are not agents or agent-like. In fact, [Paul's original definition](https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6) does not seem to explicitly rely on agency: > > When I say an AI A is *aligned with* an operator H, I mean: > > > *A is trying to do what H wants it to do.* > > > Another characterization of intent alignment [comes from Evan Hubinger](https://www.alignmentforum.org/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology): "An agent is [intent aligned](https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6) if its [behavioral objective](https://intelligence.org/learned-optimization/#glossary)[[4]](#fn-EQBA3mL9vnYeqr7sW-4) is aligned with humans” (presumably he means “aligned” in this same sense that its behavioral objective is incentivizing trying to do what we want). I like that this definition uses the more technically clear notion of a behavioral objective because it allows the concept to more precisely be placed in a framework with outer and inner alignment (more on this later), but I still wish it did not depend on a notion of agency like Krueger’s definition. Additionally, all of these definitions lack the formal rigor that we need if we want to be able to “use mathematics to formally verify if a proposed alignment mechanism would achieve alignment,” as noted by [this sequence](https://www.alignmentforum.org/s/sv2CwqTCso8wDdmmi) on the Alignment Forum. David Krueger makes a similar point in his post, writing, “Although it feels intuitive, I’m not satisfied with the crispness of this definition [of intent alignment], since we don’t have a good way of determining a black box system’s intentions. We can apply [the intentional stance](https://en.wikipedia.org/wiki/Intentional_stance), but that doesn’t provide a clear way of dealing with irrationality.” And Paul himself makes very similar points in his [original post](https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6): * "This definition of ‘alignment’ is extremely imprecise. I expect it to correspond to some more precise concept that cleaves reality at the joints. But that might not become clear, one way or the other, until we’ve made significant progress.” * “One reason the definition is imprecise is that it’s unclear how to apply the concepts of ‘intention,’ ‘incentive,’ or ‘motive’ to an AI system. One naive approach would be to equate the incentives of an ML system with the objective it was optimized for, but this seems to be a mistake. For example, humans are optimized for reproductive fitness, but it is wrong to say that a human is incentivized to maximize reproductive fitness.”[[5]](#fn-EQBA3mL9vnYeqr7sW-5) All of these considerations indicate that intent alignment is itself a concept in need of [deconfusion](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section2), perhaps to avoid a reliance on agency, to make the notion of “intent” for AI systems more rigorous, and/or for other reasons entirely. Leaving this need aside for the moment, there are a few characteristics of the “intent alignment” formulation of alignment that are worth mentioning. The most important point to emphasize is that an intent-aligned system is *trying* to do what its operator wants it to, and not necessarily *actually* doing what its operator wants it to do. This allows competence/capabilities to be factored out as a separate problem from (intent) alignment; an intent-aligned system might make mistakes (for example, by misunderstanding an instruction or by misunderstanding what its operator wants[[6]](#fn-EQBA3mL9vnYeqr7sW-6)), but as long as it is *trying* to do what its operator wants, the hope is that catastrophic outcomes can be avoided with a relatively limited amount of understanding/competence. However, if we instead define “alignment” only as a function of what the AI actually does, an aligned system would need to be both trying to do the right thing *and actually accomplishing this objective with competence*. As Paul says in his [overview presentation](https://ai-alignment.com/ai-alignment-landscape-d3773c37ae38), “in some sense, [intent alignment] might be the minimal thing you want out of your AI: at least it is trying.” This highlights why intent alignment might be an instrumentally more useful concept for working on making AI go well: while the (much) stronger condition of holistic alignment would almost definitionally guarantee that a holistically aligned system will not induce existential risks by its own behavior, it seems much harder to verify that a system and a human share the same terminal values than to verify that a system is trying to do what the human wants. It’s worth mentioning here the concept of [corrigibility](https://ai-alignment.com/corrigibility-3039e668638). The [page on Arbital](https://arbital.com/p/corrigibility/) provides a good definition: > > A ‘corrigible’ agent is one that [doesn't interfere](https://arbital.com/p/nonadversarial/) with what [we](https://arbital.com/p/value_alignment_programmer/) would intuitively see as attempts to ‘correct’ the agent, or ‘correct’ our mistakes in building it; and permits these ‘corrections’ despite the apparent [instrumentally convergent](https://arbital.com/p/instrumental_convergence/) reasoning saying otherwise. > > > This intuitively feels like a property we might like the AI systems we build to have as they get more powerful. In [his post](https://ai-alignment.com/corrigibility-3039e668638), Paul argues: > > 1. A [benign](https://ai-alignment.com/benign-ai-e4eb6ec6d68e) [act-based](https://ai-alignment.com/act-based-agents-8ec926c79e9c) agent will be robustly corrigibile if we want it to be. > 2. A sufficiently corrigible agent will tend to become more corrigible and benign over time. Corrigibility marks out a broad basin of attraction towards acceptable outcomes. > > > As a consequence, we shouldn’t think about alignment as a narrow target which we need to implement exactly and preserve precisely. We’re aiming for a broad basin, and trying to avoid problems that could kick [us] out of that basin. > > > While Paul links corrigibility to benignment explicitly here, how it relates to intent alignment is somewhat less clear to me. I think it’s clear that intent alignment (plus a certain amount of capability) entails corrigibility: if a system is trying to “do what we want,” and is at least capable enough to figure out that we want it to be corrigible, then it will do its best to be corrigible. I don’t think the opposite direction holds, however: I can imagine a system that doesn’t interfere with attempts to correct it and yet isn’t trying to “do what we want.” The point remains, though, that if we’re aiming for intent alignment, it seems that corrigibility will be a necessary (if not sufficient) property. Returning to the other definitions of alignment put forth by Krueger, one might wonder if there is any overlap between these different notions of alignment. Trivially, a holistically aligned AI would be parochially aligned for any task T, as well as sufficiently aligned. David also mentions that "[Paul seems to argue that corrigible agents are sufficiently aligned](https://ai-alignment.com/corrigibility-3039e668638)," which does seem to be a fair interpretation of the above “broad basin” argument. The one point I’ll raise, though, is that Paul specifically argues that “benign act-based agents will be robustly corrigible” and “a sufficiently corrigible agent will tend to become more corrigible and benign over time,” which seems to imply corrigibility can give you benignment. By David’s definition of benignment (“not actively trying to do something that H doesn’t want it to do”), this would represent sufficient alignment, but Paul [defined benign AI](https://ai-alignment.com/benign-ai-e4eb6ec6d68e) in terms of what it was optimized for. If such an optimization process were to produce a misaligned mesa-optimizer, it would clearly not be sufficiently aligned. Perhaps the more important point, however, is that it seems Paul would argue that intent alignment would in all likelihood represent sufficient alignment (others may disagree). I would also like to consider if and how the concept of single/single delegation corresponds to any of these specific types of alignment. As put forth in [ARCHES](https://arxiv.org/abs/2006.04948): > > **Single(-human)/single(-AI system) delegation** means delegation from a *single human stakeholder* to a *single AI system* (to pursue one or more objectives). > > > Firstly, it is probably important to note that “single/single delegation” refers to a task, and “alignment,” however it is defined, is a property that we want our AI systems to have. However, to *solve* single/single delegation (or to do single/single delegation *successfully*), we will require a solution to the “alignment problem,” broadly speaking. From here, it’s a question of defining what would count as a “solution” to single/single delegation (or what it would mean to do it “successfully”). If we can build intent aligned systems, will we have solved single/single delegation? If they are sufficiently capable, probably. The same goes for parochially aligned and holistically aligned systems: if they’re sufficiently capable, the users they’re aligned with can probably successfully delegate to them. It is unclear to me whether this holds for a sufficiently aligned system, however; knowing that “optimizing R’s terminal values would not induce a nontrivial Xrisk” doesn’t necessarily mean that R will be any good at doing the things H wants it to. As I mentioned before, I like the concept of “delegation” because it generalizes better to situations involving multiple stakeholders and/or AI systems. However, I believe it is still necessary to understand these various notions of “alignment,” because it remains a necessary property for successfully delegating in the single/single case and because understanding the differences between them is helpful for understanding others’ work and in communicating about the subject. ### Alignment tax and alignable algorithms One compelling concept Paul used that I had not heard before was the “alignment tax”: the cost incurred from insisting on (intent) alignment. This is intended to capture the tension between safety and competence. We can either pay the tax, e.g. by getting policymakers to care enough about the problem, negotiating agreements to coordinate to pay the tax, etc., or we can reduce the tax with technical safety and alignment research that produces aligned methods that are roughly competitive with unaligned methods. Two ways that research can reduce the alignment tax are 1) advancing alignable algorithms (perhaps algorithms that have beliefs and make decisions that are easily interpretable by humans) by making them competitive with unaligned methods and 2) making existing algorithms alignable: ![aligning-algorithms](https://jbkjr.com/images/mapping_territory/aligning_algos.png) ([source](https://drive.google.com/file/d/1QO11xtWSvtD8nS1SU4XukGF1WWG6O8-6/view)) Paul then considers different types of algorithms (or, potentially, different algorithmic building blocks in an intelligent system) we might try and align, like algorithms for planning, deduction, and learning. With planning, we might have an alignment failure if the standard by which an AI evaluates actions doesn’t correspond to what we want, or if the algorithm is implicitly using a decision theory that we don’t think is correct. The former sounds much like traditional problems in (mis)specifying reward or objective functions for learners. I think problems in decision theory are very interesting, but unfortunately I have not yet been able to learn as much about the subject as I’d like to. The main thrust of this research is to try and solve perceived problems with traditional decision theories (e.g. causal decision theory and evidential decision theory) in scenarios like [Newcomb's problem](https://www.lesswrong.com/tag/newcomb-s-problem). Two decision theory variants I’ve seen mentioned in this context are [functional decision theory](https://arxiv.org/abs/1710.05060) and [updateless decision theory](https://www.lesswrong.com/tag/updateless-decision-theory). (This type of research could also be considered deconfusion work.) As for aligning deduction algorithms, Paul only asks “is there some version of deduction that avoids alignment failures?” and mentions “maybe the alignment failures in deduction are a little more subtle” but doesn’t go into any more detail. After searching for posts on the Alignment Forum and LessWrong about how deduction could be malign failed to surface anything, I can’t help but wonder if he really might be referring to induction. For one, I’m having trouble imagining what it would mean for a deductive process to be malign. From my understanding, the axioms and rules of inference that define a formal logical system completely determine the set of theorems that can be validly derived from them, so if we were unhappy with the outputs of a deductive process that is validly applying its rules of inference, wouldn’t that mean that we really just have a problem with our own choice of axioms and/or inference rules? I can’t see where a notion of “alignment” would fit in here (but somebody please correct me if I’m wrong here… I would love to hear Paul’s thoughts about these potentially “subtle” misalignment issues in deduction). The other reason I’m suspicious Paul might’ve actually meant induction is because Paul himself wrote the original post arguing that the [universal prior in Solomonoff induction is malign](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/). I won’t discuss this concept too much here because it still confuses me somewhat (see [here](https://www.alignmentforum.org/posts/5bd75cc58225bf067037534c/some-problems-with-making-induction-benign-and-approaches-to-them), [here](https://www.lesswrong.com/posts/jP3vRbtvDtBtgvkeb/clarifying-consequentialists-in-the-solomonoff-prior), and [here](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign) for more discussion), but it certainly seems to fit the description of being a “subtle” failure mode. I’ll also mention MIRI’s paper on [logical induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/) (for dealing with reasoning under logical uncertainty) here, as it seems somewhat relevant to the idea of alignment as it corresponds to deduction and/or induction. > > (**ETA 12/30/20**: Rohin also had the following to say about deduction and alignment) > > > I'm fairly confident he does mean deduction. And yes, if we had a perfect and valid deductive process, then a problem with that would imply a problem with our choice of axioms and inference rules. But that's still a problem! > > > Like, with RL-based AGIs, if we had a perfect reward-maximizing policy, then a problem with that would imply a problem with our choice of reward function. Which is exactly the standard argument for AI risk. > > > There's a general argument for AI risk, which is that we don't know how to give an AI instructions that it actually understands and acts in accordance to -- we can't "[translate](https://www.alignmentforum.org/posts/42YykiTqtGMyJAjDM/alignment-as-translation)" from our language to the AI's language. If the AI takes high impact actions, but we haven't translated properly, then those large impacts may not be the ones we want, and could be existentially bad. This argument applies whether our AI gets its intelligence from induction or deduction. > > > Now an AI system that just takes mathematical axioms and finds theorems is probably not dangerous, but that's because such an AI system doesn't take high impact actions, not because the AI system is aligned with us. > > > ### Outer alignment and objective robustness/inner alignment For learning algorithms, Paul breaks the alignment problem into two parts: outer alignment and inner alignment. This was another place where I felt it was important to make a small change to Paul’s diagram, as a result of [some recent clarification](https://www.alignmentforum.org/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology#fn-wy6RgjzHCyHCXi7M3-1) on terminology relating to inner alignment by Evan Hubinger. It’s probably best to first sketch the concepts of objective robustness, mesa-optimization, and inner alignment for those who may not already be familiar with the concept. First, recall that the *base objective* for a learning algorithm is the objective we use to search through models in an optimization process and that the *behavioral objective* is what the model (produced by this process) itself appears to be optimizing for: the objective that would be recovered from perfect inverse reinforcement learning. If the behavioral objective is aligned with the base objective, we say that the model is *objective robust*; if there is a gap between the behavioral objective and the base objective, the model will continue to appear to pursue the behavioral objective, which could result in bad behavior off-distribution (even as measured by the base objective). As a concrete (if simplistic) example, imagine that a maze-running reinforcement learning agent is trained to reach the end of the maze with a base objective that optimizes for a reward which it receives upon completing a maze. Now, imagine that in every maze the agent was trained on, there was a red arrow marking the end of the maze, and that in every maze in the test set, this red arrow is at a random place within the maze (but not the end). Do we expect our agent will navigate to the end of the maze, or will it instead navigate to the red arrow? If the training process produces an agent that learned the behavioral objective “navigate to the red arrow,” because red arrows were a very reliable proxy for/predictor of reward during the training process, it will navigate to the red arrow, *even though this behavior is now rated poorly by the reward function and the base objective*. One general way we can imagine failing to achieve objective robustness is if our optimization process itself produces an optimizer (a *mesa-optimizer*)—in other words, when that which is optimiz*ed* (the model) becomes an optimiz*er*. In the above example, we might imagine that such a model, trained with something like SGD, could actually learn something like depth- or breadth-first search to optimize its search for paths to the red arrow (or the end of the maze). We say that the *mesa-objective* is the objective the mesa-optimizer is optimizing for. (In the case of a mesa-optimizer, its mesa-objective is definitionally its behavioral objective, but the concept of a behavioral objective remains applicable even when a learned model is not a mesa-optimizer.) We also say that a mesa-optimizer is *inner aligned* if its mesa-objective is aligned with the base objective. *Outer alignment*, correspondingly, is the problem of eliminating the gap between the base objective (what we optimize our models for) and the intended goal (what we actually want from our model). I write all this to emphasize one of the main points of Evan Hubinger’s aforementioned [clarification of terminology](https://www.alignmentforum.org/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology): that we need *outer alignment and objective robustness* to achieve intent alignment, and that inner alignment is a way of achieving objective robustness *only in the cases where we're dealing with a mesa-optimizer*. Note that Paul defines inner alignment in his talk as the problem of “mak[ing] sure that policy is robustly pursuing that objective”; I hope that this section makes clear that this is actually the problem of *objective robustness*. Even in the absence of mesa-optimization, we still have to ensure objective robustness to get intent alignment. This is why I chose to modify this part of Paul’s graph to match this nice tree from Evan’s post: ![evan-map](https://jbkjr.com/images/mapping_territory/evan_map.png) ([source](https://www.alignmentforum.org/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology))[[7]](#fn-EQBA3mL9vnYeqr7sW-7)[[8]](#fn-EQBA3mL9vnYeqr7sW-8) Paul mentions adversarial training, transparency, and verification as potential techniques that could help ensure objective robustness/inner alignment. These have more typically been studied in the context of robustness generally, but the hope here is that they can also be applied usefully in the context of objective robustness. Objective robustness and inner alignment are still pretty new areas of study, however, and how we might go about guaranteeing them is a very open question, especially considering nobody has yet been able to concretely produce/demonstrate a mesa-optimizer in the modern machine learning context. It might be argued that humanity can be taken as an existence proof of mesa-optimization, since, if we are optimizing for anything, it is certainly not what evolution optimized us for (reproductive fitness). But, of course, we’d like to be able to study the phenomenon in the context it was originally proposed (learning algorithms). For more details on inner alignment and mesa-optimization, see [Risks from Learned Optimization](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB), Evan's [clarifying blog post](https://www.alignmentforum.org/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology), and this [ELI12 post](https://www.alignmentforum.org/posts/AHhCrJ2KpTjsCSwbt/inner-alignment-explain-like-i-m-12-edition) on the topic. ### Approaches to outer alignment Paul subdivides work into outer alignment into two categories: cases where we want an AI system to learn (aligned) behavior from a teacher and cases where we want an AI system to go beyond the abilities of any teacher (but remain aligned). According to Paul, these cases roughly correspond to the easy and hard parts of outer alignment, respectively. In the short term, there are obviously many examples of tasks that humans already perform that we would like AIs to be able to perform more cheaply/quickly/efficiently (and, as such, would benefit from advances in “learn from teacher” techniques), but in the long term, we want AIs to be able to exceed human performance and continue to do well (and remain aligned) in situations that no human teacher understands. ### Learning from teacher If we have a teacher that understands the intended behavior and can demonstrate and/or evaluate it, we can 1) imitate behavior demonstrated by the teacher, 2) learn behavior the teacher thinks is good, given feedback, or 3) infer the values/preferences that the teacher seems to be satisfying (e.g. with inverse reinforcement learning)[[9]](#fn-EQBA3mL9vnYeqr7sW-9), and then optimize for these inferred values. Paul notes that a relative advantage of the latter two approaches is that they tend to be more sample-efficient, which becomes more relevant as acquiring data from the teacher becomes more expensive. I should also mention here that, as far as I’m aware, most “imitation learning” is really "[apprenticeship learning via inverse reinforcement learning](https://ai.stanford.edu/~ang/papers/icml04-apprentice.pdf)," where the goal of the teacher is inferred in order to be used as a reward signal for learning the desired behavior. So, I’m not exactly sure to what degree categories 1) and 3) are truly distinct, since it seems rare to do “true” imitation learning, where the behavior of the teacher is simply copied as closely as possible (even behaviors that might not contribute to accomplishing the intended task). For further reading on techniques that learn desired behavior from a teacher, see OpenAI’s “[Learning from Human Preferences](https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/)" and DeepMind's "[Scalable agent alignment via reward modeling](https://medium.com/@deepmindsafetyresearch/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84)" on the “learn from feedback” side of things. On the infer preferences/IRL side, start with Rohin Shah’s [sequence on value learning](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc) on the Alignment Forum and Dylan Hadfield-Mennell’s papers "[Cooperative Inverse Reinforcement Learning](https://arxiv.org/abs/1606.03137)" and "[Inverse Reward Design](https://arxiv.org/abs/1711.02827)." ### Going beyond teacher If we want our AI systems to exceed the performance of the teacher, making decisions that no human could or understanding things that no human can, alignment becomes more difficult. In the previous setting, the hope is that the AI system can learn aligned behavior from a teacher who understands the desired (aligned) behavior well enough to demonstrate or evaluate it, but here we lack this advantage. Three potential broad approaches Paul lists under this heading are 1) an algorithm that has learned from a teacher successfully extrapolates from this experience to perform at least as well as the teacher in new environments, 2) infer robust preferences, i.e. infer the teacher’s *actual* preferences or values (not just stated or acted-upon preferences), in order to optimize them (this approach also goes by the name of *ambitious value learning*), and 3) build a better teacher, so you can fall back to approaches from the “learn from teacher” setting, just with a more capable teacher. Of the three, the first seems the least hopeful; machine learning algorithms have historically been pretty notoriously bad at extrapolating to situations that are meaningfully different than those they encountered in the training environment. Certainly, the ML community will continue to search for methods that generalize increasingly well, and, in turn, progress here could make it easier for algorithms to learn aligned behavior and extrapolate to remain aligned in novel situations. However, this does not seem like a reasonable hope at this point for keeping algorithms aligned as they exceed human performance. The allure of the second approach is obvious: if we could infer, essentially, the “true human utility function,” we could then use it to train a reinforcement agent without fear of outer alignment failure/being [Goodharted](https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy) as a result of misspecification error. This approach is not without substantial difficulties, however. For one, in order to exceed human performance, we need to have a model of the mistakes that we make, and this error model [cannot be inferred alongside the utility function without additional assumptions](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/ANupXf8XfZo2EJxGv). We might try and specify a specific error model ourselves, but this seems as prone to misspecification as the original utility function itself. For more information on inferring robust preferences/ambitious value learning, see the “Ambitious Value Learning” section of the [value learning sequence](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc). Stuart Armstrong also seems to have a particular focus in this area, e.g. [here](https://arxiv.org/pdf/1712.05812.pdf) and [here](https://www.alignmentforum.org/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into). The two most common “build a better teacher” approaches are amplification and debate. Amplification is what Paul spends most of his time on and the approach of which he’s been the biggest proponent. The crux of the idea is that a good starting point for a smarter-than-human teacher is a group of humans. We assume that even if a human cannot answer a question, they can decompose the question into sub-questions such that knowing the answers to the sub-questions would enable them to construct the answer to the answer to the original question. The hope, then is to build increasingly capable AI systems by training a question-answering AI to imitate the output of a group of humans answering questions in this decompositional fashion, then recursively building stronger AIs using a group of AIs from the last iteration answering decomposed questions as an overseer: ![amplification](https://jbkjr.com/images/mapping_territory/amplification.png) ([source](https://drive.google.com/file/d/1QO11xtWSvtD8nS1SU4XukGF1WWG6O8-6/view)) The exponential tree that this recursive process tries to approximate in the limit is called [HCH](https://www.alignmentforum.org/tag/humans-consulting-hch) (for Humans Consulting HCH). There is much more detail and many more important considerations in this scheme than I can address here, e.g. the distillation step, how this scheme hopes to maintain intent alignment throughout the recursive process, and (importantly) if this exponential tree can answer any question in the limit.[[10]](#fn-EQBA3mL9vnYeqr7sW-10) There are also two distinct types of amplification: imitative amplification, where the AI systems are trained to imitate the outputs of the last tree in the amplification step, and approval-based amplification, where the AI systems are trained to produce outputs or perform actions of which this tree would approve. For more on amplification, see the [iterated amplification sequence](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd) on the Alignment Forum, the [original paper](https://openai.com/blog/amplifying-ai-training/) and Paul Christiano’s writings and papers, more generally. See also [recursive reward modeling](https://medium.com/@deepmindsafetyresearch/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84), another “build a better teacher” approach which “can be thought of as an instance of iterated amplification.” For basic research into this method of solving complex problems by recursively solving subproblems, see [Ought](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd)'s work on [factored cognition](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd/p/DFkGStzvj3jgXibFG). [Debate](https://openai.com/blog/debate/) (more recent progress update [here](https://www.alignmentforum.org/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1)) builds a better teacher by training agents to debate a topic against each other in an adversarial game and have a human decide the winner based on the sequence of arguments made by the agents, so the agents incrementally build each other into both better teachers and more competent students (in practice, it would likely be just one agent trained via self-play). For this to work, it needs to be true that 1) judging debates is easier than debating (so humans can judge the quality of arguments that they could not have come up with themselves) and 2) lying is harder than arguing against a lie, given this human judge (so the adversarial game will reliably reward truth-telling). There is a close relationship between amplification and debate. Section 7 in the [original debate paper](https://arxiv.org/abs/1805.00899) explores this relationship in detail and is worth reading, but the basic connection can be highlighted by seeing the amplification setup in terms of three modules: an answerer, a questioner, and a checker. The answerer is the ML system being iteratively trained to answer questions by imitating the output of the overseer answering questions recursively, the questioner decomposes the question into the relevant sub-questions, and the checker takes the sub-questions and their answers and generates an answer to the original question. (Christiano’s setup treats the questioner and the checker as the same module—in the base case, a human.) Debate differs from this setup by training the answerer and the questioner against each other adversarially while keeping the human judge/checker. This gives debate a potential advantage if generating sub-questions is superhumanly difficult (e.g. if the branching factor of HCH is too high). Again, section 7 of the [original paper](https://arxiv.org/abs/1805.00899) covers more of the differences and similarities between these two approaches and should be read in full. Relatedly, Evan Hubinger has written a [post on synthesizing amplification and debate](https://www.alignmentforum.org/posts/dJSD5RK6Qoidb3QY5/synthesizing-amplification-and-debate) that might be of interest. One final resource I want to mention while discussing techniques for going beyond a teacher is Evan Hubinger’s [overview of 11 proposals for safe advanced AI](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai), which includes many of the basic techniques already mentioned here but goes into more depth discussing the relative advantages and disadvantages of each approach in the contexts of outer and inner alignment. In practice, an outer alignment approach (e.g. imitative or approval-based amplification) is often paired with some technique aimed at preventing inner alignment failures (e.g. adversarial training, transparency, etc.). ### Conclusion That’s about it! We’ve covered a lot of ground here. This post ended up being much longer than I anticipated, but I wanted to give a cursory overview of as many of these ideas as possible and elaborate a little on how they interrelate before providing pointers to further material for the interested reader. I hope this post has been helpful in giving you a lay of the land in ongoing work in AI existential safety and alignment and (more importantly) in helping you build or refine your own mental map of the field (or simply check it, if you’re one of the many people who has a better map than mine!). Building this mental map has already been helpful to me as I assimilate new information and research and digest discussions between others in the field. It’s also been helpful as I start thinking about the kinds of questions I’d like to address with my own research. --- 1. Rohin also did a [two](https://futureoflife.org/2019/04/11/an-overview-of-technical-ai-alignment-with-rohin-shah-part-1/) [part](https://futureoflife.org/2019/04/25/an-overview-of-technical-ai-alignment-with-rohin-shah-part-2/) podcast with the Future of Life Institute discussing the contents of his presentation in more depth, both of which are worth listening to. [↩︎](#fnref-EQBA3mL9vnYeqr7sW-1) 2. See [this post](https://www.alignmentforum.org/posts/oiuZjPfknKsSc5waC/commentary-on-agi-safety-from-first-principles) for specific commentary on this sequence from others in the field. [↩︎](#fnref-EQBA3mL9vnYeqr7sW-2) 3. Sometimes, people use “alignment” to refer to the overall project of making AI go well, but I think this is misguided for reasons I hope are made clear by this post. From what I’ve seen, I believe my position is shared by most in the community, but please feel free to disagree with me on this so I can adjust my beliefs if needed. [↩︎](#fnref-EQBA3mL9vnYeqr7sW-3) 4. "**Behavioral objective**: The *behavioral objective* is what an optimizer appears to be optimizing for. Formally, the behavioral objective is the objective recovered from perfect inverse reinforcement learning.” [↩︎](#fnref-EQBA3mL9vnYeqr7sW-4) 5. Here, Paul seems to have touched upon the concept of mesa-optimization before it was so [defined](https://arxiv.org/abs/1906.01820). More on this topic to follow. [↩︎](#fnref-EQBA3mL9vnYeqr7sW-5) 6. That an intent-aligned AI can be mistaken about what we want is a consequence of the definition being intended *de dicto* rather than *de re*; as [Paul writes](https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6), “an aligned A is trying to ‘do what H wants it to do’” (not trying to do “that which H actually wants it to do”). [↩︎](#fnref-EQBA3mL9vnYeqr7sW-6) 7. Arrows are implications: “for any problem, if its direct subproblems are solved, then it should be solved as well (though not necessarily vice versa).” [↩︎](#fnref-EQBA3mL9vnYeqr7sW-7) 8. Note that Evan also has capability robustness as a necessary component, along with intent alignment, for achieving “alignment.” This fits well with my tree, where we need both alignment (which, in the context of both my and Paul’s trees, is intent alignment) and capability robustness to make AI go well; the reasoning is much the same even if the factorization is slightly different. [↩︎](#fnref-EQBA3mL9vnYeqr7sW-8) 9. Paul comments that this type of approach involves some assumption that relates the teacher’s behavior to their preferences (e.g. an approximate optimality assumption: the teacher acts to satisfy their preferences in an approximately optimal fashion). [↩︎](#fnref-EQBA3mL9vnYeqr7sW-9) 10. I want to mention here that Eliezer Yudkowsky wrote a [post challenging Paul's amplification proposal](https://intelligence.org/2018/05/19/challenges-to-christianos-capability-amplification-proposal/) (which includes responses from Paul), in case the reader is interested in exploring pushback against this scheme. [↩︎](#fnref-EQBA3mL9vnYeqr7sW-10)
4da61af2-85ef-4601-a3b4-9e4b7bc42cca
StampyAI/alignment-research-dataset/blogs
Blogs
classifying computational frameworks classifying computational frameworks ------------------------------------ here is a classification of four large computational frameworks, according to four criteria: * **D**: Distributed; whether the framework allows for computation to happen in arbitrarily many places at once, or whether it has to follow a single thread of computation * **M**: Metaprogrammable; whether code in the framework is able to, at runtime, create new logic of the same class as itself; see also [degrees of runtime metaprogrammability](degrees-of-runtime-metaprogrammability.html) * **S**: Simple Steps; whether the process of going through a single step of computation (whether the system is deterministic or not) is simple, or requires a large amount of work * **A**: Arbitrary Structure; whether the framework is able to created arbitrarily nested pieces of information, or whether it's restricted to a constant amount of information density over a geometric area | framework | D | M | S | A | | --- | --- | --- | --- | --- | | [lisp](https://en.wikipedia.org/wiki/Lisp_%28programming_language%29)/[λ-calculus](https://en.wikipedia.org/wiki/Lambda_calculus)/[SKI calculus](https://en.wikipedia.org/wiki/SKI_combinator_calculus) | N | Y | Y | Y | | [turing machines](https://en.wikipedia.org/wiki/Turing_machine) | N | N | Y | N | | [graph rewriting](https://en.wikipedia.org/wiki/Hypergraph_grammar), like [wolfram's](https://www.wolframphysics.org/technical-introduction/basic-form-of-models/first-example-of-a-rule/) | Y | N | N | Y | | [cellular automata](https://en.wikipedia.org/wiki/Cellular_automaton) | Y | N | Y | N |
57e6dc46-d512-4083-b99e-c1f52d47cb87
trentmkelly/LessWrong-43k
LessWrong
Less Wrong’s political bias (Disclaimer: This post refers to a certain political party as being somewhat crazy, which got some people upset, so sorry about that. That is not what this post is *about*, however. The article is instead about Less Wrong's social norms against pointing certain things out. I have edited it a bit to try and make it less provocative.)   A well-known post around these parts is Yudkowski’s “politics is the mind killer”. This article proffers an important point: People tend to go funny in the head when discussing politics, as politics is largely about signalling tribal affiliation. The conclusion drawn from this by the Less Wrong crowd seems simple: Don’t discuss political issues, or at least keep it as fair and balanced as possible when you do. However, I feel that there is a very real downside to treating political issues in this way, which I shall try to explain here. Since this post is (indirectly) about politics, I will try to bring this as gently as possible so as to avoid mind-kill. As a result this post is a bit lengthier than I would like it to be, so I apologize for that in advance. I find that a good way to examine the value of a policy is to ask in which of all possible worlds this policy would work, and in which worlds it would not. So let’s start by imagining a perfectly convenient world: In a universe whose politics are entirely reasonable and fair, people start political parties to represent certain interests and preferences. For example, you might have the kitten party for people who like kittens, and the puppy party for people who favour puppies. In this world Less Wrong’s unofficial policy is entirely reasonable: There is no sense in discussing politics, since politics is only about personal preferences, and any discussion of this can only lead to a “Jay kittens, boo dogs!” emotivism contest. At best you can do a poll now and again to see what people currently favour. Now let’s imagine a less reasonable world, where things don’t have to happen for
4189016c-d35e-4aa4-a5a1-70b7a7787f4b
trentmkelly/LessWrong-43k
LessWrong
My hopes for YouCongress.com Background For background on YouCongress.com see this post by Hector Perez Arenas. I love this general concept, and have a lot of ideas for how this implementation could be expanded. I'm hoping that writing out some of my ideas might inspire someone to jump in an contribute code. I would myself if I didn't feel full to the brim on trying to work on AI alignment / control / safety ideas. A brief summary of the current concept is that it is a political polling platform (which could in theory be used to guide the decision making of political representatives). The platform allows users to create polls. Polls are answered by users and by 'digital twins' of famous people. Currently, the 'digital twins' seem to do an ok-ish job at making short statements and placing votes on the polls, based on the publicly expressed opinions of the relevant source person. It's also possible to initiate discussions with these 'digital twins' to explore your agreements and differences in more depth. Also, you can do topic-specific vote deference to other users or digital twins, which helps paint a clearer picture of topic-specific preferences for a wide segment of users. ---------------------------------------- Ideas for taking this further Added realism for the digital twins 1. Adding to the training data I imagine something like a set of 'trusted editors' who are allowed to upload additional existing writings of the target famous person being twinned. Also, any dialogues that can be found, and expressed as  "nontarget person: blah blah target person: response to blah blah" where the model would use the off-target writing as context, and be trained to predict just the target person's response.  2. Human Feedback The 'trusted editors' could also look at responses of the current digital twin fine-tune to various prompts. Picking the more accurately representative of two possible responses would provide feedback that could be used to further fine-tune the model (probably using a Lo
68f79e6d-7d15-4cc6-8dad-1b3cc0686180
StampyAI/alignment-research-dataset/arxiv
Arxiv
Guidelines for Artificial Intelligence Containment 1 Guidelines for Artificial Intelligence Containment James Babcock Cornell University jab299@cornell.edu Janos Kramar University of Montreal jkramar@gmail.com Roman V. Yampolskiy University of Louisville roman.yampolskiy@louisville.edu Abstract With almost daily improvements in capabilities of artificial intelligence it is more important than ever to develop safety software for use by the AI research community. Building on our previous work on AI Containment Problem we propose a number of guidelines which should help AI safety researchers to develop reliable sandboxing software for intelligent progr ams of all levels. Such safety container software will make it possible to study and analyze intelligent artificial agent while maintaining certain level of safety against information leakage, social engineering attacks and cyberattacks from within the con tainer. Keywords: AI Boxing, AI Containment, AI Confinement!, AI Guidelines , AI Safety , Airgapping. 1. Introduction The past few years have seen a remarkable amount of attention on the long -term future of AI. Icons of science and technology such as Stephen Hawking1, Elon Musk2, and Bill Gates3 have expressed concern that superintelligent AI may wipe out humanity in the long run. Stuart Russell, coauthor of the most -cited textbook of AI [35], recently began prolifically advocating4 for the field of AI to take this possibility seriously. AI conferences now frequently have panels and workshops on the topic5,6. There has been an outpouring of support7 from many leading AI researchers for an open letter calling for greatly increased research dedicated to ensuring that increasingly capable AI remains “robust and beneficial”, and gradually a field of “AI safety” is coming into being [64, 33, 55] . Why all this attention? Since the dawn of modern computing, the possibility of artificial intelligence has prompted leading thinkers in the field to speculate [15, 43, 46] about whether AI would end up overtaking and replacing humanity. However, for decades, while computing quickly found many important applications, artificial intelligence remained a niche field, with modest successes, making such speculation seem ir relevant. But fast -forwarding to the present, machine learning has seen grand successes and very substantial R&D investments, and it is rapidly improving in major domains, such as natural language processing and image recognition, largely via advances in d eep learning [18]. Artificial General Intelligence (AGI), with the ability to perform at a human -comparable level at most cognitive tasks, is likely to be created in the coming century; most predictions, by both experts and non -experts, range from 15 -25 year s [7]. As the state of research in AI capabilities has steadily advanced, theories about the behavior of superintelligent AGIs have remained largely stagnant, though nearby scientific fields examining optimal agents (game theory and decision theory), idealized reasoning (Bayesian statistics and the formal theory of causality), a nd human cognition (cognitive neuroscience) have come into being and given us some clues. A prominent theory is that an advanced AI with almost any goals will generically develop certain subgoals called “basic AI drives” [31], such as self -preservation, self - ! AI Boxing, AI Containment, and AI Confinement are assumed to mean the same thing. 1 http://www.bbc.com/news/technology -30290540 2 http://video.cnbc.com/gallery/?video=3000286438 3 https://www.reddit.com/r/IAmA/c omments/2tzjp7/hi_reddit_im_bill_gates_and_im_back_for_my_third/ 4 http://www.cse.unsw.edu.au/~tw/aiethics/AI_Ethics/Introduction.html 5 https://people.eecs.berkeley.edu/~russell/research/future/aamas14 -workshop.html 6 http://ijcai13.org/program/special_events_the_impact_of_ai_on_society 7 http://futureoflife.org/ai -open -letter 2 improvement, and resource acquisition. Pursuit of these goals could motivate the AI to, for example, make copies of itself on internet -connected computers, build new hardware or software for itself, and evade the attention of human observers until it is confident that it’s beyond their control. An influential book [10] thoroughly reviewing and building on existi ng work on superintelligent AI found no compelling counterarguments or easy workarounds; to the best of our knowledge, safe AGI will require significant theoretical advances in AI safety, and very careful implementation. This implies a risk that the first human -level AGI will be unsafe, at least until research and testing are done on it. 1.1 Containment and the AI Safety Problem Due to the “basic AI drives” mentioned above, an unsafe AGI will likely be motivated to falsify tests or monitoring mechanisms to mani pulate the researchers into thinking it’s safe, to gain access to more resources, to embed dormant copies of itself in device firmwares, and to hack computers on the internet. In order to reliably test and safely interact with an AGI with these motivations and capabilities, there must be barriers preventing it from performing these actions. These barriers are what we refer to as containment. Some have argued that controlling AGI - especially if superintelligent - is impossible or infeasible. For example, Ra y Kurzweil writes that “intelligence is inherently impossible to control” [17]. Eliezer Yudkowsky’s AI box experiment8 found that human factors make containing an AI difficult. Ver nor Vinge argued that “confinement is intrinsically impractical” in the long run [45]. We agree that containment is not a long -term solution for AI safety; rather it’s a tool to enable testing and development of AGIs with other, more robust safety properties such as value learning [36, 67] and corrigibility [37]. Value learning is the strategy of programming an AGI to learn what humans value , and further those values. If this is done correctly, such an AGI could be very good for humanity, helping us to flourish. Corrigibility is the strategy of programming an AGI to help (or at least, to not resist) its creators in finding and fixing its own bugs. An AGI which had both of these properties would not need to be contained, but experience with software suggests that developers are very unlikely to get it right on the first try. Other safety strategies that have been proposed depend on containment more directly. For example, in his book Superintelligence [10], Nick Bostrom suggests using tripwires to monitor the AGI and shut it down if it appears to be behaving dangerously. However, the AI drives thesis [31] suggests that an AGI might try to bypass tripwires or remove them from itself, which would render them ineffective in an AGI that had full control over its hardware. On the other hand, an AI containment system with internal security boundaries could both keep an AI from disabling its tripwires, and keep it from learning the details of what tripwires there were. Regarding the tractability of containment, encouraging progress has been made o n the human factors front; Yampolskiy has proposed ways of limiting an AGI’s communication channels so that even a superintelligent AI could not trick an operator into doing something dangerous [54, 6] . As for preventing the AGI from tampering with data and hacking its way to the internet, essentially no work has been done on this problem, but we have reason to think that bringing the tools of cybersecurity to bear will yield results that will substantially mitigate the risk of escapes. 2. Overview of the Proposed Guidelines The p roposed guidelines are based on our analysis of AI containment, incorporating all aspects of the problem, a concrete technical perspective and attention to specific technologies that can be used*. Background research for developing this framework turned up surprising interaction s between seemingly disparate aspects of the problem. For example, existing research into the human factors of AI containment suggests that tripwire/intrusion detection systems 8 http://yudkowsky.net/sin gularity/aibox * We have previously published a less detailed description of those guidelines as: [8] J. Babcock, J. Kramar and R. Yampol skiy, The AGI Containment Problem , The Ninth Conference on Artificial General Intelligence (AGI2015) , NYC, USA, July 16 -19, 2016. 3 might need to censor details of what happened. Understanding the threat model r equires incorporating widely varied predictions, and these predictions lead to very different ways of approaching the security/usability tradeoff [4]. We have identified seven major subp roblems to the AI containment. Rather than focusing our attention narrowly on one subproblem, we have made it our priority not to have any blind spots. These subproblems are threat modeling, navigating the security/usability tradeoff, concrete mechanisms, sensitive information management, human factors, tripwires and graceful degradation. This last issue - the need for graceful degradation - has led us to envision containment systems with multiple dissimilar containment mechanisms combined with intrusion de tection, providing defense in depth and resilience in a system built from components which may individually contain bugs. A prototype AI container A prototype AI container cab be built by combining multiple pre -existing containment tools, in such a way tha t a security vulnerability in any one of them would not lead to a complete compromise of the system. Developers should provide source code, scripts, and documentation sufficient for others to easily use it. Additionally, it is important to test how the con tainer interacts with the most common machine learning tools, including, in particular, those that involve distributed computation and GPGPU computing, and should have working support for at least some GPGPU and distributed computing mechanisms. Analysis of the AI containment threat model A preliminary threat model is included in this paper . To produce a more comprehensive threat model, would require a literature review of AI safety and forecasting results that may affect the threat model. It is necessary to attempt to determine upper and lower bounds on an AGI’s ability to find specific types of security vulnerabilities at different levels of intelligence and information constraint. Analysis of the security/usability tradeoff Our guidelines define what con stitutes light, medium, and heavy containment . Future efforts will survey available options within each category, and analyze risk at each level of containment in terms of the threat model to produce necessary guidance for AGI developers’ risk analysis, in dicating which level of containment is likely to be necessary. The details of how options are classified will depend mainly on how the options interact with the types of computations and development tools used, for which present -day machine learning practi ces can be used as a proxy for the sorts of tools likely to be used by AGI developers in the future. Limits of provably -secure communication Proposed guidelines propos e provably -secure communication protocols, suitable for use in a “heavy containment” context where security means preventing an AGI from communicating messages complex enough to be used for social engineering attacks . It will be necessary to explore the inf ormation theoretic limits of how much information can be communicated while maintaining maximum security. New Categories of Sensitive Information Created by AGI Development AGI development creates new categories of sensitive information. Some information is sensitive in the sense that it should be kept away from the AGI; some information is sensitive in that an AGI could use it as a tool for social engineering attacks . Our guidelines suggest identifying in greater detail what types of information are sensi tive, and provide guidance on how to manage it. Analysis of Human Factors It is necessary to analyze the channels in which an AGI might be able to place messages for untrained personnel, and identify actions which create unintuitive risks that people near an AGI should be warned about. AI safety researchers need to explore methods of preventing disallowed AGI communication with untrained personnel, such as understanding all possible side channels, by adding noise to remove steganographically hidden messages from intended outputs, and by controlling the ways output is viewed. Analysis of AI Container Tripwires AI safety researchers will need to explore the space of feasible tripwires, creating a guide for implementers describing what tripwires would be desira ble and how they would be implemented . 3. Significance of AI Containment This research sits at the intersection between cybersecurity, machine learning, and the emerging field of AI safety. AI forecasting determines our threat model; we need predictions about how intelligent future AIs are likely to be and the software, hardware, and organizational contexts in which it might be created. These are not straightforward questions which can be simply looked up; predictions in the literature feature a wide variety o f possible AI scenarios, motivations, and types, leading to an equal variety of overlapping threat models. Practices within the field of machine 4 learning determine the shape of the usability -security tradeoff. Existing security tools and techniques can hel p, but only with reanalysis in light of this new threat model. Our guidelines bring AI containment from the abstract theoretical domain, where it previously resided, into the cybersecurity world with a threat model, software and procedures that are concret e enough to check for vulnerabilities. 3.1 Why Present Techniques Aren’t Adequate Futurists have speculated on such extreme strategies as choosing computations which make processors trigger unknown corner cases in physics, placing messages in log files whi ch hack the brains of humans who read them, or reprogramming non -radio hardware components to act as radios by creating and detecting RF interference [16]. These attacks may seem unlikely, but a more pressing concern comes from simply searching the Common Vulnerabilities and Exposures (CVE) database for entries related to common virtualization and sandboxing technologies. We found that every major virtualization and sandboxing solution we looked at had critical vulnerabilities discovered within the past two years, with vulnerabilities continuing to be fo und at a rate which suggests that there are many more vulnerabilities left to be discovered. While it is probably feasible to find and fix all of the vulnerabilities that are easy enough for a human to find, there is no reason to think that any of these so lutions could stand up to a smarter -than- human intelligence nor that they will ever be strong enough to stand up to a smarter -than-human intelligence. One might ask: why not just use airgapping? We are mainly concerned with scenarios involving AGIs being t ested under laboratory conditions, where physical disconnection from the internet is possible. Unfortunately, this does nothing to protect the integrity of tests being performed, to protect monitoring systems, or to protect the hardware being used from mod ification and persistence, which are all very necessary. 3.2 Why Now Given that AGI has not been created yet, one might ask: why now? There are several compelling reasons. Human - level AGI may be invented sooner than expected. Technology forecasting is noto riously difficult, and has tripped up serious scientists time and again; for example the nuclear physicist Ernest Rutherford said in 1933 that nuclear energy was “moonshine”, and another accomplished physicist Lord Kelvin in 1896 said “ I have not the small est molecule of faith in aerial navigation other than ballooning” . Another reason to do this work now is that developing secure systems requires time, careful review by many independent experts, and iteration. Unfortunately, many of the predicted scenarios do not allow second chances; if an AI that’s unfriendly to human values or that’s dangerously buggy escapes into the world, it could do immense damage. If we wait until someone has created a human -level AGI, or the creation of human -level AGI appears immi nent, it will be too late. Some of this research has an even longer lead time; if our research finds security improvements that depend on changing software in ways more complicated than mere bugfixes, then finding out now will allow time to influence the n ext generation of software. A working AI contain er will improve the security of the internet and the world in any future scenarios where artificial general intelligence is developed, enabling prospective AGI designs which might not share human values to be safely studied and assessed. It will also bridge the cybersecurity, AI safety and machine learning communities. The direct application of our research is very important, but more speculatively, approaching security from this unconventional angle may yield insights into old problems, particularly those related to dealing with risk from newly discovered exploits and highly reliable systems. 4. Previous Work To succeed in constructing an AI container it is important to merge techniques from a number of relevant fields such as: computer security [52], behavioral biometrics [61, 51] , cryptography [22, 63] , steganography [1], stylometry [3, 2], computer forensics [21, 9] , utilization of artificial intelligence in security [30, 29] , AI Safety [23, 28] , including security work with current AIs [48] and theoretical work on Artificial General Intelligence (AGI) safety and security [49]. 5 Other relevant work is in the field of Artimetrics, which deals with ways to identify, classify, and authenticate robots, intelligent software, and virtual reality agents for security purposes [60, 47] . Other tools may inc lude linguistic profiling of Chatbots [3, 2] , the visual identification of virtual reality avatars [27, 62, 24] , and the behavior -based authentication of bots [62]. More recent areas of importance are AI Safety Engineering [50], AGI Safety [41], and utility function security [58], AI-Completeness [56], the singularity paradox [59], the space of minds [57], recursive self-improvement [53], and the Artificial Intelligence Containment Problem (AICP) [54]. In particular, Yampolskiy’s seminal work on the AICP not only defines the problem, from the computer scienc e point of view, but also proposes a number of possible solutions, including a steganographically -safe communication protocol, a classification of levels of communication with respect to the amount of shared information, and the concept of safe questions f or interacting with AGIs. In the same work Yampolskiy also analyzes a number of potential attack vectors against the confinement cyberinfrastructure, including social engineering attacks. 5. Evaluation and Success Criteria Soundness Schneier’s Law says that “anyone can invent a security system that he himself cannot break.” For a containment system that’s meant to stand up to smarter -than-human adversaries, that’s not good enough. In the long run, good security can only be guaranteed by time and careful revie w by many independent experts [34]; allowing time for review and iteration is the main reason to do this research now, when AI may still be decades away, rather than waiting until it’s closer. It is commonly said that perfect security is impossible, but when the opponent might wield superhuman abilities, it’s important to get very close. It is important to design a containment system that’s sound in the sense that we’ll be confident in its security eventually , after sufficient independent review. Needing a few simple bugfixes is okay; presenting something that won’t ever be secure even after said bugfixes is not. Breadth There is a lot of uncertainty about how AGI may first arise; eg whether it will be developed inside a company or in acade mia, what size the team will be, what sorts of algorithms and technologies it will use. As much as possible, it is important to cover all reasonable scenarios, and find all the likely attack surfaces and mitigations. Relevance and Usability The next major risk is that th ese guidelines will produce something that’s secure but unusable in practice, or worse, something which is only usable up to a point, after which an AGI researcher would break something security -critical to work around a limitation. There ar e two main ways this could happen in practice. The first is incompatibility with the tools and libraries that AI researchers are using. The second is overly obnoxious configuration. In order to learn and pragmatically respond to the priorities and practical constraints of cutting -edge machine learning researchers, AI container developers need to establish collaborations with AI researchers. Such collaborations will be needed to ensure that the containment measures do not unnecessarily interfere with existing tools and research practices, by finding out what they’re using, testing it for compatibility, and distributing container prototypes for external evaluation. When soliciting feedba ck, it is important to engage with all the major subcommunities within AI, such as the “Good Old Fashioned AI” community and the machine learning community. 6. Threat Model 6.1 AI Motivation The “AI drives” thesis gives us a starting point for reasoning about what an AGI might want; according to the thesis, while researchers may give the AGI any goal explicitly, it will likely develop subgoals like self -defense, self - improvement, self -replicati on, and resource acquisition. This allows us to recognize what AGI behaviors are likely: those that further these subgoals. Details of how this scenario would play out is explored in greater depth in Nick Bostrom’s book Superintelligence [10]. On the other hand, if the thesis is false, it may still be helpful as a sort of upper bound on the difficulty of the problem; after all, what could be more dangerous t han a superintelligence bent on world domination? 6 6.2 AI Capabilities Predictions about AGI development trajectories tend to fall into two main categories, known as hard takeoff and soft takeoff [10]. In a soft takeoff , the capabilities of AGI increase gradually, with much time and effort; and it will be feasible to learn about its capabilities and behaviors through experimentation, though as Yudkowsky [66] points out, the change in capability as the AGI surpasses us may enable new plans and behaviors that are undesirable. In a so -called hard takeoff , after some critical point is reached (which may be preceded by steady improvement), AGI capability increases very rapidly. One proposed mechanism for this is that AGIs that are as good at AGI research as their researchers can undergo a process of recursi ve self -improvement [15, 39, 20, 32] , where each improvement in an AGI’s capability enables it to find more improvements to make. Whether a hard or soft takeoff will happen is a subject of considerable disagreement among experts9 [68], but the possibility of recursive self -improvement makes an AGI’s ability to self -modify particularly important to monitor and control. As far as the ceiling of AI capability is concerned, it’s likely to be significantly above human level in genera l [19, 40] , though it’s not clear what hardware is requ ired for this. Of course, this raises the question: what do we mean by a smart AGI, anyway? As commentators often point out, a superhuman chess player like Deep Blue is no match for us at any other tasks, even simple ones like tic -tac-toe. For present purp oses, we will answer this question using a list of “cognitive superpowers” humans have, proposed by Bostrom [10]: ● “Intelligence amplification (AI programming, cognitive enhancement research, social epistemology development, etc.) ● Strategizing (Strategic planning, forecasting, prioritizing, and analysis for optimizing chances of achieving distant goals) ● Social manipulation (Social and psychologi cal modeling, manipulation, rhetoric persuasion) ● Hacking (Finding and exploiting security flaws in computer systems) ● Technology research (Design and modeling of advanced technologies, such as biotechnology or nanotechnology) ● Economic productivity (Various skills enabling economically productive intellectual work)” Using this framework we can say that: ● An AGI with strong strategizing and hacking ability might plan an escape using security vulnerabilities; ● An AGI with strong technology ability might additio nally be able to find surprising side -channel attacks ● An AGI with strong social manipulation ability could use social engineering techniques to enable an escape; ● An AGI with other superpowers may additionally be able to make compelling offers or threats, which could assist with the manipulation. Countermeasures for these attack vectors are discussed in the remaining sections. 7. The Security -Usability Tradeoff Existing work on the AGI containment problem [54] has generally assumed that the AGI in question is superintelligent, and therefore warrants a great deal of concern : any exploit that seems theoretically possible will be within the AGI’s reach, and therefore the only acceptable containment system is one that we have strong theoretical reason to believe is immune to circumvention. This is a high bar for a containment s ystem to clear; we’ll call containment that we can be confident meets this standard heavy containment . Some examples of containment mechanisms that belong in this category: ● Limiting output to a small number of yes/no questions whose answers are securely sh uffled before being output [54], to avoid any chance of so cial engineering attacks 9 http:// reducing -suffering.org/predictions -agi-takeoff -speed -vs-years -worked -commercial -software 7 ● Running the AGI as a homomorphically encrypted computation10 ● Putting the AGI’s datacenter in a very inaccessible place But as we noted in the Threat Model section, researchers developing AGI are unlikely to be willing to conduct t heir research with such substantial handicaps. Researchers who believe their project is aimed at a breakthrough short of human -comparable AGI are particularly likely to balk at costs during use and during setup, and will only be willing to deploy containme nt mechanisms that are made simple and convenient. We’ll call these light containment . It’s unlikely that light containment measures will suffice for a superintelligent AGI, but there may also be containment escape risks from less powerful AGIs, and effect ive light containment would help mitigate those risks. Light containment is particularly relevant to groups that don’t yet have AGIs at all, and only need to ensure that their AGI doesn’t escape before they notice they’ve made a breakthrough. Technologies like hardware virtualization, network firewalls, and software sandboxing belong in this category. Researchers who understand that they’re dealing with potentially human -level AGI will probably be able to deploy containment measures that add modest costs an d overheads to their development process, but in a competitive scenario as in [5] heavy containment measures will likely be out of bounds; they would cause anyone applying them to fall behind. Measures that keep costs within the same order of magnitude, however, will have a good chance of being taken by responsible researchers. We’ll call measures in this category medium containment . Possible measures in this category include: ● Preventing the AI from accumulating information about its researchers between runs ● Adding real -time external redundant supervision, which is tested using appropriately varied drills ● Instruction -level virtualization ● Specialized hardware If reliable containment mechanisms are developed that don’t impose significant computational overhead but are incompatible with the development environments that the researchers would otherwise find most productive, then those will be best thought of as medium containment, until the compatibility issues are addressed. This underscores the importance of investigating the requirements and techniques in advance, to forestall this sort of situation. Existing AI safety research has focused primarily on measures suitable for heavy cont ainment - that is, measures with high reliability but with a very high cost to computational performance and AGI functionality. This is sensible; containing a superintelligence is very hard, and people’s risk and difficulty estimates are easily miscalibrat ed, and it is more important to have containment that works than it is to have it run fast and support the best development tools. Still, this should not be the end of the story. Th ese guidelines are the first to meaningfully address light containment and medium containment. It aims to strike an appropriate balance between caution and practicality. 8. Sensitive Information Management In most applications of cybersecurity, there isn’t very much uncertainty about which kinds of information are sensitive and need to be restricted; but when an AGI enters the picture, this becomes much less clear, because an AGI that’s trying to evade containment may be helped by information about what stands in its way. Detailed information about this might comprise personal inform ation about the researchers, books on psychology, negotiation or salesmanship, or books on cybersecurity and manuals to the software and hardware the AGI is running on. Any of these would make it easier for the AGI to invent attacks. A slightly more sophis ticated AGI would be able to make use of more indirect information, such as works of literature to learn about psychology or content about computing, engineering, or physics to learn about its environment. However, there’s no fundamental reason an AGI woul d be limited to learning from such explicit information; an AGI that can perform experiments may be able to figure out its environment the hard way, as humanity has. Limiting access to detailed information about the AGI’s containment is superficially simil ar to what is referred to pejoratively as “security through obscurity”, but there is an important difference between the usual cybersecurity 10 http://lesswrong.com/lw/3cz/cryptographic_boxes_for_unfriendly_ai 8 context and the AGI containment context: researchers have control over all of the information that the contained AG I learns from. For this reason, hiding information from the AGI is more feasible than maintaining secrets in a deployed system. This kind of information restriction is palatable for many applications of AGI. However, it poses a challenging tradeoff with th e requirements of value learning, which is another key project for AGI safety; value learning requires information about our preferences (stated or enacted, abstract or concrete, etc.), and this information would be relevant to manipulating people. This tr adeoff will require more research attention; there may be adequate subsets of preference information for developing value learning techniques that are nonetheless inadequate for social engineering attacks . From our current vantage point it seems that the r equirements of value learning will diminish the effectiveness of information -hiding measures targeted at obstructing social engineering, so more intensive non -information -hiding - based measures will be needed to manage the resulting risk. 8.1 Strategies The light containment approach is to avoid giving access to sensitive information of the kinds mentioned. Because this is a broad class of data, there would also be a risk of some researcher accidentally providing sensitive data that’s embedded in some larger dataset; a possible light containment tool would be something akin to a content filter, though ultimately discretion about what data is shared would necessarily fall to the researcher. A medium containment approach would be to prevent the AGI from deducin g that it’s running as a particular piece of software in the world by letting it interact only with a virtual environment, or some computationally well -defined domain, with as few embedded clues as possible about the outside world. (Arguably projects like the DeepMind’s Atari -game -playing AI [25] are already doing this.) If it was possible to embed all of the AGI’s computations (not merely its action, reward, and sense streams) inside a virtual world and get theoretical guarantees that a ful l description of the virtual world doesn’t contain enough information to deduce anything significant about the world outside, then this would be a way of providing heavy containment. Getting these theoretical guarantees may not be tractable, however. The m ost pressing aspects of the AGI Containment problem are to open all avenues of investigation, and to conduct research on subtopics that require a lot of lead time before their results can be realistically put into practice, usually due to technical complex ity. The sensitive information management problem does not appear to have very much technical depth, unless we consider the heavy containment problem, which seems mathematically challenging. However, AI safety researchers will need to bring clarity to the question of what constitutes sensitive information. This topic has been discussed in an AI safety context, though not, to our knowledge, investigated deeply. There is substantial uncertainty about whether our intuitions about what can be learned from a cor pus are an acceptable guide; for example, Yudkowsky11 has speculated that a very small amount of data is enough to reveal much of how the world works to a superintelligent mind, while Dileep George, cofounder of the AI startup Vicarious, has argued that a great deal of gradual observation is needed for knowledge acquisition. Some theoretical work has been done on the topic, but it’s not sufficient to answer practical questions [38]. 9. Conc rete Mechanisms There are many off -the-shelf tools that implement sandboxing. However, none of them were developed with the AI containment threat model in mind and many of them are difficult to reconcile with the requirements posed by AI software developme nt. We’ve grouped these into categories; sandboxing techniques within a category tend to have similar strengths and limitations, and it is generally more useful to combine techniques from different categories than from the same category. A prototyp ical AI container will attempt to combine techniques from as many distinct categories as is feasible, in a modular fashion. The categories we have identified are: ● Physical isolation , aka airgapping. 11 http://lesswrong.com/lw/qk/that_alien_message 9 ● Language -semantics based methods , such as the Java sandbox - i.e., the use of a sound type -system with memory safety to prevent the introduction of new code. Of particular interest are Lua (with the Torch7 library) and Python (with the Theano library), which might be able to implement this kind of sandbox but which inco rporate components in C that might make them insecure. These are currently popular software packages for implementing machine learning algorithms [11]. ● Operating -system -based process sandboxing methods , such as Linux’s seccomp mode, FreeBSD’s Capsicum, and the Chrome sandbox. Th ese work by starting a process and then signaling to the operating system that most APIs will not be used and should be disabled, before the execution of any risky code. ● Virtual machines , such as VirtualBox and QEMU. These use CPU hardware features to crea te multiple access levels within a computer, each with its own operating system. These might be improved by new security features in future CPU architectures, such as the Mill12, which has features to defend against some classes of exploits. ● Instruction -level emulation . This technique is used heavily in malware analysis tools such as V2E [65], but imposes heavy computational overhead. Since many software sandboxing tools already exist, we think creating another one from scratch is likely to be premature. Instead, AI safety researchers should concentrate on finding existing tools, attempting to adapt t hem to the AI containment threat model and analyzing them in that context. The main difficulty will be adapting these sandboxing methods to a multi -node cluster context; AGIs are expected to depend heavily on parallel computation, which means they need to run on multiple hosts connected to each other. This will most likely mean a cluster of compute nodes within a single data center, connected to each other but not to the internet, where there is both a high -privilege enforcer and a low -trust AGI process on each node. Since none of these sandboxing mechanisms were designed with this requirement in mind, we expect most of these mechanisms to be either unsuitable, or to require considerable adaptation and configuration, performed with great care. The other majo r consideration will be the need for GPGPUs computing. Because GPGPU involves using two heterogeneous compute engines and the interaction between them, it tends to require significantly greater privileges than CPU computation. In particular, GPU drivers co mbine a very complicated attack surface with an unavoidable need for low -level hardware access, posing considerable risk. We are not yet sure how best to address this risk, but it might be possible to shrink this attack surface by validating and limiting G PU API calls or modifying open source graphics drivers. 10. Graceful Degradation 10.1 Defense in Depth Because an AGI could potentially discover novel security vulnerabilities, it’s important for an AGI containment system to remain secure even if there are vul nerabilities in its components. Therefore, we envision a system featuring defense in depth, with multiple techniques providing overlapping guarantees of integrity. Conceptually, these overlapping techniques act as layers, in the sense that the attack surfa ces of an outer layer are only accessible to probing and attack if the integrity guarantees of all inner layers have been broken. This should be combined with intrusion detection systems, so that any attempt by an AI to break the integrity constraints impo sed on it would be noticed, allowing the developers to shut it down. Ideally the traps and detection systems would be implemented in higher layers, making them more difficult to detect and circumvent. When deciding which interfaces to expose and which inte rfaces to architect around protecting, researchers should look at design attributes like complexity and check exploit databases such as NVD13, OSVDB14, and EDB15, using projects’ security history as a warning if they’re likely to have a bad security future. 12 http://millcomputing.com/topic/security/ 13 https://n vd.nist.gov 14 http://osvdb.org 15 http://exploit -db.com 10 This part of our research applies to all levels of containment (light, medium, and heavy); a medium containment system would use a subset of the layers used by a heavy containment system, while a light containment system would use a subset of the layers us ed by a medium containment system. Th ese guidelines will enable developers to set up effective containment systems, assess their systems’ security, and assess the consequences of a breach or partial breach. Our initial work in this area has indicated which features of AGI containment systems are most in need of work, and we expect continued work in this area to expose more necessary lines of inquiry. 11. Tripwires Tripwires are systems that monitor the operation of a running AGI, and shut it down if they detect an anomaly that suggests the AGI might be malfunctioning or unsafe. For example, one might install a tripwire which monitors the AGI’s thoughts for signs that it was planning to deceive its developers [10], or a tripwire which monitors the AGI’s execution for signs that it had self -modified or self -improved. Similarly, intrusion detection systems are tripwires which detect if code is executing somewhere it shouldn’t; they increase the value of multi -layer architectures like the one described in the previous section, by adding a cost to unsuccessful attempts to break the system’s integrity and preventing an AI from keeping its progress in breaking out over ti me. We classify tripwires into two general categories. Semantic tripwires refer to the contents of an AGI’s thoughts or plans; their design requires detailed knowledge of the algorithms and data structures it will be using. Technical tripwires do not refer to the AGI’s thoughts or plans, but only to algorithm -independent aspects of its execution. In this classification, detecting that the AGI was formulating a plan to deceive its developers would be a semantic tripwire, while detecting that the AGI had self -modified would be a technical tripwire. Intrusion detection systems are another kind of tripwire. Semantic tripwires present highly unusual requirements. Conceptually, a semantic tripwire is similar to the assert statement used in many programming languag es. Unlike a simple assert statement, however, it’s desirable to keep the implementation details of a semantic tripwire inaccessible to the program it’s located in; it would improve security if the tripwires could be somehow excluded from the AI’s readable address space. Because the algorithms that will be used by real AGIs have mostly not been developed yet, designing the specific contents of semantic tripwires does not yet seem feasible; AI safety researchers should instead focus on designing technical tr ipwires, and on algorithm - independent aspects of tripwire placement, such as how to add tripwires in such a way that an AGI with unrestricted read access to its own memory couldn’t study them to find workarounds. AI safety researchers should also investiga te the human factors involved in tripwires, which could be ignored or disabled by AGI developers if they seem to be having false positives, to disastrous effect. Figure 3. User error can negate most defenses There are many intrusion detection systems [26, 13] which might be adapted to AI containment. However, they assume a significantly different threat model and context which may limit their applicabi lity and which requires great attention to detail in order to operate. Existing research into honeypots [42] will also be helpful for designing tripwired/fake escape paths. 12. Human Factors AI containment presents a mix of both conventional human -factor problems, and some very novel and unusual ones. The most conventional problem is that for anyone involved in a scen ario involving AGI, there are very high stakes, 11 which can lead to high stress and heightened risk of mistakes. There are several problems that are specific to the circumstances, but reminiscent of similar problems seen elsewhere. There is a risk that an em ployee could release the AGI deliberately, for ideological reasons [12, 14] . There is also a risk that external parties could undermine the security of the containment system: for example, an AGI develo pment lab with a working prototype is likely to attract espionage. The more unusual risk comes from the AGI itself. An AI with human -level social manipulation powers could have many ways to use social engineering to escape, and AI with other cognitive powe rs (or superhuman social manipulation powers) could muster significant leverage to negotiate with. An AGI which had superhuman intelligence would be quite persuasive indeed. What’s more, just as an AGI might use side -channels or advanced persistent threats to break its technical containment infrastructure, it may also find similarly difficult -to-control ways to convey messages to the researchers or to distant third parties, eg by adjusting its computation to shape its power -usage graphs or embedding stegano graphically hidden messages in images it produces - presumably other modes also exist that we’re not yet aware of. The actions that release the AI can look harmless or unrelated, especially if messages reach non -experts who don’t understand the significanc e of technical actions like plugging things in. Most existing security human -factors research involves preventing problematic actions by laypeople or by people who may be spread out through a large company. In the case of AI containment, only very a small number of people need to interact directly with an AGI, and so they can be screened and trained carefully, making the problem easier. On the other hand, those few carefully selected personnel could be exposed to extremely clever trickery. Informal studies of the human factors of AI containment have provided reason to think it’s difficult. The AI box experiment [69] and replication [44] provide data about the sorts of strategies an AGI would use to convi nce humans to let it out of a container deliberately, but countermeasures have not yet been explored, nor have the strategies one might use to make humans open a container accidentally. 13. Conclusion s In the face of the challenges of developing and controlli ng increasingly sophisticated AIs, it is crucial to anticipate and restrict the ways an AI could perform unwanted actions. There has been little work on establishing practical ways for AI research and development to enforce these restrictions; our paper provides guidelines so that the technologies and information required will be ready to be used by AI projects as they advance toward human -comparable intelligence and beyond. Acknowledgements Authors are grateful to Jaan Tallinn and Effective Altruism Ventures for providing funding towards this project, and to Victoria Krakovna and Evan Hefner for their feedback. Roman Yampolskiy is also grateful to Elon Musk and the Future of Life Institute fo r partially funding his work. References [1] G. Abboud, J. Marean and R. V. Yampolskiy, Steganography and Visual Cryptography in Computer Forensics , Systematic Approaches to Digital Forensic Engineering (SADFE), 2010 Fifth IEEE International Workshop on , IEEE, 2010, pp. 25 - 32. [2] N. Ali, M. Hindi and R. V. Yampolskiy, Evaluation of Authorship Attribution Software on a Chat Bot Corpus , 23rd International Symposium on Information, Communication and Automation Technologies (ICAT201 1), Sarajevo, Bosnia and Herzegovina, October 27 -29, 2011, pp. 1 -6. [3] N. Ali, D. Schaeffer and R. V. Yampolskiy, Linguistic Profiling and Behavioral Drift in Chat Bots , Midwest Artificial Intelligence and Cognitive Science Conference (2012), pp. 27. [4] D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman and D. Mané, Concrete problems in AI safety , arXiv preprint arXiv:1606.06565 (2016). [5] S. Armstrong, N. Bostrom and C. Shulman, Racing to the precipice: a model of artificial intelligence deve lopment , AI & society, 31 (2016), pp. 201 -206. 12 [6] S. Armstrong, A. Sandberg and N. Bostrom, Thinking inside the box: Controlling and using an oracle ai , Minds and Machines, 22 (2012), pp. 299 -324. [7] S. Armstrong and K. Sotala, How we’re predicting AI –or failing to , Beyond Artificial Intelligence , Springer, 2015, pp. 11 - 29. [8] J. Babcock, J. Kramar and R. Yampolskiy, The AGI Containment Problem , The Ninth Conference on Artificial General Intelligence (AGI2015) , NYC, USA, July 16 -19, 2016. [9] M. B. Beck, E. C. Rouchka and R. V. Yampolskiy, Finding Data in DNA: Computer Forensic Investigations of Living Organisms , Digital Forensics and Cyber Crime , Springer Berlin Heidelberg, 2013, pp. 204 -219. [10] N. Bostrom, Superintelligence: Paths, dangers, strategies , Oxford University Press, 2014. [11] R. Collobert, K. Kavukcuoglu and C. Farabet, Implementing neural networks efficiently , Neural Networks: Tricks of the Trade , Springer, 2012, pp. 537 -557. [12] E. Dietrich, After the Humans are Gone , Journal of Experime ntal & Theoretical Artificial Intelligence, 19(1) (2007), pp. 55 - 67. [13] T. Garfinkel and M. Rosenblum, A Virtual Machine Introspection Based Architecture for Intrusion Detection , Ndss , 2003, pp. 191 -206. [14] H. D. Garis, The Artilect War , ETC publicatio ns, 2005. [15] I. J. Good, Speculations Concerning the First Ultraintelligent Machine , Advances in Computers, 6 (1966), pp. 31 -88. [16] M. Guri, A. Kachlon, O. Hasson, G. Kedma, Y. Mirsky and Y. Elovici, Gsmem: Data exfiltration from air -gapped computers over gsm frequencies , 24th USENIX Security Symposium (USENIX Security 15) , 2015, pp. 849 -864. [17] R. Kurzweil, The Singularity is Near: When Humans Transcend Biology , Viking Press, 2005. [18] Y. Lecun, Y. Bengio and G. Hinton, Deep learning , Nature, 521 (2 015), pp. 436 -444. [19] S. Legg, Machine Super Intelligence , PhD Thesis, University of Lugano , Available at: http://www.vetta.org/documents/Machine_Super_Intelligence.pdf , June 2008. [20] R. Loosemore and B. Goertzel, Why an intelligence explosion is probable , Singularity Hypotheses , Springer, 2012, pp. 83 - 98. [21] M. Losavio, O. Nasraoui, V. Thacker, J. Marean, N. Miles, R. Yampolskiy and I. Imam, Assessing the Legal Risks in Ne twork Forensic Probing , Advances in Digital Forensics V (2009), pp. 255 -266. [22] A. Majot and R. Yampolskiy, Global catastrophic risk and security implications of quantum computers , Futures, 72 (2015), pp. 17 -26. [23] A. M. Majot and R. V. Yampolskiy, AI safety engineering through introduction of self -reference into felicific calculus via artificial pain and pleasure , IEEE International Symposium on Ethics in Science, Technology and Engineering , IEEE, Chicago, IL, May 23 -24, 2014, pp. 1 -6. [24] R. Mcdaniel and R. V. Yampolskiy, Embedded non -interactive CAPTCHA for Fischer Random Chess , 16th International Conference on Computer Games (CGAMES) , IEEE, Louisville, KY, 2011, pp. 284 -287. [25] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Belle mare, A. Graves, M. Riedmiller, A. K. Fidjeland and G. Ostrovski, Human -level control through deep reinforcement learning , Nature, 518 (2015), pp. 529 -533. [26] C. Modi, D. Patel, B. Borisaniya, H. Patel, A. Patel and M. Rajarajan, A survey of intrusion detection techniques in cloud , Journal of Network and Computer Applications, 36 (2013), pp. 42 -57. [27] A. Mohamed, N. Baili, D. D’souza and R. V. Yampolskiy, Avatar Face Recognition Using Wavelet Transform and Hierarchical Multi -scale LBP , Tenth Internati onal Conference on Machine Learning and Applications (ICMLA'11) , Honolulu, USA, December 18 -21, 2011. [28] L. Muehlhauser and R. Yampolskiy, Roman Yampolskiy on AI Safety Engineering , Machine Intelligence Research Institute , Available at: http://intelligence.org/2013/07/15/roman -interview/ July 15, 2013. [29] D. Novikov, R. V. Yampolskiy and L. Reznik, Anomaly Detection Based Intrusion Detection , Third International Conference on Informatio n Technology: New Generations (ITNG 2006) , Las Vegas, Nevada, USA, April 10 -12, 2006., pp. 420 -425. [30] D. Novikov, R. V. Yampolskiy and L. Reznik, Artificial Intelligence Approaches for Intrusion Detection , Long Island Systems Applications and Technology Conference (LISAT2006). , Long Island, New York., May 5, 2006., pp. 1 -8. [31] S. M. Omohundro, The Basic AI Drives , Proceedings of the First AGI Conference, Volume 171, Frontiers in Artificial Intelligence and Applications, P. Wang, B. Goertzel, and S. Fr anklin (eds.) , IOS Press, February 2008. [32] S. M. Omohundro, The Nature of Self -Improving Artificial Intelligence , Singularity Summit , San Francisco, CA, 2007. [33] F. Pistono and R. V. Yampolskiy, Unethical Research: How to Create a Malevolent Artificia l Intelligence , 25th International Joint Conference on Artificial Intelligence (IJCAI -16). Ethics for Artificial Intelligence Workshop (AI -Ethics -2016) , 2016. [34] B. Potter and G. Mcgraw, Software security testing , IEEE Security & Privacy, 2 (2004), pp. 8 1-85. [35] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach , Prentice Hall, Upper Saddle River, NJ, 2003. [36] N. Soares, The value learning problem , Machine Intelligence Research Institute, Berkley, CA, USA (2015). [37] N. Soares, B. F allenstein, S. Armstrong and E. Yudkowsky, Corrigibility , Workshops at the Twenty -Ninth AAAI Conference on Artificial Intelligence , Austin, Texas, USA, January 25 -30, 2015. [38] R. J. Solomonoff, A formal theory of inductive inference. Part I , Information and control, 7 (1964), pp. 1 -22. 13 [39] R. J. Solomonoff, The Time Scale of Artificial Intelligence: Reflections on Social Effects , North -Holland Human Systems Management, 5 (1985), pp. 149 -153. [40] K. Sotala, Advantages of artificial intelligences, uploads , and digital minds , International Journal of Machine Consciousness, 4 (2012), pp. 275 -291. [41] K. Sotala and R. V. Yampolskiy, Responses to catastrophic AGI risk: a survey , Physica Scripta, 90 (2015), pp. 018001. [42] L. Spitzner, Know your enemy: Honeyn ets, Honeynet Project (2005). [43] A. M. Turing, Intelligent Machinery, A Heretical Theory , Philosophia Mathematica, 4(3) (1996), pp. 256 -260. [44] Tuxedage, I attempted the AI Box Experiment again! (And won - Twice!) , Available at: http://lesswrong.com/lw/ij4/i_attempted_the_ai_box_experiment_again_and_won/ , September 5, 2013. [45] V. Vinge, The Coming Technological Singularity: How to Survive in the Post -human Era , Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace , Cleveland, OH, March 30 -31, 1993, pp. 11 -22. [46] N. Wiener, Cybernetics or Control and Communication in the Animal and the Machine , MIT press, 1961. [47] R. Yampolskiy, G. Cho, R. Rosenthal and M. Gavrilova, Experiments in Artimetrics: Avatar Face Recognition , Transactions on Computational Science XVI (2012), pp. 77 -94. [48] R. Yampolskiy and J. Fox, Safety engineering for artificial general intelligence , Topoi, 32 (2013), pp. 217 -226. [49] R. V. Yampolskiy, Artificial intelligence safety engineering: Why machine ethics is a wrong approach , Philosophy and Theory of Artificial Intelligence , Springer, 2013, pp. 389 -396. [50] R. V. Yampolskiy, Artificial Superi ntelligence: a Futuristic Approach , Chapman and Hall/CRC, 2015. [51] R. V. Yampolskiy, Behavioral Modeling: an Overview , American Journal of Applied Sciences, 5 (2008), pp. 496 -503. [52] R. V. Yampolskiy, Computer Security: from Passwords to Behavioral Bio metrics , New Academic Publishing, 2008. [53] R. V. Yampolskiy, From Seed AI to Technological Singularity via Recursively Self -Improving Software , arXiv preprint arXiv:1502.06512 (2015). [54] R. V. Yampolskiy, Leakproofing Singularity - Artificial Intelligence Confinement Problem , Journal of Consciousness Studies (JCS), 19(1 -2) (2012), pp. 194 –214. [55] R. V. Yampolskiy, Taxonomy of Pathways to Dangerous Artificial Intelligence , Workshops at the Thirtieth AAAI Conference on Artificial Intelligence , 2016. [56] R. V. Yampolskiy, Turing Test as a Defining Feature of AI -Completeness , Artificial Intelligence, Evolutionary Computation and Metaheuristics - In the footsteps of Alan Turing. Xin -She Yang (Ed.) , Springer, 2013, pp. 3 -17. [57] R. V. Yampolskiy, The Universe of Minds , arXiv preprint arXiv:1410.0369 (2014). [58] R. V. Yampolskiy, Utility function security in artificially intelligent agents , Journal of Experimental & Theoretical Artificial Intelligence, 26 (2014), pp. 373 -389. [59] R. V. Yampolskiy, What to Do with the Singularity Paradox? , Philosophy and Theory of Artificial Intelligence , Springer, 2013, pp. 397 -413. [60] R. V. Yampolskiy and M. L. Gavrilova, Artimetrics: Biometrics for Artificial Entities , Robotics & Automation Magazine, IEEE, 19 ( 2012), pp. 48 -58. [61] R. V. Yampolskiy and V. Govindaraju, Behavioural biometrics: a survey and classification , International Journal of Biometrics, 1 (2008), pp. 81 -113. [62] R. V. Yampolskiy and V. Govindaraju, Strategy -based behavioural biometrics: a n ovel approach to automated identification , International Journal of Computer Applications in Technology, 35 (2009), pp. 29 -41. [63] R. V. Yampolskiy, J. D. Rebolledo -Mendez and M. M. Hindi, Password Protected Visual Cryptography via Cellular Automaton Rule 30, Transactions on Data Hiding and Multimedia Security IX , Springer Berlin Heidelberg, 2014, pp. 57 - 67. [64] R. V. Yampolskiy and M. Spellchecker, Artificial Intelligence Safety and Cybersecurity: a Timeline of AI Failures , arXiv preprint arXiv:1610.0799 7 (2016). [65] L.-K. Yan, M. Jayachandra, M. Zhang and H. Yin, V2E: combining hardware virtualization and softwareemulation for transparent and extensible malware analysis , ACM Sigplan Notices, 47 (2012), pp. 227 -238. [66] E. Yudkowsky, Artificial Intellig ence as a Positive and Negative Factor in Global Risk , in N. Bostrom and M. M. Cirkovic, eds., Global Catastrophic Risks , Oxford University Press, Oxford, UK, 2008, pp. 308 -345. [67] E. Yudkowsky, Complex value systems in friendly AI , Artificial general in telligence , Springer, 2011, pp. 388 -393. [68] E. Yudkowsky and R. Hanson, The Hanson -Yudkowsky AI -foom debate , MIRI Technical Report , Available at: http://intelligence.org/files/AIFoomDebate.pd f, 2008. [69] E. S. Yudkowsky, The AI -Box Experiment , Available at: http://yudkowsky.net/singularity/aibox , 2002.
08d1db81-d56a-4d4f-9e75-32ac0f371e86
StampyAI/alignment-research-dataset/special_docs
Other
AI Education in China and the United States September 2021 AI Education in China and the United States A Comparative Assessment CSET Issue Brief AUTHORS Dahlia Peterson Kayla Goode Diana Gehlhaus Center for Security and Emerging Technology | 1 Executive Summary Many in the national security community are concerned about China’s rising dominance in artificial intelligence and AI talent. That makes leading in AI workforce competitiveness critical, which hinges on developing and sustaining the best and brightest AI talent. This includes top-tier computer scientists, software engineers, database architects, and other technical workers that can effectively create, modify, and operate AI-enabled machines and other products. This issue brief informs the question of strategic advantage in AI talent by comparing efforts to integrate AI education in China and the United States. We consider key differences in system design and oversight, as well as in strategic planning. We then explore implications for maintaining a competitive edge in AI talent. (This report accompanies an introductory brief of both countries’ education systems: “Education in China and the United States: A Comparative System Overview.”) Both the United States and China are making progress in integrating AI education into their workforce development systems, but are approaching education goals in different ways. China is using its centralized authority to mandate AI education in its high school curricula and for AI companies to partner with schools and universities to train students. Since 2018, the government also approved 345 universities to offer an AI major, now the country’s most popular new major, and at least 34 universities have launched their own AI institutes. The United States is experimenting with AI education curricula and industry partnership initiatives, although in a piecemeal way that varies by state and places a heavier emphasis on computer science education. Both countries’ approaches could result in uneven levels of AI workforce competitiveness, although for similar and different reasons. China’s centralized push could lead to widespread integration of AI education, but the resulting curricula could be shoddy for the sake of participating in the “AI gold rush.” This risk Center for Security and Emerging Technology | 2 is especially pronounced in under-resourced areas, which could produce underwhelming results. The United States’ varied, decentralized approach may allow for greater experimentation and innovation in how AI curricula are developed and implemented, but diverse approaches may exacerbate disparities in curriculum rigor, student achievement standards, and educator qualifications. As for similarities, the two countries share hurdles such as the rural-urban divide, equitable access to quality AI education, and teacher quality. Ultimately, this report suggests future U.S. science and technology education policy should be considered in a globally competitive context instead of viewing it exclusively as a domestic challenge. For the United States, that consideration includes recognizing and capitalizing on its enduring advantage in attracting and retaining elite talent, including Chinese nationals. While this brief does not make policy recommendations for the U.S. education system, the upcoming CSET report “U.S. AI Workforce: Policy Recommendations” addresses some of the direct implications of the findings presented. Center for Security and Emerging Technology | 3 Table of Contents Executive Summary .............................................................................................. 1 Introduction ............................................................................................................. 4 Overview of China’s Education System .......................................................... 5 Overview of the United States’ Education System ..................................... 6 Integration of AI Education in China ................................................................ 8 Primary and Secondary AI Education ...................................................... 10 Role of Talent Training Bases ................................................................ 10 Mandated High School AI Curriculum ................................................. 11 Teacher Conferences ................................................................................ 11 Role of Private Sector ............................................................................... 11 Issues Faced ................................................................................................ 12 Undergraduate AI Education ...................................................................... 13 AI Institutes ................................................................................................. 13 Standardized AI Major ............................................................................. 14 Role of Private Sector ............................................................................... 18 Graduate AI Education ................................................................................. 19 Integration of AI Education in the United States ...................................... 22 Primary and Secondary AI Education ...................................................... 22 AI and Computer Science Curriculum ................................................. 22 Teacher Training ........................................................................................ 24 Role of Private Sector ............................................................................... 24 Issues Faced ................................................................................................ 25 Postsecondary AI Education ....................................................................... 26 Undergraduate AI Education ................................................................. 26 Graduate AI Education ............................................................................. 27 Role of Private Sector ............................................................................... 28 Implications for U.S. AI Workforce Competitiveness .............................. 29 China’s Strengths and Shortcomings ...................................................... 29 U.S. Strengths and Shortcomings ............................................................. 33 Implications ...................................................................................................... 34 Conclusion ............................................................................................................. 38 Authors .................................................................................................................. 41 Acknowledgments ............................................................................................. 41 Endnotes ................................................................................................................ 42 Center for Security and Emerging Technology | 4 Introduction Much has been written on how the Chinese government recruits foreign artificial intelligence talent. However, little is known about China’s ongoing initiatives to build their own AI workforce. Existing scholarship also lacks a detailed examination of how Chinese and U.S. approaches differ when it comes to AI education. To fill this gap, this issue brief details how both countries are integrating AI education and training into every level of education. It discusses potential national security implications of each country’s strengths and weaknesses, and highlights improvement areas for future U.S. science and technology (S&T) education and workforce policy. We aim to provide a clear-eyed assessment of the U.S. approach to AI education as it exists within the country’s decentralized education system. A discussion of the strengths and weaknesses of these systemic realities, especially relative to China’s system, will help policymakers better address critical barriers to U.S. AI competitiveness. The research presented in this brief is based on primary source U.S. statistics, reports and assessments from education nonprofits, publicly available information from the private sector, and individual states’ departments of education, along with Chinese education plans and policies, official statistics, and translations. The data is often defined and categorized differently, making uniform comparisons difficult. We attempt to clarify such differences when they occur. Center for Security and Emerging Technology | 5 Overview of China’s Education System China’s education system is overall more centralized than its U.S. counterpart. Its education system includes 282 million students in 530,000 educational institutions across all levels. China’s Ministry of Education is the main authority overseeing China’s education system, and is responsible for certifying teachers, setting national education goals, curricula and teaching material, and providing limited funding assistance.1 While the MOE supervises provincial education departments, it has granted more implementation responsibility to the provincial and municipal levels over recent decades.2 Responsibilities at the provincial and major city level include following national guidelines to develop provincial curricula based on developing an implementation plan that incorporates local contexts and MOE national curriculum guidance, then sending the plan to the MOE for approval before implementation.3 Further local responsibilities include administering teaching materials, school programs, providing education subsidies, and setting additional standards for teacher training.4 The MOE establishes goals for its education system through five- to 15-year education strategies. The goals for 2010–2020 include universalizing preschool education; improving nine-year compulsory education; raising senior high school gross enrollment rate to 90 percent (which has already been exceeded); and increasing the higher education gross enrollment rate to 40 percent. Provinces then typically follow to create their own education plans.5 The MOE’s Bureau of Education Inspections monitors implementation and provides feedback to local governments.6 For details on China’s education system, see the accompanying brief “Education in China and the United States: A Comparative System Overview.” Center for Security and Emerging Technology | 6 Overview of the United States’ Education System The U.S. education system is more decentralized than its Chinese counterpart, especially for primary and secondary education. Each state’s department of education is the authority that determines the laws that finance schools, hire educators, mandate student attendance, and implement curricula. In contrast to China’s MOE, the U.S. federal government provides relatively minor education oversight through the compilation and reporting of education statistics, along with promoting equitable access to education and enforcing a prohibition on institutional discrimination.7 The U.S. Department of Education, the United States’ federal agency for education, proclaims that education is a “state and local responsibility,” and the federal government’s role in education is more of a “kind of emergency response system” to fill gaps when “critical national needs arise.”8 The most notable federal education initiatives, such as the Elementary and Secondary Education Act of 1965, the No Child Left Behind Act of 2002, and the Every Student Succeeds Act of 2015, reflect the U.S. government’s efforts to promote childrens’ equal access to quality public education. At the postsecondary level, the federal government has slightly more authority through its administration of student financial aid. The Department of Education supports programs that provide grants, financial aid (loans), and work-study assistance. Roughly 66 percent of students apply for federal financial assistance.9 The department’s student loan programs have more than 43 million outstanding borrowers, with outstanding student debt now over $1.7 trillion.10 The jurisdiction of the U.S. Department of Education is rooted in the U.S. Constitution. As a result of the division in constitutional authority, states develop curriculum guidelines and performance standards, license private elementary and secondary schools to operate within their jurisdictions, certify teachers and administrators, administer statewide student achievement tests, and distribute state and federal funding to school districts.11 Center for Security and Emerging Technology | 7 Additionally, education in the United States is segmented between public and private schools, including religious and nonsectarian institutions. For details on the United States’ education system, see the accompanying brief “Education in China and the United States: A Comparative System Overview.” Center for Security and Emerging Technology | 8 Integration of AI Education in China Since 2017, China has released several strategic plans relevant to AI education. The most well-known of the plans—the State Council’s seminal July 2017 New Generation AI Development Plan—called for implementing AI training at every level of education. Another major push in AI education is through the “Double First Class University” (双一流大学) initiative, a 2017 program under Chinese President Xi Jinping that built upon previous reforms such as Projects 211 and 985 to create world-class universities.12 Nearly all of the MOE’s directly supervised and funded 75 universities are also “Double First Class.” The initiative split universities into two tracks: 42 universities were selected as world-class universities, and split respectively into 36 “Class A” (already close to being world class) and 6 “Class B” (potential to be world class) universities.13 This initiative essentially pared down the number of top universities China was focusing on.14 Figure 1 contains a timeline of these plans and their associated goals. Center for Security and Emerging Technology | 9 Figure 1. Timeline of China’s AI Education Strategy and Implementation (2017–2020) Source: Chinese Ministry of Education, State Council. Center for Security and Emerging Technology | 10 Primary and Secondary AI Education China is actively integrating AI education into young students’ education. These efforts are primarily characterized at the primary school level with introductory Python courses, and access to labs featuring robotics, drones, and 3D printing. Local governments have recently begun awarding schools for excellence. Since 2018, the MOE has mandated high schools to teach AI coursework. Role of Talent Training Bases Talent training bases are one of the newer ways that AI education is gaining momentum at the primary and secondary level. Shandong Province and Beijing began awarding “National Youth AI Innovation Talent Training Base” (全国青少年人工智能创新人才培养基地) honors to their schools between November and December 2020.15 Schools are chosen for demonstrating excellence in AI education. Primary schools are rewarded for offering rudimentary “technology and society” and “maker” courses and access to 3D printing and drones. Junior high schools gain recognition for “AI clubs,” robotics rooms, 3D printers, open source AI frameworks, utilize Python, and have won robotics competitions.16 The certification, according to images of award placards, is for two years.17 Shandong awarded 10 schools the certification, while Beijing awarded 21, including a S&T museum for youth.18 Beijing appears to have had a strong start to its AI education program. As part of the talent training base initiative, one hundred teachers were awarded certificates to be “AI literacy assessors.”19 As part of the project, the Beijing Youth AI Literacy Improvement Project also laid out numerical goals for its next three to five years: it plans to support the creation of ten AI experimental areas, create one hundred AI education experimental schools, select thousands of AI education seed teachers, and train ten thousand young AI talents.20 As the program is only a few months old, it remains to be seen what quality control mechanisms will be used, and how well Beijing will implement its goals. Center for Security and Emerging Technology | 11 Mandated High School AI Curriculum At the high school level, the MOE in January 2018 revised its national education requirements to officially include AI, Internet of Things, and big data processing in its information technology curricula.21 The revision requires high school students enrolled in the fall of 2018 and beyond to take AI coursework in a compulsory information technology course.22 The coursework goals include data encoding techniques; collecting, analyzing, and visualizing data; and learning and using a programming language to design simple algorithms.23 Python is a popular choice, and is even being integrated into the Gaokao as testing material in Beijing as well as Zhejiang and Shandong provinces.24 This integration may incentivize high school students to develop Python expertise at an earlier age, and prepare them for further training at the university level and beyond. Further goals include understanding AI safety and security, and an emphasis on ethics. However, there is also a distinct emphasis on “learning to abide by relevant laws,” which could channel learning in directions considered suitable to the Party-state’s needs. Teacher Conferences At both the primary and secondary school level, a prominent planning and information sharing mechanism is the National Primary and Secondary School Artificial Intelligence Education Conference, which is sponsored by the nonprofit Chinese Association for Artificial Intelligence’s Primary and Secondary School Working Committee.25 The conference focuses on teacher training, use of education platforms, and curriculum design.26 Role of Private Sector Companies and schools often partner to create textbooks. In April 2018, SenseTime and the Massive Open Online Course Center of East China Normal University launched the first domestic AI textbook Artificial Intelligence Basics (High School Edition) for middle school students.27 According to SenseTime, it is currently Center for Security and Emerging Technology | 12 being taught in pilot programs in more than one hundred schools throughout the country in Shanghai and Beijing, as well as Guangdong, Heilongjiang, Jiangsu, Shandong, and Shanxi provinces. SenseTime is also training over nine hundred teachers to teach the material.28 In June 2018, Soochow University Press published the “Artificial Intelligence Series for Primary and Secondary Schools.” In November 2018, a Tencent-backed, globally active company named UBTech Robotics, and East China Normal University Press jointly released the “Future Intelligent Creator on AI-Series of Artificial Intelligence Excellent Courses for Primary and Secondary Schools.”29 More recently, Tsinghua University announced in January 2020 that Yao Qizhi—China’s first winner of the Turing Award, academician of the Chinese Academy of Sciences, and dean of Tsinghua’s Institute of Interdisciplinary Information Sciences—would be editing the textbook Artificial Intelligence (High School Edition).30 Chinese tech giants such as Baidu are also helping to introduce AI to vocational secondary schools. For example, in July 2019, Baidu Education and Beijing Changping Vocational School launched China’s first vocational school-enterprise cooperation initiative on AI education.31 The cooperation identified five dimensions: jointly building Baidu's artificial intelligence innovation space, jointly building artificial intelligence technology and application majors, jointly building training bases for vocational college instructors, jointly carrying out teacher-student skills exchange competitions, and jointly building small and medium-sized subject general experience bases.32 Issues Faced AI education at the primary and secondary school levels faces notable issues. Local education consultants note that one issue is an overly difficult curriculum for young children, especially when students require significant background knowledge to understand algorithms powered by deep learning.33 Additional issues include a lack of systematic and authoritative guidance on textbook Center for Security and Emerging Technology | 13 development, lack of professional training for teachers, and lack of equipment in underequipped school AI labs.34 Undergraduate AI Education China’s AI education push is most prominent at the postsecondary level. The following sections will examine the two main mechanisms for talent training: AI institutes, and the MOE’s standardized AI major. The AI major is now the country’s most studied field.35 AI Institutes AI institutes largely preceded the MOE’s development of the AI major. Both preceding and following the MOE’s release of the “AI Innovation Action Plan for Colleges and Universities” in 2018, at least 34 institutions launched their own AI institutes between 2017–2018 (see Figure 2).36 In 2018, three Seven Sons of National Defense universities joined these ranks. The Seven Sons are directly supervised by the Ministry of Industry and Information Technology (MIIT). Their core mission is to support the People’s Republic of China’s defense research, its industrial base, and military-civil fusion to merge civilian research into military applications.37 Aside from three Seven Sons in 2018, at least three universities even launched their institutes before the July 2017 Next Generation AI Plan from July 2017.38 However, it is common for institutes’ “About Us” pages to cite the “AI Innovation Action Plan” and the national AI plan as its reason for creation.39 Unlike the AI major, which is clearly targeted as an undergraduate major, institutes are significantly more heterogeneous in their research foci, which range from natural language processing to robotics, medical imaging, smart green technology, and unmanned systems.40 Likewise, they often train both undergraduate and graduate students, and in some cases offer the AI major within their institution.41 Companies also play an establishing role for AI institutes. For example, Chongqing University of Posts and Telecommunications Center for Security and Emerging Technology | 14 set up an AI institute in 2018 with AI unicorn iFlytek, while Tencent Cloud established AI institutes in 2018 with Shandong University of Science and Technology and Liaoning Technical University.42 It is beyond the scope of this paper to examine the AI institutes’ quality indicators. A forthcoming CSET data brief will examine the landscape of AI institutes, their research foci, the degree of overlap with the AI major, and their relationship to China’s key laboratories in greater detail. Standardized AI Major In March 2019, the MOE approved 35 colleges and universities to offer the four-year AI major as an engineering degree, including four of the Seven Sons of National Defense universities.43 Half of the institutes (17) that had previously launched AI institutes were later formally approved to launch the new AI major in the 2019–2020 range. The approval of a new AI major was a notable change from past curricula, when AI was available as a concentration within the computer science major at some universities. In February 2020, the MOE approved 180 more universities—a fivefold increase—to offer the AI major, bringing the total number of approved universities to 215. One of the approved was a fifth member of the Seven Sons.44 In March 2021, the MOE approved 130 universities, including the sixth and seventh of the Seven Sons, bringing the total university count to 345.45 In both 2020 and 2021, the AI major was the most popular new addition to universities’ curricula; in 2021, the next most popular majors included 84 universities offering intelligent manufacturing and engineering, and 62 offering data science and big data technology.46 Additionally, eight universities that had launched AI institutes between 2016–2018 have also begun offering the AI major. The vast majority of these universities are not well known or are business oriented, with the clear exception of Tsinghua. Center for Security and Emerging Technology | 15 Tsinghua University’s AI Offerings Tsinghua had an early foray into AI teaching. In 1979, the predecessor to today’s Department of Electronic Engineering opened the “Introduction to Artificial Intelligence” course, which was one of the earliest AI courses offered by any Chinese university.47 In the late 1980s, the Department of Computer Science and Artificial Intelligence of Tsinghua University established the State Key Laboratory of “Intelligent Technology and Systems,” also known as the “Institute of Human Intelligence.”48 While Tsinghua was only approved in March 2021 to offer the AI major, it already offered AI education via various channels and opened its interdisciplinary Institute of Artificial Intelligence in June 2018. In May 2020, Tsinghua announced the creation of an AI “smart class” (智班), which will be the 8th experimental class of the “Tsinghua Academy Talent Training Program.” Yao Qizhi, who also spearheaded the aforementioned high school AI textbook, will serve as the lead faculty in an interdisciplinary “AI + X” approach, which entails integrating AI with mathematics, computer science, physics, biology, psychology, sociology, law, and other fields.49 In 2019, Tsinghua also combined three minors—Robot Technology Innovation and Entrepreneurship (机器人技术创新创业), Intelligent Hardware Technology Innovation and Entrepreneurship (智能硬件技术创新创业), and Intelligent Transportation Technology Innovation and Entrepreneurship (智能交通技术创新创业)—into one new major titled Artificial Intelligence Innovation (人工智能创新).50 Center for Security and Emerging Technology | 16 Figures 2 and 3 breakdown China’s 345 universities with AI majors and 34 AI institutes, as well as the number that are “Double First Class,” Seven Sons, and the Ivy League-equivalent C9 League.51 Figure 2 presents the information geographically while Figure 3 provides a breakdown by type of institution. Figure 2. Map of Universities Offering the AI Major and AI Institutes Source: Ministry of Education, Synced Review, CSET calculations. Center for Security and Emerging Technology | 17 Figure 3. Share of Elite Universities Offering AI Major and AI Institutes Source: Ministry of Education, Synced Review, CSET calculations. A Chinese AI company called KXCY AI working with several elite Chinese universities suggests that the AI major’s goals are to meet national economic and technological development needs, develop knowledge of basic AI theories, learn research and development (R&D) skills, along with system design, management, and solving complex engineering problems in AI and related applications.52 Further, colleges and universities with existing AI programs were encouraged by the MOE in 2018 to expand their scope to establish “AI + X” majors.53 Beyond the AI major, AI-adjacent majors include data science (数据科学), a major in its fifth year of operation, while other majors Center for Security and Emerging Technology | 18 include big data technology (大数据技术), intelligent manufacturing (智能制造), robotics engineering (机器人工程), and intelligent science and technology (智能科学与技术).54 Role of Private Sector China’s AI enterprises play a significant role in developing AI talent and providing resources to universities through formalized partnerships. One such mechanism is the Information Technology New Engineering Industry-University-Research Alliance (信息技术新工科产学研联盟, or AEEE), founded in 2017 to bolster technological innovation within the industry-university-research nexus. Its founding members included the China Software Industry Association, 27 domestic universities, five research institutes, and 12 companies, with support from MOE’s Higher Education Department and MIIT.55 Chinese companies include Baidu, Alibaba Cloud, Tencent, Huawei, and China Telecom. Of the 27 Chinese universities, 21 offer either an AI major, institute, or both. The roster further includes all Seven Sons of National Defense, and the entire C9 League.56 The AEEE universities include China’s most elite institutions. Of note is that the AEEE also includes U.S. companies Cisco, IBM, and Microsoft.57 A blog post from Microsoft Research Asia (MSRA) from April 2019 indicates its founding role in the alliance’s AI Education Working Committee, its work towards implementing the MOE’s “industry-education integration” (产教融合理念), as well as the award it subsequently received from the alliance for its contributions in curriculum construction, resource sharing, and teacher training.58 MSRA further states it has partnerships with University of Science and Technology of China, Xi'an Jiaotong University, Harbin Institute of Technology (a Seven Sons university with its own AI institute and major), Nanjing University and Zhejiang University, while its AI education seminars have helped three hundred colleges and universities across the country and more than five hundred AI teachers.59 In March 2019, MSRA also worked with another Seven Sons institute, Beihang University, to open a course called “Real Combat in Artificial Intelligence,” which Center for Security and Emerging Technology | 19 attracted at least 30 Beihang students from across 10 majors that year.60 Baidu also plays a prominent role in both the alliance and beyond. It is one of China’s three internet giants, and an “AI champion” as appointed by the Ministry of Science and Technology, working on autonomous driving.61 Between 2018 and 2021, Baidu has signed at least nine AI partnerships with universities. These partnerships are designed for sharing case studies or challenging issues faced in the field, jointly constructing courses, teacher training, campus learning communities, launching competitions, and internship training.62 These partnerships also provide AI Studio training. AI Studio is an AI educational service offered by Baidu, based upon its deep learning platform PaddlePaddle. It provides an online programming environment, free GPU computing power, massive open source algorithms and open data to help developers quickly create and deploy models. AI Studio has 15 partner universities—all but four either offer the AI major, an AI institute, or both.63 Graduate AI Education As alluded to in Figure 1, China signaled it understood the importance of graduate students advancing AI development when the MOE, National Development and Reform Commission, and the Ministry of Finance released a plan in January 2020 calling for increased training for AI graduate students. The plan’s key goals are centered on using the aforementioned “Double First Class” program and interdisciplinary “AI + X” framework to bolster talent development and increase the number of grad students studying AI, especially at the doctoral level.64 Talents are called upon to apply AI to industrial innovation, social governance, national security, and other fields, and support the mission and needs of major national projects and major development plans.65 Additionally, the plan stated AI will be incorporated into the “Special Enrollment Plan for the Cultivation of High-level Talents in Center for Security and Emerging Technology | 20 Key Fields Urgently Needed by the State” (“国家关键领域急需高层次人才培养专项招生计划”). This incorporation has already led to increased supply of AI doctoral positions at institutes such as Nankai University.66 The plan also emphasizes quality control. The degree evaluation committee has established an artificial intelligence working group (高校学位评定委员会设立人工智能专门工作组) responsible for developing advanced AI talent training programs, degree standards and management norms, and performing random quality control inspections of AI dissertations. As with all levels of AI education, China’s AI companies are asked to partner with universities—partnerships focus both on training teachers and students. At the postgraduate level, there is meant to be a revolving door between industry and universities. First, at the instructor level, companies are encouraged to train university instructors in the latest cutting-edge methods. Instructors are asked to incorporate the latest research findings in AI into doctoral courses. From the company side, leading researchers at these companies can also do research through “double employment” at universities. Second, companies are asked to train graduate students by having them solve industry needs. For doctoral students, enterprises are also encouraged to open up "scenario-driven," application-oriented courses, as well as their data, case studies, tools, and training platforms. Enterprises can also utilize industry alliances, joint R&D labs, entrepreneurship and skills competitions, and certification training to help graduate students grow in the field. There is also an emphasis on increased and coordinated funding at the university and enterprise levels. Universities are encouraged to coordinate various resources, such as financial investment and scientific research income, increase support for graduate training, carry out basic frontier research and key common technology research efforts. With enterprises, methods such as angel investment, venture capital, and capital market financing can boost Center for Security and Emerging Technology | 21 major AI projects and applied research within universities, as well as assist with talent training. Although it remains too early to tell how well these policies will tangibly advance China’s talent training and AI capabilities, China’s efforts over the last 20 years in prioritizing and progressing in key areas such as hypersonics and biotechnology indicate that policies are not just wish lists, but backed up by measurable results. Nonetheless, we caution that widespread, haphazard curricula construction for the sake of participating in the “AI gold rush” could hinder these successes. If China is able to fulfill the tall order of ensuring rigorous quality control from its wealthy metropolises to its under-resourced regions, we assess that China could replicate its past successes in policy implementation. Center for Security and Emerging Technology | 22 Integration of AI Education in the United States Integration of AI curricula in the United States is uneven in both depth and scope as it remains the work of states and local school districts to finalize and implement curriculum standards. In recent years, various organizations and companies have helped facilitate progress through efforts to define and establish AI curricula, programs, learning standards, and resources for teachers and students who want to add AI to their education. Primary and Secondary AI Education AI and Computer Science Curriculum Numerous states have recently taken steps toward integrating AI into K-12 classrooms. This often starts with computer science curricula, when available at the school, as it is viewed as a first step towards an AI specialization. However, many schools still lack CS education. From 2019 to 2020, 28 states adopted policies to support K-12 CS education, with efforts disparate across and within states. According to a 2020 report from Code.org, the portion of public high schools teaching CS are as high as 89 percent in Arkansas and as low as 19 percent in Minnesota.67 Moreover, only 22 states have adopted policies that provide all high school students access to CS courses, and of those states, only nine give all K-12 students access.68 Only 47 percent of U.S. public high schools teach CS,69 although that is up from 35 percent in 2018.70 Within the movement among states to adopt CS curricula, the content and learning standards of such curricula will vary by state as each maintains authority over curriculum standards. Different organizations have formed unique versions of CS education standards that present schools with options or frameworks for teaching AI in the classroom. The professional association Computer Science Teachers Association in 2017 released a “core set of standards” to guide teachers in computer science.71 The International Society for Technology in Education, a nonprofit Center for Security and Emerging Technology | 23 headquartered in Washington, DC, collaborated with the CSTA to release its ISTE Standards for Computer Science Educators in 2018.72 CSTA also played a role in developing another set of K-12 CS guidelines, along with the Association for Computing Machinery, Code.org, the Cyber Innovation Center, CS4ALL, and the National Math and Science Initiative, to develop the K-12 Computer Science Framework. Nonprofit Code.org offers K-12 students and educators CS curricula embedded within STEM courses already offered.73 The initiative serves as a conceptual guide to inform curriculum development, and it prioritizes computing systems, cybersecurity, data science, algorithmic programming, and social and cultural impacts of computing.74 Not all STEM and CS education initiatives explicitly state the connection to AI education, but some initiatives do. AI4K12 is an initiative born from a partnership between the Association for the Advancement of AI and CSTA and was funded by the National Science Foundation (NSF) and Carnegie Mellon University.75 AI4K12’s goal is to develop national AI education guidelines for K-12. It currently serves as an online repository of resources for educators. The nonprofit AI Education Project is another example of an organization offering its own AI curriculum to schools and educators nationwide. At least two public schools have developed their own AI curricula. Seckinger High School in Gwinnett County, Georgia, will be the first to introduce a curriculum for AI education to an entire K-12 cohort.76 The North Carolina School of Science and Mathematics, a high school administered by the University of North Carolina system, announced The Ryden Program for Innovation and Leadership in AI to teach high schoolers how to design, use, and understand AI.77 Thus far, the content of these AI curricula can vary. Seckinger High School’s AI curriculum, for example, teaches coding, machine learning, design thinking, data science, and ethics and philosophical reasoning at each level of its K-12 education.78 The Ryden Program emphasizes that students understand “how to merge humanity with machine learning,” and how AI can solve complex problems affecting society.79 Center for Security and Emerging Technology | 24 Teacher Training For public school educators, each state governs the formal licensing and accreditation process. This process can vary widely by state, and even year to year, as states implement reforms or update testing, certification, or license requirements.80 Moreover, private or religious schools usually have more autonomy in setting requirements for educators, which introduces another degree of variation for how AI curricula might be taught.81 These variations could result in uneven standards for educators and therefore affect quality. Ultimately, multiple components factor into preparing educators to teach AI and CS, such as community support groups, training and professional development, or microcredentials or certificates. Therefore, an additional hurdle to competitive teacher training is adequate access to the resources required to teach AI effectively.82 Some states have already spent years integrating CS curriculum in schools and training its educators accordingly.83 Role of Private Sector The private sector is supporting CS and AI integration into primary and secondary education through a number of different initiatives with distinct goals. CSET analysis found over three dozen such programs; all are a mix of private companies, nonprofits, and public-private partnerships that offer AI curricula, learning materials, and conferences for students and educators in the United States.84 For example, Microsoft Philanthropies operates the Technology Education and Literacy in Schools (TEALS) program in roughly 455 high schools with the goal to “build sustainable CS programs” for underserved students and schools, especially rural ones.85 Amazon Future Engineer, a sub organization of Amazon, supports CS curriculum access to teachers in underserved public school districts and claims to reach more than 550,000 K-12 students.86 Center for Security and Emerging Technology | 25 In addition to working AI education into the classroom, the private sector is expanding opportunities to learn about AI outside of the classroom. An estimated three hundred different organizations now offer AI or CS summer camps to K-12 students. Private companies organize a number of these and host them at competitive universities such as Stanford, the Massachusetts Institute of Technology, and Carnegie Mellon. Companies and organizations have also expanded AI learning opportunities to K-12 students by offering after school programs, competitions, and scholarships. Some initiatives explicitly focus on reaching U.S. students that otherwise may not have access to AI education through formal schooling. Girls Who Code, TECHNOLOchicas, and Black Girls CODE are just three of many organizations that specifically service underrepresented groups in CS education to address gender and race disparities in CS education.87 Issues Faced Differences in the design and deployment of AI education across states make it difficult for U.S. schools to consistently define “AI education,” justify investment in AI education with limited resources, and provide adequate training to educators. Whereas some AI education initiatives prioritize CS, programming languages, math, and data science, others emphasize non-technical areas such as societal and ethical impacts of AI applications.88 For example, private company ReadyAI’s programs emphasize the nontechnical components of AI, such as art and multimedia, whereas Microsoft TEALS’ curriculum focuses on technical skills such as Java and Python programming languages. Investment in AI education also becomes a challenge since such a piecemeal approach makes it difficult for educators and education leaders to assess which learning programs are effective. Whichever approach states pursue, the result is a disjointed implementation of AI education. Accordingly, private and nonprofit sector organizations are taking on efforts to coordinate across state lines in conjunction with state governments and professional Center for Security and Emerging Technology | 26 teachers’ associations. Still, most of these efforts function as guides, resources, and suggested standards. Postsecondary AI Education For most U.S. colleges and universities, the closest thing to an AI major remains CS degrees with AI concentrations or specializations. Nevertheless, these postsecondary CS/AI course offerings are growing rapidly. According to the 2021 AI Index Report, the number of AI-related undergraduate courses increased by 102.9 percent and 41.7 percent at the graduate level in the last four years.89 Growth rates in undergraduate and master’s CS degrees were similarly high. Doctorate awards in CS have also increased, by 11 percent since 2015, as have those with a specialization in AI.90 In addition to the increase in AI course offerings, many federal agencies are also prioritizing investment in AI higher education. For example, the 2021 National Defense Authorization Act directs the NSF to fund AI initiatives for higher education, such as fellowships for faculty recruitment in AI, as well as AI curricula, certifications, and other adult learning and retraining programs.91 Undergraduate AI Education At the undergraduate level, integration of AI education varies and often depends on the type of institution or the availability of industry collaboration. In community colleges and technical schools, there are few AI-specific education initiatives with the exception of several recent industry partnerships with companies such as Amazon, Google, and IBM.92 Moreover, some states have better resourced community college CS programs that may be better equipped to integrate AI curricula. For example, over 10 percent of CS associate degree graduates come from Northern Virginia Community College, one of the community colleges partnering with Amazon to offer a cloud computing curriculum.93 Data from the 2021 AI Index Report show that many universities have augmented bachelor’s degrees in CS with a specialization in Center for Security and Emerging Technology | 27 AI or machine learning.94 For example, Stanford University and the California Institute of Technology both offer bachelor's of science degrees in CS with an AI specialization track. The University of Illinois confers a bachelor’s of science in computer engineering with a specialization in AI, robotics, and cybernetics. The University of Minnesota’s AI specialization track includes classes in AI, machine learning, data mining, robotic systems, and computer vision. The University of California San Diego offers a computer science degree with an area of interest in AI through its Jacobs School of Engineering. Moreover, some universities are making AI educational content available to everyone. CalTech and Stanford, for example, are opting to provide AI-related coursework online to both students and the general public. Harvard University and MIT, through their online learning platform edX, also provide both free and low-cost options for AI education. Graduate AI Education In contrast to undergraduate programs, more postgraduate institutions offer AI majors and degrees instead of AI specializations within a CS degree, particularly at the master’s level. For example, Colorado State University, UC San Diego, and Johns Hopkins are just a few universities now offering master’s of science degrees in AI. At the doctorate level, some programs offer a PhD in AI or ML, while many others mirror the undergraduate approach to offering a degree in CS with an AI specialization. Also at the graduate level, a number of globally competitive U.S. universities are leading in AI education and research. Such programs are known for industry and government collaboration: Carnegie Mellon University’s AI and CS programs collaborate with both the private sector and government in AI-related fields such as autonomy, robotics, and 3D printing. Amazon AWS is a sponsor of CalTech’s AI4Science Graduate and Postdoctoral Fellows program, a cross-disciplinary approach to AI research. MIT’s Computer Science and Artificial Intelligence Laboratory is recognized as a hub Center for Security and Emerging Technology | 28 for AI innovation and is home to nearly three dozen research groups, research centers, and communities of research in AI and machine learning.95 As previous CSET research has shown, at the graduate level, the United States is able to attract and train top-tier AI talent.96 Role of Private Sector The private sector is assisting in the development of course design for postsecondary AI education, from certificate and online learning programs to two- and four-year degrees. For example, IBM is collaborating with online learning platform Simplilearn for its certificate in AI. AWS offers machine learning courses on learning platform edX, and Google has a similar partnership with Udacity and offers deep learning courses on its online education platform. Partnerships also exist between industry and degree-granting institutions. For example, Amazon is partnering with the Virginia Community College System and six Virginia universities to teach students cloud computing. In Arizona, the Maricopa County Community College District is working with Intel on Arizona’s first AI certificate and degree program. IBM’s P-TECH program prepares high schoolers and community college students for careers in tech, also offering free online courses in AI and cloud computing. In 2020, the NSF announced its funding of a joint industry-government AI initiative that includes the U.S. Department of Agriculture, the National Institute of Food and Agriculture, the U.S. Department of Homeland Security Science and Technology Directorate, and the U.S. Department of Transportation Federal Highway Administration. The initiative established 18 new AI Research Institutes, which each have a component related to AI education.97 Industry cosponsors, Accenture, Amazon, Google, and Intel, put $160 million toward the institutes, which will support a range of interdisciplinary AI research ranging from cyber infrastructure, biology, ethics, and agriculture.98 Center for Security and Emerging Technology | 29 Implications for U.S. AI Workforce Competitiveness The relative efforts of the United States and China to build a globally competitive domestic AI workforce have potentially major and long-term national security implications. Policymakers at all levels of the U.S. government would be wise to consider these implications when designing future S&T education and workforce policies. We discuss strengths and shortcomings for each country followed by an assessment of implications. China’s Strengths and Shortcomings Over the last decade, China made significant progress in its short-, medium-, and long-term strategic plans for cultivating an S&T workforce. It has fairly effectively embedded AI education and training into each phase of the workforce development pipeline. China’s progress in incorporating AI education at all levels is notable given the large size of its student population relative to the United States. Figure 4 shows 2019 enrollment totals by level of education for China and the United States (2019 graduate totals by level of education are provided in the Appendix). China maintains a cumulative numerical advantage until the graduate level, after which the United States retains a slight lead. This lead disappears when not counting foreign-born students, who comprise about 14 percent of total graduate enrollment. However, as a share of total population in each country, the United States remains far ahead. Center for Security and Emerging Technology | 30 Figure 4. 2019 Total Educational Enrollment, United States and China Source: 2020 Digest of Education Statistics, U.S. National Center for Education Statistics; 2020 Chinese Statistical Yearbook. Although the United States leads in graduate enrollment, when looking at graduate breakouts in STEM, the lead reverses. In fact, China’s lead in advanced STEM education has only increased over the last five years. As shown in Figure 5, China is graduating more doctorate degrees in science and engineering than the United States in STEM. It is clear that advanced degrees in China emphasize these disciplines — 59 percent of doctorates and 41 percent of masters awarded in China in 2019 were in these two fields, compared to about 16 percent of U.S. graduate degrees being in STEM.99 Center for Security and Emerging Technology | 31 Figure 5. China Has More STEM Postsecondary Degrees Than the United States Source: 2020 Digest of Education Statistics, U.S. National Center for Education Statistics; China’s Ministry of Education.100 Center for Security and Emerging Technology | 32 Still, China’s progress is not without challenges. One of its largest hurdles to having a more uniformly educated AI workforce from all parts of the country is the household registration system, or 户口 (hukou), which controls intra-regional labor distributions by assigning legal residents permits based on the head of household’s birthplace.101 Despite ongoing reforms, the hukou system still provides stratified availability of social benefits based on individuals’ registration, rather than where they live.102 This makes it very difficult for youths with rural hukou to obtain education beyond the compulsory level.103 Inequities stemming from the urban-rural divide also bleed into China’s AI education pipeline. For example, while well-resourced primary and secondary schools in Beijing can afford to build 3D printing and robotics labs, poorer parts of the country may not have such advantages from an early age, or have adequately trained teachers. Educational access and quality in China largely depend on location and socioeconomic status—urban residents are far more likely to have positive intergenerational mobility versus those from rural areas. Despite national reforms, rural populations experienced “nonexistent” effects.104 The main drivers were poor policy enforcement and insufficient educational reforms.105 MOE data from 2020 also indicates additional challenges. Rural primary schools are closing and enrollment is declining.106 An underlying reason behind these numbers is the continual migration patterns of rural hukou holders into urban areas, leading to closure of village schools. Further, the MOE may have mandated AI education in high schools from fall 2018 onwards, but it is no trivial task to implement education of uniform quality or properly enforce quality control. If students in less well-resourced areas only have the means to memorize patriotic AI propaganda and do not receive the same caliber of AI education, it could exacerbate a deepening digital divide and economic inequality.107 It also remains to be seen whether China’s AI education will spur creative thinking and Center for Security and Emerging Technology | 33 teamwork, or plug students into carrying out repetitive tasks to design things such as “simple algorithms.”108 China also faces issues in teacher quality. There is a higher student to teacher ratio in poorer areas, and loss of teacher talent to more developed areas is common. Since poorer parts of the country lack funds to hire and properly train teachers, there are fewer, less-qualified teachers per student.109 However, the Chinese government has implemented policies such as recruiting graduates from universities to work for three years in rural schools in central or western China, and required teachers in large and medium cities to regularly work for short periods in rural schools.110 Rural teachers can also apply for continued learning and training at teacher training institutions.111 U.S. Strengths and Shortcomings U.S. efforts to embed AI education into its training pipeline have been mixed. A few noteworthy K-12 AI education efforts stand out, such as Gwinnett County’s first U.S. public AI high school and an increasing number of schools implementing CS curricula. Moreover, data shows that the NSF-funded 2016 Advanced Placement CS Principles (AP CSP) course brought in more diverse groups of students studying CS in high school and postsecondary education.112 Yet, across the U.S. K-12 education system as a whole, integration of AI education is lopsided as inequities in quality and access persist. At the postsecondary level, the United States shows more progress. NSF’s multi-agency program solicitation for 18 new AI institutes reflects strengthened collaboration between the government and private industry.113 Additionally, large public universities and private universities generally have more resources and greater motivations to remain competitive in providing high-quality education. A number of institutions have either supplemented CS education with specializations in AI or added AI degrees and majors. Still, while this is encouraging, we note this does not make up for what is lost in early K-12 exposure or for Center for Security and Emerging Technology | 34 youth who do not attend college. For youth that do attend college, research shows many have already decided against pursuing STEM fields before they even arrive.114 An additional challenge for AI education in the United States is the ability to recruit teachers with CS or AI expertise since industry is usually a more attractive option.115 This creates quality gaps when it comes to teaching CS or AI curricula in U.S. classrooms. Moreover, in certain school districts, paid teacher training is not offered for CS or AI, and even when accessible, CSTA has described the CS certification process as “confused, disparate and sometimes absurd.”116 Additionally, in lower-income and rural school districts, educators may already be teaching multiple subjects.117 Therefore, tasking those educators with an additional CS or AI course may not be feasible. Implications China has benefited from its centralized, systematic approach to implementing AI education at all levels. Provincial education bureaus reward primary and secondary schools for offering curricula on AI basics, the MOE mandated AI curriculum at the high school level, and the MOE approved 345 universities to offer the standardized AI major, China’s most popular new major today. At least 34 universities also have AI institutes, which provide undergraduate and graduate training and research. At the postgraduate level, the MOE called for increased support for AI students through company and venture capital funding, and for companies and universities to partner on training and solving industry needs. Coupled with a growing numerical advantage over the United States in STEM graduate degrees, China is well-equipped to develop a robust, medium-term AI workforce pipeline. However, one major hurdle to a longer-term pipeline to closely observe going forward is China’s rapidly aging population, which is not being replenished by sufficient births despite the abolished one-child policy.118 In July, China even moved to end all restrictions and fines for surpassing the three-child quota, effectively allowing limitless births.119 Center for Security and Emerging Technology | 35 China’s progress may also help its military-civil fusion (MCF) efforts, which could undermine U.S. national security if these efforts increase Chinese military competitiveness. As indicated in Figure 3 above, half of the 42 Double First Class universities offer the AI major, and 10 offer both the major and have their own AI institute. The DFC plan is designed to mesh these universities into the MCF R&D pipeline, and therefore originate innovations—including in AI—that help both the military and civilian sectors.120 Further, all of the Seven Sons universities offer the AI major, and about half have institutes. Previous CSET research has found three-fourths of graduates recruited by Chinese defense SOEs are from the Seven Sons, raising concerns that those equipped with AI skills and capabilities are directly entering the defense workforce.121 The United States benefits from greater freedom and flexibility in its education and training system. At its best, U.S. states and local governments have the freedom to innovate in education. Flexibility creates opportunities for integrating innovative curricula designs, new approaches in pedagogy, and experimentation in how students interact with and learn about AI. Similarly, braided funding models draw from a number of stakeholders, enabling schools to offer a greater variety of learning opportunities and experiences inside and outside of the classroom. This diversity contributes to the engine of innovation that has long been a hallmark of American society. However, our analysis suggests these strengths are also a weakness when it comes to quickly leveling up the U.S. workforce for AI and other emerging technologies. At its worst, differences in school districts’ funding create educational disparities, while funding structures limit the ability for long-term planning. Not all states can take advantage of the flexibility inherent in current governance and funding structures. Some states have more resources than others, better access to quality teachers, and different prioritization of AI and CS education. Flexibility can be a strategic advantage only if school districts use it, especially with a focus on quality and equity in opportunity. As an additional challenge, braided or fractured funding initiatives are difficult to Center for Security and Emerging Technology | 36 track, evaluate, and scale, limiting the reach of any programs that are particularly successful or innovative. These realities are exacerbated by a U.S. education and training system that relies on piecemeal or localized initiatives, especially when it comes to private sector involvement. For example, in contrast to China’s MOE directives that directly enlist the private sector in AI education, including in defense-relevant applications, the United States is far less systematic and has many one-off arrangements. While some are an advantage when they target and serve demographic groups that may be overlooked by public education, too many AI education initiatives could become a potential downside when it results in disunity for AI education standards. However, one of the issues the United States can claim as an enduring advantage is greater ethical and academic freedom. China’s high school curriculum reforms from January 2018 emphasized following relevant laws and ethical concepts. Despite China’s increasing emphasis on ethics, such as through the 2019 Beijing AI Principles, the previously mentioned SenseTime high school AI textbook does not include ethical quagmires such as the Trolley Problem, fake news, data privacy, or censorship.122 It is also possible that given these companies are involved in controversial fields of AI such as surveillance, they may also bias how students view AI.123 If students receive a limited view of how AI can be used in service of the state’s security interests and are discouraged from criticizing these uses as unethical, that carries implications for academic freedom and China’s status as a world leader in AI.124 Critics of the Chinese education system have argued that even with generous funding, greater academic freedoms and university autonomy will be needed to establish true world-class universities.125 The United States is actively encouraging private industry, non-profits, and academia to prioritize AI safety and ethics. For example, the NSF AI Research Institutes are designed to ensure AI is developed transparently, reliably, and in line with American values.126 Moreover, in 2021, the White House issued guidance to Center for Security and Emerging Technology | 37 federal agencies on principles and policies to ensure AI protects privacy and civil rights.127 Ultimately, China’s approach to AI education is incompatible with the values, design, and structure of the U.S. education system. The U.S. would instead be best served to leverage the strengths—and mitigate the weakness—of its education system to produce the AI workforce of the future. As this report shows, a key factor in U.S. AI workforce competitiveness will be how it addresses these challenges to grow and sustain the AI talent pipeline going forward. It is not a foregone conclusion that the United States cannot compete in cultivating a leading AI workforce and we advise caution with that interpretation. For example, some critics of the U.S. education system lament that the United States is only at an inherent disadvantage. In this view, China will out-educate and train AI talent relative to the United States given its ability to mandate curricula and education offerings, and given its ability to fund and execute national education strategic plans. Meanwhile, the United States is stuck, unable to agree on core curriculum, equitably fund access to quality education, and perform well on international mathematics assessments let alone compete in AI education. Several key advantages for the United States should factor into any discussion of system trade-offs. The potential value of the pockets of innovation in AI education throughout the U.S. education system are large, especially when appropriately targeted. This would not be possible in a more centralized system. At the graduate level, which educates a critical segment of the AI workforce, the United States remains in first place. The United States also remains the destination of choice for top-tier foreign-born AI talent working towards a doctorate. Previous CSET research showed that despite China’s two decades of talent recruitment drives, nationals either do not return or do so part-time, mostly due to workplace politics.128 Meanwhile, 91 percent of top Chinese students with U.S. AI doctorates are still in the United States five years after graduating.129 Center for Security and Emerging Technology | 38 Conclusion Both the United States and China have made significant progress in adopting AI education over the last five years. Their efforts show ambition in growing and cultivating a globally competitive AI workforce; however, they also reveal the structural challenges resulting from each country’s education systems that potentially hinder widespread implementation of quality AI education. This brief sheds new light on these efforts by offering a comparative assessment. We discuss the structural characteristics of the two education systems and the resulting barriers or strategic advantages the two systems lend to the adoption of AI education. China’s efforts to increase AI education at all levels bears important implications. Standardized curricula, centralized plans for implementing AI education, and explicitly calling upon companies to help universities all grant China higher likelihood of developing a robust talent pipeline for solving AI challenges. In addition, Western companies such as Microsoft Research Asia have worked with some Seven Sons and other Chinese universities through formalized partnerships involving curricula development. The United States is working to integrate AI education into its classrooms, but the decentralized nature of its education system means it has a more piecemeal approach. Moreover, the United States is still heavily focused on tackling CS education over AI. In the same year that China laid out its AI education goals in its New Generation AI Development Plan, the United States’ main CS teacher’s association released new curriculum standards for CS at the primary and secondary education levels.130 Recent years have seen a flurry of initiatives, programs, and private companies emerge in the AI education space, but there is neither a cogent vision nor a cohesive national standard guiding the focus of such efforts. However, this does not mean AI education and training in the United States is inherently at a disadvantage. A greater degree of Center for Security and Emerging Technology | 39 educational autonomy in the United States gives breathing room for experimentation, creativity, and innovation among U.S. companies and educational institutions. The challenge is for these experiments and initiatives to be evaluated and scaled successfully and inclusively throughout the entire education system. To leverage this advantage, U.S. states will need to engage in targeted and coordinated efforts with unprecedented levels of support for long-term AI educational and workforce policies. Currently, each presidential administration and congressional legislative session ushers in new funding priorities coupled with new visions of federal roles and responsibilities. Similarly, each state has a different vision for K-12 education and associated curriculum, along with different resources and ability to make major changes. U.S. AI education efforts will be the most effective with consistency over time, unaffected by the election cycle, with assured state and local access to the requisite resources for schools, educators, and students. Ultimately, if there is no consensus between states or at the federal level on the best way forward in AI curriculum, it is unclear who is determining what “successful” or “comprehensive” AI education looks like. A similar problem already afflicts CS education,131 and resulting achievement gaps from inconsistent curriculum could hinder U.S. AI workforce competitiveness. The U.S. education system risks continuing along the same path for AI education, that misses certain demographics and low-income or rural schools.132 Our assessment suggests that to effectively face the competition presented by China for AI leadership, the United States needs to address some of the challenges inherent with its decentralized system and approach. We also suggest future U.S. S&T education and workforce policy should be considered in a globally competitive context, instead of viewing it myopically as a domestic challenge. That consideration includes recognizing and capitalizing upon the United States’ enduring advantage in attracting elite talent, including Chinese nationals. While this brief does not make policy recommendations for the U.S. education system, the Center for Security and Emerging Technology | 40 forthcoming CSET report, “U.S. AI Workforce: Policy Recommendations,” will. It is no small task to take a national vision for AI education and implement it effectively in thousands of school districts across the country. This will require collaboration and coordination across federal, state, and local levels and appropriate resourcing. If not made a priority, AI education shortcomings could have long lasting consequences for the global competitiveness of the U.S. AI workforce. Center for Security and Emerging Technology | 41 Authors Dahlia Peterson and Kayla Goode are research analysts at CSET, where Diana Gehlhaus is a research fellow. Acknowledgments For feedback and assistance, we would like to thank Igor Mikolic- Torreira, Remco Zwetsloot, Anna Puglisi, Dakota Cary, Ryan Fedasiuk, Charles Goldman, Sidney D’Mello, and Denis Simon. We would also like to thank Melissa Deng, Shelton Fitch, and Matt Mahoney for their editorial support, and Farhana Hossain for support with visualizations. © 2021 by the Center for Security and Emerging Technology. This work is licensed under a Creative Commons Attribution -Non Commercial 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc/4.0/. Document Identifier: doi: 10.51593/ 20210005 Center for Security and Emerging Technology | 42 Endnotes 1 Center on International Education Benchmarking, “Shanghai-China: Governance and Accountability,” National Center on Education and the Economy, https://ncee.org/what-we-do/center-on-international-education-benchmarking/top-performing-countries/shanghai-china/shanghai-china-system-and-school-organization. 2 Center on International Education Benchmarking, “Shanghai-China.” 3 Organization for Economic Cooperation and Development, Education in China: A Snapshot (Paris: OECD, 2016), https://www.oecd.org/education/Education-in-China-a-snapshot.pdf, 24. 4 Center on International Education Benchmarking, “Shanghai-China.” 5 Center on International Education Benchmarking, “Shanghai-China.” 6 OECD, Education in China. 7 U.S. Department of Education, “An Overview of the U.S. Department of Education,” September 2010, https://www2.ed.gov/about/overview/focus/what.html. 8 U.S. Department of Education, “The Federal Role in Education,” accessed May 25, 2017, https://www2.ed.gov/about/overview/fed/role.html. 9 Melanie Hanson, “Financial Aid Statistics” (EducationData.org, November 28, 2020), https://educationdata.org/financial-aid-statistics. 10 Abigail Johnson Hess, “U.S. student debt has increased by more than 100% over the past 10 years,” CNBC, December 22, 2020, https://www.cnbc.com/2020/12/22/us-student-debt-has-increased-by-more-than-100percent-over-past-10-years.html. 11 Bryce Loo, “Education in the United States of America” (World Education News and Reviews, June 12, 2018), https://wenr.wes.org/2018/06/education-in-the-united-states-of-america. 12 “What is Project 211 and Project 985 universities,” Study in China, http://usa.ciss.org.cn/Admissions/7017; Shengbing Li, “From 985 to World Class 2.0: China's Strategic Move,” Inside Higher Ed, February 7, 2018, https://www.insidehighered.com/blogs/world-view/985-world-class-20-chinas-strategic-move. Center for Security and Emerging Technology | 43 13 Department of Education, Skills and Employment, “Double First-Class university and discipline list policy update,” Government of Australia, December 14, 2017, https://internationaleducation.gov.au/international-network/china/PolicyUpdates-China/Pages/Double-First-Class-university-and-discipline-list-policy-update.aspx. 14 Li, “From 985 to World Class 2.0.” 15 The awarding body in both Beijing and Shandong Province is the Education Branch of the China Information Industry Association (CIIA), a nonprofit organization supervised by the National Development and Reform Commission (NDRC) and the Ministry of Civil Affairs. “协会简介” [“Association Introduction”], 中国信息化网 [China Information Network], http://www.ciia.org.cn/xkjj. 16 “怀柔二中成为北京市首批“全国青少年人工智能创新人才培养基地” [“Huairou No. 2 Middle School becomes Beijing’s first of ‘National Youth Artificial Intelligence Innovative Talent Training Base’”], 怀柔区教委 [Huairou District Board of Education], November 13, 2020, https://archive.ph/gab9g; “全省仅十所!潍坊市一小学获评“全国青少年人工智能创新人才培养基地”称号,” [“Only ten in the province! Weifang No. 1 Primary School awarded ‘National Youth Artificial Intelligence Innovative Talent Training Base’”], 海报新闻 [Poster News], December 17, 2020, https://www.163.com/dy/article/FU288P1E055061FK.html. 17 Poster News, “Only ten in the province!” 18 Poster News, “Only ten in the province!”; “北京:“十四五”计划打造百所人工智能教育实验校” [“Beijing: "The 14th Five-Year Plan" plans to build 100 artificial intelligence education experimental schools”], 中国教育新闻网 [China Education News], November 9, 2020, https://www.sohu.com/a/430696943\_243614. 19 China Education News, “14th Five-Year Plan.” 20 China Education News, “14th Five-Year Plan”; “全国幼儿及青少年人工智能素养等级测评活动启动” [“National Children's and Adolescents' Artificial Intelligence Literacy Level Evaluation Activity Launched”], 中国教育新闻网 [China Education News], July 19, 2020, https://archive.ph/g2BTe; Poster News, “Only ten in the province!”; Huairou District Board of Education, “Huairou No. 2 Middle School.” 21 “教育部关于印发《普通高中课程方案和语文等学科课程标准(2017年版)》的通知” [“The Ministry of Education Issues the ‘General High School Curriculum Program and Curriculum Standards for Chinese Language, etc. (2017 Edition)’”], Chinese Ministry of Education, December 29, 2017, https://archive.ph/RirvZ. Center for Security and Emerging Technology | 44 22 “人工智能进入全国高中新课标, 2018秋季学期执行,” [“Artificial intelligence enters the new national high school curriculum standard, implemented in 2018 fall semester”], 新智元 [AI Era], January 22, 2018, https://web.archive.org/web/20210311124354/https://baijiahao.baidu.com/s?id=1590224076123918695𝔴=spider&for=pc. 23 AI Era, “Artificial intelligence.” 24 AI Era, “Artificial intelligence.” 25 “2019全国中小学人工智能教育大会圆满收官” [“2019 National Primary and Secondary School Artificial Intelligence Education Conference successfully concludes”], Xinhua, October 29, 2019, https://archive.vn/FPnW4. 26 Xinhua, “2019 National.” 27 “教育部最新发布:180所高校本科新增AI课程,138所高校新增大数据课程” [“The Ministry of Education's latest release: 180 colleges and universities have added AI programs for undergraduates, and 138 colleges and universities have added big data programs”] Tencent Cloud, March 12, 2020, https://web.archive.org/web/20201217144740/https://cloud.tencent.com/developer/article/1597694. 28 Yi-Ling Liu, “China’s AI Dreams Aren’t for Everyone,” Foreign Policy, August 13, 2019, https://foreignpolicy.com/2019/08/13/china-artificial-intelligence-dreams-arent-for-everyone-data-privacy-economic-inequality. 29 Tencent Cloud, “Ministry of Education”; Ma Si, “UBTech set to ramp up overseas presence,” China Daily, December 12, 2019, https://archive.ph/VhBRE. 30 刘俊寰 [Liu Junhuan], “高中生福利!清华姚班首发高中AI教材,姚期智院士主编,秋季出版,” [“High school students benefit! Tsinghua’s first high school AI textbook — edited by Academician Yao Qizhi — to be published in the fall semester”], 大数据文摘 [Big Data Digest], January 7, 2020, https://archive.ph/V55f4. 31 “首家中职人工智能专业落地——中职教育创新专业建设新思路,” [“Secondary vocational education launches its first artificial intelligence specialization, launching new lines of thinking for innovative specialization construction”], 湖北AI智慧教育 [Hubei AI Smart Education], July 15, 2019, https://archive.ph/CIIUS. 32 Hubei AI Smart Education, “Secondary vocational.” Center for Security and Emerging Technology | 45 33 “中小学人工智能教育发展现况” [“Current status of artificial intelligence education in primary and secondary schools”] 面对面互动教育 [Face-to-Face Interactive Education], January 27, 2020, https://archive.ph/xtEoj. 34 Face-to-Face Interactive Education, “Current status.” 35 AI Era, “Artificial intelligence”; Zou Shuo, “AI now most favored major at universities,” China Daily, March 3, 2021, https://archive.ph/jKNYz. 36 “China’s AI Schools Are Accepting Applications: Here’s a List,” Synced Review, June 30, 2018, https://archive.ph/9I58a. 37 The Seven Sons include the Beijing Institute of Technology, Beijing University of Aeronautics and Astronautics (Beihang), the Harbin Institute of Technology, Harbin Engineering University, Northwestern Polytechnical University, Nanjing University of Aeronautics and Astronautics and Nanjing University of Science and Technology. See Ryan Fedasiuk and Emily Weinstein, “Universities and the Chinese Defense Technology Workforce” (Center for Security and Emerging Technology, December 2020), https://cset.georgetown.edu/wp-content/uploads/CSET-Universities-and-the-Chinese-Defense-Technology-Workforce-1.pdf. 38 “大盘点:中国33所高校人工智能研究院” [“Inventory: China's 33 universities with artificial intelligence research institutes”], 新智元 [AI Era], November 6, 2018, https://archive.vn/CdVRm. 39 Example from Dalian University of Technology’s School of AI (focused on graduate research) at “学院概况” [“About Our Institute”], Dalian University of Technology, http://ai.dlut.edu.cn/xygk/xygk.htm. 40 Synced Review, “China’s AI Schools.” 41 Synced Review, “China’s AI Schools”; “人工智能学院本科招生专业介绍,” [“Introduction to Undergraduate Admissions of the School of Artificial Intelligence”], Xidian University, updated May 14, 2020, https://archive.ph/G0X44. 42 Live CSET tracker forthcoming. Center for Security and Emerging Technology | 46 43 Chinese Ministry of Education, “教育部关于公布2018年度普通高等学校本科专业备案和审批结果的通知” [“MOE Announces Filing and Approval Results for Undergraduate Majors at Higher Education Institutions in 2018”], March 21, 2019, https://archive.ph/wimIn; 李帅飞 [Li Shifei], “全面出击!我国 180 家高校新增 AI 本科专业,研究生扩招也瞄准 AI” [“Comprehensive attack! 180 colleges and universities in my country add AI undergraduate majors, and graduate enrollment also targets AI”], Leiphone, March 4, 2020, https://web.archive.org/web/20201026183948/https://www.leiphone.com/news/202003/pV0Sijb1BTNmkCcK.html. 44 Chinese Ministry of Education, “教育部关于公布2019年度普通高等学校本科专业备案和审批结果的通知” [“MOE Announces Filing and Approval Results for Undergraduate Majors at Higher Education Institutions in 2019”], February 21, 2020, https://archive.vn/EHX5c; searchable list at “教育部:2019年度普通高等学校本科专业备案和审批结果,” 高考频道 [Gaokao Channel], April 26, 2020, https://archive.ph/og8pU. 45 Chinese Ministry of Education, “教育部关于公布2020年度普通高等学校本科专业备案和审批结果的通知,” [“MOE Announces Filing and Approval Results for Undergraduate Majors at Higher Education Institutions in 2020,”] February 10, 2021, https://archive.ph/IOiQH. 46 Zou, “AI now.” 47 “想来清华学人工智能吗?这份攻略请收下” [“Want to come to Tsinghua University to learn artificial intelligence? Please read this guide”], 清小华 [Qing Xiaohua], May 18, 2019, https://web.archive.org/web/20201209023245/https://www.sohu.com/a/314939565\_397252. 48 Qing, “Tsinghua.” 49 “清华大学成立人工智能学堂班” [“Tsinghua University establishes artificial intelligence class”], 中国教育报 [China Education News], May 20, 2019, https://archive.vn/UKOXP; Xiaozhe Yang, “Accelerated Move for AI Education in China,” ECNU Review of Education 2, no. 3 (September 2019). 50 China Education News, “Tsinghua University establishes.” 51 C9 contains the universities Tsinghua, Peking, Fudan, Shanghai Jiaotong, Nanjing, Zhejiang, Xi'an Jiaotong, University of Science and Technology of China, and Harbin Institute of Technology. “China's Ivy League: C9 League,” People’s Daily, http://en.people.cn/203691/7822275.html. Center for Security and Emerging Technology | 47 52 “人工智能专业(080717T)学习路径及人才培养模型” [“Learning path and talent training model for AI major (080717T)”], 广州跨象乘云软件技术 [Guangzhou Kuaxiang Chengyun Software Technology], January 25, 2020, https://archive.ph/8Hpo0. 53 Yang, “Accelerated Move.” 54 Tencent Cloud, “Ministry of Education”; 张倩 and 杜伟 [Zhang Qian and Du Wei], “180所高校获批新增人工智能专业,你建议报考吗?” [“180 colleges and universities have been approved for the new AI major. Do you recommend applying for it?”], Synced Review, March 4, 2020, https://web.archive.org/web/20201217141025/https://www.jiqizhixin.com/articles/2020-03-04-5. 55 “信息技术新工科产学研联盟” [“Information Technology New Engineering Industry-University-Research Alliance”], Baidu Encyclopedia, https://baike.baidu.com/item/%E4%BF%A1%E6%81%AF%E6%8A%80%E6%9C%AF%E6%96%B0%E5%B7%A5%E7%A7%91%E4%BA%A7%E5%AD%A6%E7%A0%94%E8%81%94%E7%9B%9F/23176171?fr=aladdin. 56 Baidu Encyclopedia, “Information Technology.” 57 Baidu Encyclopedia, “Information Technology”; “微软亚洲研究院荣获新工科联盟“2018 年新工科建设卓越贡献企业” [“Microsoft Research Asia won the ‘2018 Outstanding Enterprise Contribution to New Engineering Curriculum Construction’ award from the New Engineering Alliance”], Microsoft Research Asia, April 2, 2019, https://archive.vn/A62Uu. 58 Microsoft Research Asia, “Microsoft Research Asia won.” 59 Microsoft Research Asia, “Microsoft Research Asia won”; “比酷暑更炙热:人工智能师资研讨会火热进行中” [“Hotter than sweltering: The AI teachers seminar is in full swing”], 微软学术合作 [Microsoft Academic Collaboration], July 27, 2018, https://archive.vn/G5HPx. 60 Microsoft Research Asia, “Microsoft Research Asia won.” Further examples of Western companies’ collaboration in Ryan Fedasiuk and Emily Weinstein, “Universities and the Chinese Defense Technology Workforce” (Center for Security and Emerging Technology, December 2020), https://cset.georgetown.edu/wp-content/uploads/CSET-Universities-and-the-Chinese-Defense-Technology-Workforce-1.pdf. Center for Security and Emerging Technology | 48 61 Meng Jing and Sarah Dai, “China recruits Baidu, Alibaba and Tencent to AI ‘national team’,” South China Morning Post, November 21, 2017, https://www.scmp.com/tech/china-tech/article/2120913/china-recruits-baidu-alibaba-and-tencent-ai-national-team. 62 “我校与百度公司签署人工智能人才培养合作协议” [“Our school and Baidu signed an artificial intelligence talent training cooperation agreement”], 南开大学新闻网 [Nankai University News Network], November 25, 2020, https://archive.ph/HiTXR. Request authors for further sources. 63 “飞桨AI Studio” [“Paddle Paddle AI Studio”], https://aistudio.baidu.com/aistudio/index. 64 Chinese Ministry of Education, the National Development and Reform Commission, and the Ministry of Finance, “关于“双一流”建设高校促进学科融合 加快人工智能领域研究生培养的若干意见” [“Notice on the Publication of "Certain Opinions on Promoting Curricula Merging at 'Double First-Class' Institutes of Higher Education and on Accelerating the Cultivation of Graduate Students in the AI Field”], January 21, 2020, https://archive.vn/f7iGx. Translation at: https://cset.georgetown.edu/wp-content/uploads/t0257\_more\_AI\_grad\_students\_EN.pdf. 65 Chinese Ministry of Education, the National Development and Reform Commission, and the Ministry of Finance, “Notice.” 66 “人工智能学院第三批“申请考核制”博士招生公告” [“Announcement of the third batch of doctoral admissions under the "application and examination system”], 研究生院招生办公室 [Nankai University Graduate School Admissions Office], May 15, 2020, https://archive.ph/1G9QE. 67 Code.org, CSTA, and ECEP Alliance, “2020 State of Computer Science Education: Illuminating Disparities” (Code.org, 2020), 6, https://advocacy.code.org/2020\_state\_of\_cs.pdf. 68 “K–12 Computer Science Policy and Implementation in States,” Code.org, accessed May 25, 2021, https://code.org/advocacy/landscape.pdf. 69 Code.org, CSTA, and ECEP Alliance, “2020 State of Computer Science Education,” 2. 70 Code.org, “Summary of source data for Code.org infographics, and stats,” accessed May 25, 2021, https://docs.google.com/document/d/1gySkItxiJn\_vwb8HIIKNXqen184mRtzDX12cux0ZgZk/pub. Center for Security and Emerging Technology | 49 71 Computer Science Teachers Association, “CSTA K-12 Computer Science Standards” (CSTA, 2017), https://www.csteachers.org/page/about-csta-s-k-12-nbsp-standards. 72 “CSTA / ISTE Standards for Computer Science Educators,” CSTA and ISTE, 2018, https://docs.google.com/document/d/e/2PACX-1vSliI5aJWAvEtfo8Ers8eYJ14DE-C0FbhYqEduWYKW34OWRR4zvm4G1PcMFIQdIYcNrebzMozSRSvM\_/pub. 73 “Computer Science in Science,” Code.org, accessed May 25, 2021, https://code.org/educate/science. Code.org also crosswalks their standards with the Next Generation Science Standards to better integrate CS education standards with STEM education standards. 74 “K12 Computer Science Framework,” K12CS.org, accessed May 25, 2021, https://k12cs.org. 75 “Working Group and Advisory Board Members,” AI4K12.org, accessed May 25, 2021, https://ai4k12.org/working-group-and-advisory-board-members. 76 Taylor Denman, “5 things to know about Seckinger’s AI-themed cluster,” Gwinnett Daily Post, February 26, 2020, https://www.gwinnettdailypost.com/local/5-things-to-know-about-seckingers-ai-themed-cluster/collection\_7c34838e-58cb-11ea-82c5-4bfb22c0a4d0.html#1. 77 “Record $2-million gift launches AI program at NCSSM,” North Carolina School of Science and Mathematics, December 6, 2018, https://www.ncssm.edu/news/2018/12/06/record-2-million-gift-launches-ai-program-at-ncssm. 78 Denman, “5 things to know about Seckinger’s AI-themed cluster.” 79 North Carolina School of Science and Mathematics, “Record $2-million gift.” 80 National Research Council, Testing Teacher Candidates: The Role of Licensure Tests in Improving Teacher Quality (Washington, DC: The National Academies Press, 2001), 34-69, https://www.nap.edu/read/10090/chapter/5#69. Center for Security and Emerging Technology | 50 81 OECD, Public and Private Schools: How Management and Funding Relate to their Socio-economic Profile (Paris: OECD Publishing, 2012), 7, http://dx.doi.org/10.1787/9789264175006-en; National Center for Education Statistics, Public and private schools: How do they differ? (Findings from the Condition of education 1997) (Washington, DC: U.S. Department of Education, 1997), https://nces.ed.gov/pubs97/97983.pdf. 82 OECD, Equity and Quality in Education: Supporting Disadvantaged Students and Schools (Paris: OECD Publishing, 2012), 113-119, 146-148, https://www.oecd.org/education/school/50293148.pdf. 83 Brandon Paykamian, “Rhode Island’s Compsci Education Project: How Is It Going?” GovTech.com, March 5, 2021, https://www.govtech.com/education/k-12/rhode-islands-compsci-education-project-how-is-it-going.html; Code.org, CSTA, and ECEP Alliance, “2020 State of CS Education,” 89-90. 84 Data from forthcoming CSET AI Education Catalog. 85 “TEALS Program,” Microsoft, accessed May 25, 2021, https://www.microsoft.com/en-us/teals/schools. 86 “Amazon Future Engineer is more than doubling its reach,” Amazon, September 15, 2020, https://www.aboutamazon.com/news/community/amazon-future-engineer-is-more-than-doubling-its-reach. 87 A 2016 Gallup report shows black and 25ispanic students are less likely than white students to have access to CS classes or use a computer at home. Additionally, another Gallup report shows “girls are only one-third as likely as boys to say they expect to pursue a career in computer science and only half as likely to be interested in learning computer science.” (“Diversity Gaps in Computer Science: Exploring the Underrepresentation of Girls, Blacks and Hispanics” (Google and Gallup, 2016), https://services.google.com/fh/files/misc/diversity-gaps-in-computer-science-report.pdf) 88 Data from forthcoming CSET AI Education Catalog. 89 Zhang, “AI Index Report,” 5. 90 Shana Lynch, “State of AI in 10 Charts,” Stanford University Human-Centered Artificial Intelligence, March 3, 2021, https://hai.stanford.edu/news/state-ai-10-charts. Center for Security and Emerging Technology | 51 Overall, STEM graduates have been on the rise, with a 23 percent increase in STEM bachelor’s degrees, 23 percent increase in STEM master’s degrees, and 6 percent increase in STEM PhDs from 2015 to 2019. For the number and percentage distribution of STEM degrees and certificates conferred by postsecondary institutions and selected student characteristics, see data from the National Center for Education Statistics: https://nces.ed.gov/programs/digest/d20/tables/dt20\_318.45.asp. 91 “Summary of AI Provisions from the National Defense Authorization Act 2021,” Stanford University Human-Centered Artificial Intelligence, accessed May 25, 2021, https://hai.stanford.edu/policy/policy-resources/summary-ai-provisions-national-defense-authorization-act-2021. 92 A forthcoming CSET brief on the latent potential of community colleges will explore this in greater detail. 93 “Computer Science: Institutions,” Datausa.io, accessed August 12, 2021, https://datausa.io/profile/cip/computer-science-110701#tmap\_institutions\_grads. 94 Daniel Zhang, “Chapter 4: AI Education” in “Artificial Intelligence Index Report 2021” (Stanford Institute for Human-Centered Artificial Intelligence, 2021), 8, https://aiindex.stanford.edu/wp-content/uploads/2021/03/2021-AI-Index-Report-\_Chapter-4.pdf. 95 “Research: Research Areas,” MIT Computer Science and Artificial Intelligence Lab, accessed August 12, 2021, https://www.csail.mit.edu/research?f[0]=research\_area:9. 96 Catherine Aiken, James Dunham, and Remco Zwetsloot, “Career Preferences of AI Talent” (Center for Security and Emerging Technology, June 2020), cset.georgetown.edu/research/career-preferences-of-ai-talent/. 97 Brandi Vincent, “NSF Adds 11 New AI Research Institutes to Its Collaborative, Nationwide Network,” August 4, 2021, NextGov.com, https://www.nextgov.com/emerging-tech/2021/08/nsf-adds-11-new-ai-research-institutes-its-collaborative-nationwide-network/184293/. 98 National Science Foundation, National Artificial Intelligence (AI) Research Institutes (Washington, DC: National Science Foundation, 2020), https://www.nsf.gov/funding/pgm\_summ.jsp?pims\_id=505686; Dave Nyczepir, “NSF’s new AI Institutes will leverage industry resources to expand network,” FedScoop, September 10, 2020, https://www.fedscoop.com/nsf-ai-institutes-second-solicitation/. Center for Security and Emerging Technology | 52 99 For the United States, STEM includes biological and biomedical sciences, computer and information sciences, engineering and engineering technologies, mathematics and statistics, and physical sciences and science technologies. Our definition for Chinese STEM is based on a list from the MOE that provides science and engineering breakdowns for Chinese graduate degrees. See China Academic Degrees and Graduate Education Information, “学科、专业目录” [“Catalog of Academic Disciplines and Majors”], https://archive.ph/iAP71. 100 Note: U.S. doctorate totals include JD, MD, and EdD. In China, medical degrees are awarded at both the Master’s and MD level, making an exact comparison more difficult. 101 Shu Zhou and Monit Cheung, “Hukou system effects on migrant children’s education in China: Learning from past disparities,” International Social Work, 60, no. 6 (August 2017). 102 Zhou and Cheung, “Hukou system effects on migrant children’s education in China.” 103 Martin King Whyte, “China’s Hukou System: How an Engine of Development Has Become a Major Obstacle,” China-US Focus, April 24, 2019, https://www.chinausfocus.com/society-culture/chinas-hukou-system-how-an-engine-of-development-has-become-a-major-obstacle. 104 Yumei Guo, Yang Song, and Qianmiao Chen, “Impacts of education policies on intergenerational education mobility in China,” China Economic Review 55 (June 2019): 124-142. 105 Guo et al., “Impacts of education policies.” 106 Chinese Ministry of Education, “中国教育概况: 2019年全国教育事业发展情况” [“General Development Summary of Chinese Education in 2019”], August 31, 2020, https://archive.ph/kiVkS. 107 Liu, “China’s AI Dreams.” 108 Liu, “China’s AI Dreams.” 109 China Power Project, “How Does Education in China Compare with Other Countries?” (Center for Strategic and International Studies, November 15, 2016, updated August 26, 2020), https://chinapower.csis.org/education-in-china. 110 OECD, Education in China. 111 OECD, Education in China. Center for Security and Emerging Technology | 53 112 The College Board, “New Data: AP Computer Science Principles Course Bringing More Diverse Set Of Students Into Computer Science Pipeline,” PRNewswire.com, December 10, 2020, https://www.prnewswire.com/news-releases/new-data-ap-computer-science-principles-course-bringing-more-diverse-set-of-students-into-computer-science-pipeline-301190501.html. 113 National Science Foundation, “National Artificial Intelligence (AI) Research Institutes,” accessed May 25, 2021, https://www.nsf.gov/funding/pgm\_summ.jsp?pims\_id=505686. 114 Brian Kennedy, Meg Hefferon, and Cary Funk, “Half of Americans think young people don’t pursue STEM because it is too hard,” Pew Research Center, January 17, 2018, https://www.pewresearch.org/fact-tank/2018/01/17/half-of-americans-think-young-people-don’t-pursue-stem-because-it-is-too-hard/. 115 Data for the last 10 years shows new AI doctorates increasingly choose industry roles over academic ones, for example. Shana Lynch, “State of AI in 10 Charts,” Stanford University Human-Centered Artificial Intelligence, March 3, 2021, https://hai.stanford.edu/news/state-ai-10-charts. 116 The Computer Science Teachers Association, “Bugs in the System: Computer Science Teacher Certification in the U.S.” (Association for Computing Machinery, 2013), https://www.csteachers.org/documents/en-us/3b4a70cd-2a9b-478b-95cd-376530c3e976/1. 117 National Center for Education Statistics, Are High School Teachers Teaching Core Subjects Without College Majors or Minors in Those Subjects? (Washington, DC: U.S. Department of Education, 2016), https://nces.ed.gov/pubs/web/96839.asp. 118 John Ruwitch, “China’s Birthrate Drops, As Census Data Warn Of Aging Population,” NPR, May 11, 2021, https://www.npr.org/2021/05/11/995490687/chinas-birth-rate-drops-as-census-data-warn-of-aging-population. 119 “China to remove fines, revise family planning law to boost birthrate,” Global Times, July 20, 2021, https://perma.cc/CBU5-Q6YD. 120 Audrey Fritz, “How China’s military–civil fusion policy ties into its push for world-class universities,” The Strategist (blog) on Australian Strategic Policy Institute, May 19, 2021, https://www.aspistrategist.org.au/how-chinas-military-civil-fusion-policy-ties-into-its-push-for-world-class-universities. 121 Fedasiuk and Weinstein, “Universities and the Chinese Defense Technology Workforce.” Center for Security and Emerging Technology | 54 122 Liu, “China’s AI Dreams.” 123 Liu, “China’s AI Dreams.” 124 Mini Gu, Rachel Michael, Claire Zheng, and Stefan Trines, “Education in China” (World Education News and Reviews, December 17, 2019), https://wenr.wes.org/2019/12/education-in-china-3. 125 Gu et al., “Education in China.” 126 “Portman, Heinrich Urge National Science Foundation To Prioritize Safety and Ethics in Artificial Intelligence Research, Innovation,” Senator Rob Portman, May 13, 2021, https://www.portman.senate.gov/newsroom/press-releases/portman-heinrich-urge-national-science-foundation-prioritize-safety-and. 127 The White House, “Memorandum for the Heads of Executive Departments and Agencies: Guidance for Regulation of Artificial Intelligence Applications,” The White House, 2021, https://www.whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf?utm\_source=morning\_brew. 128 Remco Zwetsloot and Dahlia Peterson, “The US-China Tech Wars: China’s Immigration Disadvantage,” The Diplomat, December 31, 2019, https://thediplomat.com/2019/12/the-us-china-tech-wars-chinas-immigration-disadvantage. 129 Remco Zwetsloot, James Dunham, Zachary Arnold and Tina Huang, “Keeping Top AI Talent in the United States: Findings and Policy Options for International Graduate Student Retention” (Center for Security and Emerging Technology, December 2019), https://cset.georgetown.edu/wp-content/uploads/Keeping-Top-AI-Talent-in-the-United-States.pdf. 130 CSTA released its new curriculum standards for K-12 CS education in 2017. 131 Emiliana Vegas and Brian Fowler, “What do we know about the expansion of K-12 computer science education?” (The Brookings Institution, August 4, 2020), https://www.brookings.edu/research/what-do-we-know-about-the-expansion-of-k-12-computer-science-education/. 132 “Data shows that in the nation as a whole, schools in rural communities and schools with higher percentages of economically disadvantaged students are less likely to teach computer science.” Code.org, CSTA, and ECEP Alliance, “2020 State of CS Education,” 15.
e360a591-1062-41e9-9d38-238497a6058f
trentmkelly/LessWrong-43k
LessWrong
Taboo "rationality," please. Related on OB: Taboo Your Words I realize this seems odd on a blog about rationality, but I'd like to strongly suggest that commenters make an effort to avoid using the words "rational," "rationality," or "rationalist" when other phrases will do.  I think we've been stretching the words to cover too much meaning, and it's starting to show. Here are some suggested substitutions to start you off. Rationality: * truth-seeking * probability updates under bayes rule * the "winning way" Rationalist: * one who reliably wins * one who can reliably be expected to speak truth Are there any others?
21085400-e2fa-4495-b0fd-3a34ec33740a
trentmkelly/LessWrong-43k
LessWrong
Introducing: Meridian Cambridge's new online lecture series covering frontier AI and AI safety This is a linkpost for [https://www.meridiancambridge.org/language-models-course] Meridian Cambridge, in partnership with Cambridge University's Center for Data Driven Discovery (C2D3),  has produced a 16-part lecture series entitled "Language Models and Intelligent Agentic Systems" and the recordings are now online! The LMaIAS course provides an introduction to core ideas in AI safety. Throughout, we build up from introductory ideas about language modelling and neural networks to discussions of risks posed by advanced AI systems. Course Structure The course is divided into four parts: Part 1: What is a Language Model? To start the course, we give three lectures covering generative models and next token prediction, the transformer architecture, and scaling laws for large models. Part 2: Crafting Agentic Systems Now the foundations are in place, the next four lectures go into details on LLM post-training, reinforcement learning, reward modelling, and agent architectures. Part 3: Agentic Behaviour Here we take four lectures to discuss optimisation and reasoning, reward hacking and goal misgeneralisation, out-of-context reasoning and situational awareness, and finally deceptive alignment and alignment faking. Part 4: Frontiers For the remainder of the lecture series, we give five lectures covering risks from advanced AI, AI evaluations, AI control and safety cases, AI organisations and agendas, and conclude with a discussion on the future of language models. You can find the first lecture [here], and the whole course is available [here]. The lectures were created and delivered by Edward James Young, Jason R. Brown, and Lennie Wells, in partnership with Cambridge University's C2D3. The hope is that this material can be used to help educate people new to the field and provide them with the background knowledge required to effectively contribute to AI safety. Please share this with anybody you think might be interested! - The Meridian Team
1aebc3a3-131e-4910-b7c6-3a390c4827aa
trentmkelly/LessWrong-43k
LessWrong
Something Was Wrong Previously (elsewhere): The Order of the Soul (Compass Rose) Last week we went to see a classroom for our son. The school in question had quite a good reputation. This was a place people wanted their kids to go, and when we had been there previously to see a different type of classroom, we understood why, or thought we did. It was doing the thing it was doing and hitting all the bases. We had just been to a place with a less good reputation, and seen a run-down, not-well-equipped, kids-not-under-control version of this classroom, and were hoping that this version would be better. We walked in and it looked nice. It had the things such places have. There were things to play with and kids were scattered around playing quietly. Our son walked towards one of the things. One of the kids already there said hi to him. We and the school’s people watched. I surveyed the room. Everything looked like it was supposed to look. This was a nice classroom. Kids were playing. Something was wrong. I told myself it was nothing. We kept watching. We hoped our son would make a good impression. This was a place people wanted their kids to go. We wanted him to be allowed to go here. Our son did not want to play much with the other children. We knew this was bad. They would not like it. They would think it meant something was wrong; that he was not ready. He was supposed to want to play with them. We were told he needed to play with them. We felt like we were failing somehow. We hoped he would reconsider. One of the two school employees that went in with us tested our son, seeing if he knew his numbers and his colors and his shapes. He did ace the numbers and colors and shapes. That felt good. Why was the test even there? I wondered why the older kids needed to be learning single digit numbers. The employee finished the quizzing and started furiously writing things in boxes on a piece of paper. I hoped they were good, positive things. I looked around the room more. There were
abfe7fb4-586e-4fc8-affd-69ab0501f9fe
trentmkelly/LessWrong-43k
LessWrong
Why We Launched LessWrong.SubStack (This is a crosspost from our new SubStack. Go read the original.) Subtitle: We really, really needed the money. We’ve decided to move LessWrong to SubStack. Why, you ask? That’s a great question. 1. SubSidizing LessWrong is important We’ve been working hard to budget LessWrong, but we’re failing. Fundraising for non-profits is really hard. We’ve turned everywhere for help. We decided to follow Clippy’s helpful advice to cut down on server costs and also increase our revenue, by moving to an alternative provider. We considered making a LessWrong OnlyFans, where we would regularly post the naked truth. However, we realized due to the paywall, we would be ethically obligated to ensure you could access the content from Sci-Hub, so the potential for revenue didn't seem very good. Finally, insight struck. As you’re probably aware, SubStack has been offering bloggers advances on the money they make from moving to SubStack. Outsourcing our core site development to SubStack would enable us to spend our time on our real passion, which is developing recursively self-improving AGI. We did a Fermi estimate using numbers in an old Nick Bostrom paper, and believe that this will produce (in expectation) $75 trillion of value in the next year. SubStack has graciously offered us a 70% advance on this sum, so we’ve decided it’s relatively low-risk to make the move. 2. UnSubStantiated attacks on writers are defended against SubStack is known for being a diverse community, tolerant of unusual people with unorthodox views, and even has a legal team to support writers. LessWrong has historically been the only platform willing to give paperclip maximizers, GPT-2, and fictional characters a platform to argue their beliefs, but we are concerned about the growing trend of persecution (and side with groups like petrl.org in the fight against discrimination). We also find that a lot of discussion of these contributors in the present world is about how their desires and utility func
ee283c62-6ea0-4d53-9238-82fd816ae219
trentmkelly/LessWrong-43k
LessWrong
energy landscapes of experts Suppose you're a choosing an expert for an important project. One approach is to choose a professor at a prestigious university whose research is superficially related to the project, and ask them to recommend someone. People have a better understanding of some conceptual and social area that's close to their position, so this is like a gradient descent problem, where we can find gradients at points but don't have global knowledge. Gradient descent typically uses more than 2 steps, but people tend to pass along references to people they respect, so because of social dynamics, each referral is like multiple gradient descent steps. Considering that similarity to gradient descent, for a given topic, we can model people as existing on an energy landscape. If we repeatedly get referrals to another expert, does that process eventually choose the best expert? In practice, it definitely doesn't: there are many local minima. If you want to choose a medical expert starting from a random person, that process could give you an expert on crystal healing, traditional Chinese medicine, Ayurveda, etc. If you choose a western medical doctor, you'll probably end up with a western medical doctor, but there are still various schools of practice, which tend to be local minima. Within each school of some topic, whether it's medicine or economics or engineering, people tend to refer to others deeper in that local minima, and over time they tend to move deeper into it themselves. The result is multiple clusters of people, and while each may be best at some subproblem, for any particular thing, most of those clusters are mistaken about being the best. From recent research into artificial neural networks, we know that high dimensionality is key to good convergence being possible. Adding dimensions creates paths between local minima, which makes moving between them possible. If this applies to communities of experts, it's better to evaluate experts with many criteria than with few criteria
d0fc90c6-0f7c-4a09-a7ff-fa8e00bfcdc9
trentmkelly/LessWrong-43k
LessWrong
An even more modest search engine proposal How much AI technique could it possibly take for google (or something better) to do a decent job with > speechby:obama   attitude:positive   "Saul Alinsky". I.e. "speechby:" and "attitude:" don't exist, but could, I believe be implemented pretty accurately, to see in this case if we can find any instances of Obama praising Saul Alinsky. An article: "Bill Ayers and Obama Both Quote Alinsky" claims such quotes exist, but their one attempt to demonstrate it is laughable -- something vaguely like a paraphrase of an Alinsky statement, but which has, in fact the reverse sense of what the supposed "original" meant.  Yet I think most of the world, and not just conservatives, if they have any idea who Alinsky is, will tend not to question Obama's "debt" to Alinski -- just for the sheer number of times it's been said or implied.  For the other shoe dropping, false quotes that help demonize Alinsky, see tinyurl.com/qa6fglk. The point isn't to defend Obama.  It is that I think the world would work better if the ratio of          ability to find verifiable facts pertinent to political discussion ----------------------------------------           supply of highly opinionated and slanted "news".    could be raised by, say, an order of magnitude. So many assertions are made that are likely not true, but are incredibly difficult for the average person to disprove.  In this Internet era, the personal cost to write some almost free associative screed about a political point is very low, while the personal cost of finding quite a lot of pertinent facts is awfully high. This is not to say the "average person" will look for facts to confirm or contradict what they read, but much of what they read is written by bloggers some of whom are sincere and would become users of such resources, and I do believe the emotional rewards of finding a nugget of truth versus the current pain of often fruitless search would have an effect on people's thinking habits -- maybe small at first bu
1715ff13-8c63-4613-b8d1-8d50058bd0f4
trentmkelly/LessWrong-43k
LessWrong
Representing Irrationality in Game Theory * August 23, 2023 * Version 1.0.0 Abstract: In this article, I examine the discrepancies between classical game theory models and real-world human decision-making, focusing on irrational behavior. Traditional game theory frames individuals as rational utility maximizers, functioning within pre-set decision trees. However, empirical observations, as pointed out by researchers like Daniel Kahneman and Amos Tversky, reveal that humans often diverge from these rational models, instead being influenced by cognitive biases, emotions, and heuristic-based decision making. Introduction Game theory provides a mathematically rigorous model of human behavior. It has been applied to fields as diverse as Economics and literary Critical Theory. Within game theory, people are represented as rational actors seeking to maximize their expected utility in a variety of situations called "games." Their actions are represented by decision trees whose leaves represent outcomes labeled with utility values and whose nodes represent decision points. Game theory tells us how to back-propogate these leaf values to assign intermediate values to each node. These intermediate node values represent the expected utility of the associated situation. Game theory then tells us how to map these decision trees onto other representations, such as matrices; defines a taxonomy of standard games; and provides methods for determining the optimal strategies available to actors within these situations. Unfortunately, for all of its mathematical elegance, game theory does not accurately model human behavior. As researchers such as Daniel Kahneman and Amos Tversky have pointed out, people are not rational utility maximizers. Instead, people rely on heuristics that are, at best, approximately logical and exhibit emotionality that leads them to act differently than game theory would predict. Over the years, these researchers have developed myriad alternative models to represent human decisions. This
fdf45d2d-8a12-460a-bea8-963be1ae9ea2
trentmkelly/LessWrong-43k
LessWrong
AI #4: Introducing GPT-4 LEMON, IT’S TUESDAY Somehow, this was last week: (Not included: Ongoing banking crisis threatening global financial system.) Oh, also I suppose there was Visual ChatGPT, which will feed ChatGPT’s prompts into Stable Diffusion, DALL-E or MidJourney. Current mood, perhaps. The reason to embark on an ambitious new AI project is that you can actually make quite a lot of money, also other good things like connections, reputation, expertise and so on, along the way, even if you have to do it in a short window. The reason not to embark on an ambitious new AI project is if you think that’s bad, actually, or you don’t have the skills, time, funding or inspiration. I’m not not tempted. TABLE OF CONTENTS What’s on our plate this time around? 1. Lemon, It’s Tuesday: List of biggest events of last week. 2. Table of Contents: See table of contents. 3. Executive Summary: What do you need to know? Where’s the best stuff? 4. Introducing GPT-4: GPT-4 announced. Going over the GPT-4 announcement. 5. GPT-4 The System Card Paper: Going over GPT-4 paper about safety (part 1). 6. ARC Sends In the Red Team: GPT-4 safety paper pt. 2, ARC’s red team attempts. 7. Ensuring GPT-4 is No Fun: GPT-4 safety paper pt. 3. Training in safety, out fun. 8. GPT-4 Paper Safety Conclusions + Worries: GPT-4 safety paper pt.4, conclusion. 9. A Bard’s Tale (and Copilot for Microsoft 365): Microsoft and Google announce generative AI integrated into all their office products, Real Soon Now. Huge. 10. The Search for a Moat: Who can have pricing power and take home big bucks? 11. Context Is That Which Is Scarce: My answer is the moat is your in-context data. 12. Look at What GPT-4 Can Do: A list of cool things GPT-4 can do. 13. Do Not Only Not Pay, Make Them Pay You: DoNotPay one-click lawsuits, ho! 14. Look What GPT-4 Can’t Do: A list of cool things GPT-4 can’t do. 15. The Art of the Jailbreak: How to get GPT-4 to do what it doesn’t want to do. 16. Chatbots Versus Search Bars:
a4b86513-c6d5-4a82-a237-2dd86eb94523
trentmkelly/LessWrong-43k
LessWrong
Deep Dives: My Advice for Pursuing Work in Research Context [optional]:  I’m a Ph.D student at MIT doing research in technical AI safety. I mostly do work related to interpretability, adversaries, and robust reinforcement learning (see more here). Often, I am referred either by some Effective Altruist friend of mine or 80k to talk with someone who is interested in technical AI safety research – usually an undergrad (feel free to email me if you’d like to talk – scasper@mit.edu). One piece of advice that I give to almost everyone is to do a “deep dive.”  What’s a “Deep Dive?” Sometimes there is a catch-22 when it comes to pursuing research work – you need opportunities to gain experience, and you need experience to get opportunities. A deep dive is one way I recommend to gain experience and demonstrate initiative on one’s own. It’s also a good way to explore an area if you’re not sure that you’re interested in it or not. Having done several of them in the past, I believe that they are the best academic experiences I’ve had aside from research projects themselves.  More concretely, a “deep dive” is a procedure that I recommend for reading papers that I think allows you to learn a lot about an area of research that you are interested in. There are 5 steps. 1. Get a broad level of familiarity with active research in the field that you’re interested in. This could include looking through high-level survey papers, survey posts, reading lists, etc. For AI safety, there is no shortage of resources like this. Textbooks usually won’t be so useful for this because they typically cover established concepts in a field rather than active areas of work.  2. Pick a fairly niche subfield of active research that you want to learn a lot about. The exact scope should be up to you. One possibility could be to let the citations of a survey paper define your subfield. Another could be to focus on the research area that a particular professor or organization you’re interested in works in. Getting advice on subfields from someone exper
584d7f72-f6d4-47eb-aac1-25d593688873
trentmkelly/LessWrong-43k
LessWrong
Monotonous Work In the process of running an organization, you have to break down the things that have to be done into tasks, and delegate them to the people available to you. You're in a better position if most of your tasks take non-variable work to complete, because they can be delegated easily. For example, a SAHM can easily hand off sweeping, dusting, etc, but can't hand off things like buying groceries, something that's more variable and that requires decisions. As a volunteer, the easiest way to give work would be "sweeping," not "grocery shopping." What are some of those tasks? Also, is there a way to systematically convert tasks from one type to another?
c8992392-7cc1-4bc7-a3af-bf244b25e8ac
trentmkelly/LessWrong-43k
LessWrong
Empathetic vs Intrinsic Oof Crossposted from Figuring Figuring. Some people are empathetic oofers, and some people are intrinsic oofers. (“Oof” as in the sound one makes when told a story about someone’s unintentional, but unfortunate gaffe.) Empathetic oofers, like myself, only experience awkwardness if they think someone else around them might be feeling awkward. Intrinsic oofers can feel awkward, or cringe, or other related things, even if they know for sure nobody else around them feels that way.  Empathetic oofers do this thing that drives intrinsic oofers crazy. The way we deal with awkward situations is creating common knowledge about how we feel about the awkwardness inducing situation, and asking others to do the same. An overly simplified model of this explains why that strategy makes sense if everybody in the room is an empathetic oofer.  If Alice and Bob are the only ones in the room, and they are both empathetic oofers, and they both know that the other is an empathetic oofer, but they do not know that they both know, then they will both feel awkward.  Alice will know that Bob will only feel awkward if he does not know that Alice does not feel awkward, but she does not know if Bob knows that Alice is also an empathetic oofer, so she does not know that Bob does not feel awkward, and so she in fact feels awkward. The same argument switching “Alice” and “Bob” shows that Bob feels awkward.  Now suppose that Alice and Bob are the only ones in the room, they both know that the other is an empathetic oofer, and they both know that they both know. Like before, Alice knows that Bob will only feel awkward if he does not know that Alice does not feel awkward, but she also knows that Bob knows that Alice will not feel awkward unless she does not know that Bob does not feel awkward, so she doesn’t have to worry about Bob worrying that she might feel awkward for some other reason, but since they don’t have common knowledge, she doesn’t know if Bob knows that he doesn’t have to worry about
fc1b74bc-7b95-4eea-bfe5-3b4157417dbc
trentmkelly/LessWrong-43k
LessWrong
Covid 12/17: The First Dose The Pfizer vaccine is being deployed to health care workers and long-term care facility residents. The Moderna vaccine is close behind, with the full FDA report already out. There were some small extra delays thrown in for good measure, on the order of a few days, that doubtless killed a few people but shouldn’t delay the overall path of events. We are now in the vaccination stage of the pandemic. If the trial results are to be believed in detail, by the end of next week those getting the first dose will largely already be immune, and the population immunity effects can begin to compound and help turn the tide. They will start out small, but soon start growing faster, and every little bit helps.  There even seems to be a good chance that overall new infections would have already peaked if not for worries about Christmas and New Year’s coming up soon. Positive test percentages seem to be starting to even out or slowly decline as we get clear of Thanksgiving, and death rates are not rising as much as they would be rising in the scenario where data snags and testing issues were the only reason we didn’t notice things getting even worse than they are. If anything, death rates suggest a better picture than the positive test counts.  In other great news, Over the counter $30 Covid-19 test approved by FDA. It’s official. Woo-hoo! Three million over the counter tests this month, then they ramp up production. We could have done this a long time ago and solved the whole pandemic, and instead it’s going to be a drop in the bucket that shows up late to the party, but every little bit helps.  Alas, as always, none of that changes the short term situation much. There’s lots of Covid-19 out there, and if anything there are even more reasons to play it safe right now. Large Christmas gatherings or New Year’s parties are a profoundly bad idea. In many ways, it seemed like this week was mostly ‘cut to one week later’ with few if any surprises, except for the approval of the first
7b913ac6-88d3-4943-b2e7-a1d1d95c9a9c
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Is Infra-Bayesianism Applicable to Value Learning? My impression, as someone just starting to learn Infra-Bayesianism, is that it's about caution, lower bounds on utility (which is exactly [the way anything trying to overcome the Optimizer's Curse should be reasoning](https://www.lesswrong.com/posts/ZqTQtEvBQhiGy6y7p/breaking-the-optimizer-s-curse-and-consequences-for-1), especially in an environment already heavily optimized by humans where utility will have a lot more downside than upside uncertainty), so the utility score is vital in the argmax min process, and in the relationship between sa-measures and a-measures. However, this does make it intuitively inobvious how to apply Infra-Bayesianism to Value Learning, where the utility function from physical states of the environment to utility values is initially very uncertain, and is an important part of what the AI is trying to do (Infra-)Bayesian updates on. So, a question for people who already understand Infra-Bayesianism: is it in fact applicable to Value Learning? If so, does it apply in the following way: the (a-priori unknown, likely quite complex, and possibly not fully computable/realizable to the agent) human effective utility function that maps physical states to human (and thus also value-learner-agent) utility values is treated as (an important) part of the environment, and thus the min over environments('Murphy') part of the argmax min process includes making the most pessimistic still-viable assumptions about this? To ask a follow-on question, if so, would cost-effectively reducing uncertainty in the human effective utility function (i.e. doing research on the alignment problem) to reduce Murphy's future room-to-maneuver on this be a convergent intermediate strategy for any value-learner-agents that were using Infra-Bayesian reasoning? Or would such a system automatically assume that learning more about the human effective utility function is pointless, because they assume Murphy will always ensure that they live in the worst of all possible environments, so decreasing uncertainty on utility will only ever move the upper bound on it not the lower one?  [I'm trying to learn Infra-Bayesianism, but my math background is primarily from Theoretical Physics, so I'm more familiar with functional analysis, via field-theory Feynman history integrals, than with Pure Math concepts like Banach spaces. So the main Infra-Bayesianism sequence's Pure Math approach is thus rather heavy going for me.]
aae93add-9226-424c-b0ff-43e8f9a25326
trentmkelly/LessWrong-43k
LessWrong
GPT-4 developer livestream The detailed analysis of the screenshot and the interpretation of chicken scratch to a working website were both extremely impressive.  As a human I found the hand drawing hard to read.  I do not know how it was able to determine this was a "discord UI window", what did they train it on? To me I think what is interesting is not the delta in performance over GPT-3, but the reality that GPT-4 is almost capable enough to research more advanced AI systems.  Being able to read error messages, write code, read docs, read visual input would seem to be the minimum capabilities for Recursive Self Improvement.   
ef2f9fd1-2fec-4c66-a939-b61e6766308b
trentmkelly/LessWrong-43k
LessWrong
Meetup : Fort Collins, Colorado Meetup Wedneday 7pm Discussion article for the meetup : Fort Collins, Colorado Meetup Wedneday 7pm WHEN: 15 February 2012 07:00:00AM (-0700) WHERE: 144 North College Avenue, Fort Collins, CO 80524 Come hang out with interesting people, drink tea, and maybe go for dinner. Discussion article for the meetup : Fort Collins, Colorado Meetup Wedneday 7pm
5b0a0753-00ef-4820-8620-5a8bd0cd5483
trentmkelly/LessWrong-43k
LessWrong
Meetup : LW Cologne meetup Discussion article for the meetup : LW Cologne meetup WHEN: 27 February 2016 05:00:00PM (+0100) WHERE: Marienweg 43, 50858 Köln If you are new and did not fill in our Doodle, please PM me. Otherwise everyone is welcome Discussion article for the meetup : LW Cologne meetup
f9d4fd0f-7cb9-4cce-bca9-78187b569347
StampyAI/alignment-research-dataset/arxiv
Arxiv
Responsible-AI-by-Design: a Pattern Collection for Designing Responsible AI Systems 1 Introduction --------------- Artificial intelligence (AI) is one of the major driving forces to transform society and industry and has been successfully adopted in applications across a wide range of data-rich domains. The global market value of AI was assessed at USD $62.35 billion in 2020 and is expected to annually grow with a rate of 40.2% from 2021 to 2028. Although AI has significant potential and capacity to stimulate economic growth and improve productivity across a growing range of domains, there are serious concerns about the AI systems’ ability to behave and make decisions in a responsible manner. Many ethical regulations, principles, and guidelines have been recently issued by governments, research institutions, and companies, which responsible AI technologies and systems are supposed to adhere to. However, these principles are high-level and can hardly be used in practice by technologists, AI experts and software engineers. On the other hand, responsible AI research has been focusing on algorithm solutions limited to a subset of issues such as fairness. The major challenge in achieving responsible AI is not just algorithms and data. Issues can enter at any point of the software engineering lifecycle and are often at the system-level crosscutting many components of AI systems. There is a lack of system-level responsible-AI-by-design guidance on how to design the architecture of responsible AI systems. Therefore, this paper presents a summary of design patterns based on the results of a systematic literature review to deal with the system-level design challenges of responsible AI systems and build responsible-AI-by-design into AI systems. Rather than staying at the ethical principle-level or AI algorithm-level, this paper focuses on the system-level design patterns to operationalizing responsible AI. We perform a systematic literature review (SLR) on software engineering for responsible AI to summarize the patterns that can be embedded into the design of AI systems as product features to contribute to responsible-AI-by-design. The remainder of the paper is organized as follows. Section 2 introduces the methodology. Section 3 describes the state diagram of a provisioned AI system. Section 4 presents a summary of the patterns for responsible-AI-by-design. Section 5 concludes the paper. 2 Methodology -------------- To identify patterns for responsible AI, we performed an SLR.The two research questions defined for the SLR are: * What responsible AI principles are addressed by the study? * What solutions for responsible AI can be identified? The main data sources include ACM Digital Library, IEEE Xplore, Science Direct, Springer Link, and Google Scholar. The study only includes the papers that present concrete design or process solutions for responsible AI, and excludes the papers that only discuss high-level frameworks. A set of 159 primary studies was identified. The complete SLR protocol is available as online material 111<https://drive.google.com/file/d/1Ty4Cpj_GzePzxwov5jGKJZS5AvKzAy3Q/view?usp=sharing>. We use the ethical principles listed in Harvard University’s mapping study  [[1](#bib.bib1)]: Privacy, Accountability (professional responsibility is merged into accountability due to the overlapping definitions), Safety & Security, Transparency & Explainability, Fairness and Non-discrimination, Human Control of Technology, Promotion of Human Values. 3 State diagram of a provisioned AI System ------------------------------------------- Fig. [1](#S3.F1 "Figure 1 ‣ 3 State diagram of a provisioned AI System ‣ Responsible-AI-by-Design: a Pattern Collection for Designing Responsible AI Systems") illustrates the state diagram of a provisioned AI system and highlights the patterns associating with relevant states or transitions, which show when the design patterns could take effect. We have limited the scope of the design patterns in this paper to the patterns that can be embedded into the AI systems provisioned as product features. The engineering best practices of the development process including some patterns related to model training is out of the scope of this paper. Once the AI system starts serving, it can be requested to execute a certain task. Decision-making may be needed before executing the task. Both the behaviors and decision-making outcomes of the AI system are monitored and validated. If the system is failed to meet the requirements (including ethical requirements) or a near-miss is detected, the system need to be updated. The AI system may need to be audited regularly or when major-failures/near-misses occur. The stakeholders can determine to abandon the AI system if it no longer fulfils the requirements. ![State diagram of a provisioned AI system.](https://media.arxiv-vanity.com/render-output/7816200/lifecycle.png) Figure 1: State diagram of a provisioned AI system. ![Template of design patterns for responsible AI.](https://media.arxiv-vanity.com/render-output/7816200/template.png) Figure 2: Template of design patterns for responsible AI. ![Operationalized design patterns for responsible AI systems.](https://media.arxiv-vanity.com/render-output/7816200/x1.png) Figure 3: Operationalized design patterns for responsible AI systems. 4 Design Patterns ------------------ To operationalize responsible AI, as shown in Fig. [2](#S3.F2 "Figure 2 ‣ 3 State diagram of a provisioned AI System ‣ Responsible-AI-by-Design: a Pattern Collection for Designing Responsible AI Systems"), we define a pattern template based on the extended pattern form from [[2](#bib.bib2)], which includes the pattern name, the context defining the impacted stakeholders, the problem explaining the type of objective, the forces showing the degree of relevance to each principle which might be conflicting to each other, the solution listing the fine-grained mechanisms, and its consequences including benefits and drawbacks (e.g. complexity or cost). Due to the space and reference limitations, we exclude known uses from the template, but include one reference for each pattern in the discussion. Fig. [3](#S3.F3 "Figure 3 ‣ 3 State diagram of a provisioned AI System ‣ Responsible-AI-by-Design: a Pattern Collection for Designing Responsible AI Systems") lists a collection of patterns for responsible-AI-by-design. * Bill of materials: From a software supply chain angle, AI product vendors often create AI systems by assembling commercial or open source AI and/or non-AI components from third parties. Bill of materials  [[3](#bib.bib3)] keeps a formal machine-readable record of the supply chain details of the components used in building an AI system, such as component name, version, supplier, dependency relationship, author and timestamp. In addition to supply chain details of the components, context documents (such as model cards) can also be recorded by the bill of materials. The purpose of bill of material is to provide traceability and transparency into the components that make up AI systems so that ethical issues can be tracked and addressed. Immutable data infrastructure is needed to store the data of bill of materials. For example, the manufacturers of autonomous vehicles could maintain a material registry contract on blockchain to track their components’ supply chain information, e.g., the version and supplier of the third-party navigation component. * Verifiable ethical credential: To improve human trust in AI systems, verifiable ethical credentials can be used as evidence of ethical compliance for AI systems, components, models, developers, operators, users, organizations, and development processes. Before using AI systems, users may verify the systems’ ethical credential to check if the systems are compliant with AI ethics principles or regulations [[4](#bib.bib4)]. On the other hand, the users may be required to provide the ethical credentials to use and operate the AI systems. Publicly accessable data infrastructure needs to built to support the generation and verification for ethical credentials. For example, before driving an vehicle, the driver may be requested to scan her/his ethical credential to show she/he has the capability to drive safely, while verifying the ethical credential of the vehicle’s automated driving system shown on the center console. * Ethical digital twin: Before running AI system in real-world, it is important to perform system-level simulation through an ethical digital twin running on a simulation infrastructure to understand the behaviors of AI systems and assess ethical risk in a cost effective way. Ethical digital twin can also be used during operation of the AI system to assess the systems’ runtime behaviors and decisions based on the abstract simulation model using the real-time data. The assessment results can be sent back to alert the system or user before the unethical behavior or decision takes effect. For example, vehicle manufacturers can use the ethical digital twin to explore the limits of autonomous vehicles based on the collected real-time data [[5](#bib.bib5)]. * Ethical sandbox: Given AI is a high stake technology, ethical sandbox can be applied to isolate AI components from non-AI components by running the AI component separately [[6](#bib.bib6)], e.g. sandboxing the unverified visual perception component. Maximal tolerable probability of violating the ethical requirements should be defined as ethical margin for the sandbox. A watch dog can be used to limit the execution time of the AI component to reduce the ethical risk, e.g. only activating the visual perception component for 5 mins on the bridges built especially for autonomous vehicles. * AI mode switcher: Adopting AI or not can be considered as a major architectural design decision when designing a software system. For example, AI mode switcher offers users efficient invocation and dismissal mechanisms for activating or deactivating the AI component when needed. Kill switch is a special type of invocation mechanism which immediately shuts down the AI component and its negative effects, e.g. turning off the automated driving system and disconnect it from the internet [[7](#bib.bib7)] . The decisions made by the AI component can be executed automatically or reviewed by a human expert in critical situations. The human expert serves to approve or override the decisions (e.g. skipping the path generated by the navigation system). Human intervention can also happen after acting the AI decision through the fallback mechanism that reverses the system back to the state before executing the AI decision. A built-in guard ensures that the AI component is only activated within the predefined conditions (such as domain of use, boundaries of competence). Users can ask questions or report complaints/failures/near misses through a recourse channel. * Multi-model decision-maker: Multi-model decision-maker employs different models to perform the same task or enable a single decision [[8](#bib.bib8)], e.g., deploying different algorithms for visual perception. This pattern can improve the reliability by deploying different models under different context (e.g., different regions) and enabling fault-tolerance by cross-validating ethical requirements for a single decision (e.g., only accepts the same results from the employed models). * Homogeneous redundancy: Ethical failures in AI systems may cause serious damage to the humans or environment. Deploying redundant components (e.g., two brake control components) is considered as a solution to deal with the highly uncertain AI components that may make unethical decisions or the adversary hardware components that produce malicious data or behave unethically [[9](#bib.bib9)]. A cross-check can be done for the outputs provided by multiple components of a single type. * Incentive registry: Incentives are effective in motivating AI systems to execute tasks in a responsible manner. An incentive registry records the rewards that correspond to the AI system’s ethical behavior and outcome of decisions [[10](#bib.bib10)], e.g., rewards for path planning. There are various ways to formulate the incentive mechanism, e.g., reinforcement learning, publicly accessible data infrastructure using blockchain. However, it is challenging to formulate the form of rewards as the ethical impact of AI systems’ decisions and behaviors might hardly to be measured for some of the ethical principles (such as human values). Furthermore, the incentive mechanism needs to be agreed by all the stakeholders who may have different views on the ethical impact. In addition, there may be trade-offs between different principles, which makes the design harder. * Continuous ethical validator: AI systems often require continual learning based on new data and have higher degree of risk that is caused by the autonomy of AI component. Rather than assessing ethical risk at a particular development step, continuous ethical validator continuously monitors and validates the outcomes of AI systems (e.g., the path recommended by the navigation system) against the ethical requirements  [[11](#bib.bib11)]. The outcomes of AI systems mean the consequences of decisions and behaviors of the AI systems, i.e., whether the AI system provides the intended benefits and behaves appropriately given the situation. The time and frequency of validation should be configured within the continuous validator. Version-based feedback and rebuild alert should be sent when the predefined conditions are met. Incentive registry can be adopted to reward/punish the ethical/unethical behavior or decisions of AI systems. * Ethical knowledge base: AI systems involve broad ethical knowledge, such as AI ethics principles, regulations, and guidelines. Unfortunately, such ethical knowledge is scattered in different documents (e.g., self-driving regulation) and is usually implicit or even unknown to developers who primarily focus on the technical aspects of AI systems. This results in negligence or ad-hoc use of relevant ethical knowledge in AI system development. Ethical knowledge base is built upon a knowledge graph to make meaningful entities, concepts and their rich semantic relationships are explicit and traceable across heterogeneous documents so that the ethical knowledge can be systematically accessed, analysed, used to support the use of AI systems [[12](#bib.bib12)]. For example, there may be ethical quality issues with APIs (e.g., data privacy breaches or fairness issues). Thus, ethical compliance checking for APIs is needed to detect if any ethics violation exists. Ethical knowledge graphs can be built based on the ethical principles and guidelines (e.g. privacy knowledge graph based on privacy act) to automatically examine whether APIs are compliant with regulations for AI ethics. Call graph might also be needed for code analysis as there might be interactions between different APIs. * Co-versioning registry: Compared with traditional software, AI systems involve different levels of dependencies and require more frequent evolution due to their data-dependent behaviors. Co-versioning of the components of AI systems or AI assets provides end-to-end provenance guarantees across the entire lifecycle of AI systems 222<https://dvc.org/>. Co-versioning registry can track the co-evolution of components or AI assets. There are different levels of co-versioning: co-versioning of AI components and non-AI components, co-versioning of the assets within the AI components (i.e., co-versioning of data, model, code, configurations, and co-versioning of local models and global models in federated learning). A publicly accessible data infrastructure (e.g. using blockchain) can be used to maintain the co-versioning registry to provide a trustworthy trace for dependencies. For example, a co-versioning registry contract can be built on blockchain to manage different versions of visual perception models and the corresponding training datasets. * Federated learner: Despite the widely deployed mobile or IoT devices generating massive amounts of data, data hungriness is still a challenge given the increasing concern in data privacy. Federated learner preserves the data privacy by training models locally on the client devices and formulating a global model on a central server based on the local model updates [[13](#bib.bib13)], e.g., train the visual perception model locally in each vehicle. Decentralized learning is an alternative to federated learning, which uses blockchain to remove the single point of failure and coordinate the learning process in a fully decentralized way. In the event of negative outcomes, the responsible humans can be traced and identified by an ethical black box for accountability. * Ethical black box: Black box was introduced initially for aircraft several decades ago for recording critical flight data. The purpose of embedding an ethical black box in an AI system is to investigate why and how an AI system caused an accident or a near miss. The ethical black box continuously records sensor data, internal status data, decisions, behaviors (both system and operator) and effects [[14](#bib.bib14)]. For example, an ethical black box could be built into the automated driving system to record the behaviors of the system and driver and their effects. All of these data need to be kept as evidence with the timestamp and location data. Designing the ethical black box is challenging as the ethical metrics need to be identified for data collection. Also, design decisions need to be made on what data should be recorded and where the data should be stored (e.g. using a blockchain-based immutable log or a cloud-based data storage). * Global-view auditor: When an accident happens, there might be more than one AI systems involved (e.g. multiple autonomous vehicles in an accident). The data collected from each of the involved AI systems might be conflicting to each other. Global-view auditor provides global-view accountability by finding discrepancies among the data collected from a set of AI systems and identifying liability when negative events occur. This pattern can also adapted to improve the decision-making of an AI system by taking the knowledge from other systems. For example, an autonomous vehicle may increase their visibility using the perceptions of others to make better decisions at runtime [[15](#bib.bib15)]. 5 Conclusion ------------- AI ethics principles are usually high-level and do not provide concrete guidance and engineering methods to developers on how to develop AI systems responsibly. To operationalize responsible AI, we collect a set of patterns that can be embedded into an AI system as product features to enable responsible-AI-by-design. The patterns are associated to the states or state transitions of a provisioned AI system, serving as an effective guidance for architects to design a responsible AI system.
167e7c7c-eb16-4a77-ad6c-f341f58e9750
trentmkelly/LessWrong-43k
LessWrong
One Year of Pomodoros In December of 2019, I started recording how many pomodoros I did each day. I had no expectation that this would last a long time, but it's now been over a year! I've kept a spreadsheet of my daily pomodoros since then, and here's the graph. I did 2171 pomodoros between 2019-12-01 and 2020-12-29, for an average of 5.5 per day. My pomodoros per day over the last 13 months. The line is a moving 6-day average. To see the full size, right-click and open image in a new tab. This post is a retrospective. As such, there's not much of a narrative, just a lot of little details. Hopefully it will be useful to someone. Definition In case you're unfamiliar, pomodoros (sometimes abbreviated to pomos or poms) are an extremely simple work technique. You set a timer, usually for 25 minutes. You do focused work during that time. When the timer goes off, you take a break, usually for 5 minutes. You repeat this however much you want. This almost sounds too simple to take seriously, but LessWrong is no stranger to using literal timers to solve problems. The purpose of doing pomodoros is to increase productivity by controlling your focus. Some people are naturally bad at taking breaks, and some people (like me) are bad at focusing on work. The former can lead to mental fatigue or general attentional inefficiency; the breaks force a person to rest their mind. For the latter, setting a timer makes it easier to "stick with it" for that duration. There are a more specific ideas and recommendations from the official inventor of the technique, but honestly it's so simple that you're better off just experimenting with it for a few days, and picking the parameters that work best for you. For the purpose of my own data recording, I defined a pomodoro as a 20-50 minute period, where I actually set a timer, and worked on an endorsed productive activity. If I zoned out for too much of the time, I didn't count it. Disclaimers about the data The graph definitely encodes some broad trends, but
77745be0-b2ca-41c1-ad7c-67fbfcd8fff8
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Disagreement with bio anchors that lead to shorter timelines *This would have been a submission to the*[*FTX AI worldview prize*](https://ftxfuturefund.org/announcing-the-future-funds-ai-worldview-prize/)*. I’d like to thank Daniel Kokotajlo, Ege Erdil, Tamay Besiroglu, Jaime Sevilla, Anson Ho, Keith Wynroe, Pablo Villalobos and Simon Grimm for feedback and discussions. Criticism and feedback are welcome. This post represents my personal views.* The causal story for this post was: I first collected my disagreements with the bio anchors report and adapted the model. This then led to shorter timelines. I did NOT only collect disagreements that lead to shorter timelines. If my disagreements would have led to longer timelines, this post would argue for longer timelines.  I think the bio anchors report (the one from 2020, not Ajeya’s personal updates) puts too little weight on short timelines. I also think that there are a lot of plausible arguments for short timelines that are not well-documented or at least not part of a public model. The bio anchors approach is obviously only one possible way to think about timelines but it is currently the canonical model that many people refer to. I, therefore, think of the following post as “*if bio anchors influence your timelines, then you should really consider these arguments and, as a consequence, put more weight on short timelines if you agree with them*”. I think there are important considerations that are hard to model with bio anchors and therefore also added my personal timelines in the table below for reference. My best guess bio anchors adaption suggests a median estimate for the *availability of compute to train TAI*of 2036 (10th percentile: 2025, 75th percentile: 2052). Note that this is not the same as predicting the widespread deployment of AI. Furthermore, I think that the time “when AI has the potential to be dangerous” is earlier than my estimate of TAI because I think that this poses a lower requirement than the potential to be economically transformative (so even though the median estimate for TAI is 2036, I wouldn’t be that surprised if, let’s say 2033 AIs, could deal some severe societal harm, e.g. > $100B in economic damage). You can find all material related to this piece including the colab notebook, the spreadsheets and the long version in [this google folder](https://drive.google.com/drive/folders/1wpPzQS4LqRx_3Qe__DkXMLvwMFSjOhXJ?usp=sharing). Executive summary ================= I think some of the assumptions in the bio anchors report are not accurate. These disagreements still apply to Ajeya’s personal [updates on timelines](https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines). In this post, I want to lay out my disagreements and provide a modified alternative model that includes my best guesses.  **Important:**To model the probability of transformative AI in a given year, the bio anchors report uses the *availability of compute*(e.g. see [this summary](https://epochai.org/blog/grokking-bioanchors))*.* This means that the bio anchors approach is NOT a prediction for when this AI has been trained and rolled out or when the economy has been transformed by such an ML model, it merely predicts when such a model *could* be trained. I think it could take multiple (I guess 0-4) years until such a model is engineered, trained and actually has a transformative economic impact.  My disagreements ---------------- You can find the long version of all of the disagreements in [this google doc](https://docs.google.com/document/d/1EKI7nU1LiknojKm68SIUBnP35iZzJPO1WG8eLPiZTk0/edit?usp=sharing), the following is just a summary.  * I think the baseline for human anchors is too high since humans were “trained” in very inefficient ways compared to NNs. For example, I expect humans to need less compute and smaller brains if we were able to learn on more data or use parallelization. Besides compute efficiency, there are further constraints on humans such as energy use, that don’t apply to ML systems. To compensate for the data constraint, I expect human brains to be bigger than they would need to be without them. The energy constraint could imply that human brains are already very efficient but there are alternative interpretations. [[jump to section](https://docs.google.com/document/d/1EKI7nU1LiknojKm68SIUBnP35iZzJPO1WG8eLPiZTk0/edit#heading=h.8115vduq3kf0)] * I think the report does not include a crucial component of algorithmic efficiency which I call “software for hardware” for lack of a better description. It includes progress in AI accelerators such as TPUs (+the software they enable), software like PyTorch, compilers, libraries like DeepSpeed and related concepts. The current estimate for algorithmic efficiency does not include the progress coming from this billion-dollar industry. [[jump to section](https://docs.google.com/document/d/1EKI7nU1LiknojKm68SIUBnP35iZzJPO1WG8eLPiZTk0/edit#heading=h.d2082ybt7m3o)] * I think the report’s estimate for algorithmic progress is too low in general. It seems like progress in transformers is faster than in vision models and the current way of estimating algorithmic progress doesn’t capture some important components. [[jump to section](https://docs.google.com/document/d/1EKI7nU1LiknojKm68SIUBnP35iZzJPO1WG8eLPiZTk0/edit#heading=h.edx3ianqw79b)] * I think the report does not include algorithmic improvements coming from more and more powerful AI, e.g. AI being used to improve the speed of matrix multiplication, AI being used to automate prompt engineering or narrow AIs that assist with research. This automation loop seems like a crucial component for AI timelines. [[jump to section](https://docs.google.com/document/d/1EKI7nU1LiknojKm68SIUBnP35iZzJPO1WG8eLPiZTk0/edit#heading=h.wqg6yoclj48v)] * I think the evolution anchor is a bit implausible and currently has too much weight. This is mostly because SGD is much more efficient than evolutionary algorithms and because ML systems can include lots of human knowledge and thus “skip” large parts of evolution. [[jump to section](https://docs.google.com/document/d/1EKI7nU1LiknojKm68SIUBnP35iZzJPO1WG8eLPiZTk0/edit#heading=h.930e878nxc7l)] * I think the genome anchor is implausible and currently has too much weight. This is mostly because the translation from bytes in the genome to parameters in a NN seems implausible to me. [[jump to section](https://docs.google.com/document/d/1EKI7nU1LiknojKm68SIUBnP35iZzJPO1WG8eLPiZTk0/edit#heading=h.hjh74z480nyz)] * I think the report is too generous with its predictions about hardware progress and the GPU price-performance progress will get worse in the future. *This is my only disagreement that makes timelines longer*. [[jump to section](https://docs.google.com/document/d/1EKI7nU1LiknojKm68SIUBnP35iZzJPO1WG8eLPiZTk0/edit#heading=h.lg8y1j9tpup6)] * Intuitions: Things in AI are moving faster than I anticipated and people who work full-time with AI often tend to have shorter timelines. Both of these make me less skeptical of short or very short timelines. [[jump to section](https://docs.google.com/document/d/1EKI7nU1LiknojKm68SIUBnP35iZzJPO1WG8eLPiZTk0/edit#heading=h.kis2yib4lt5g)] * I think the report’s update against current levels of compute is too radical. A model like GPT-5 or Gato 3 could be transformative IMO (this is not my median estimate but it doesn’t seem completely implausible). I provide reasons for why I think that these “low” levels of compute could already be transformative. [[jump to section](https://docs.google.com/document/d/1EKI7nU1LiknojKm68SIUBnP35iZzJPO1WG8eLPiZTk0/edit#heading=h.9icdgv8u5hov)] * There are still a number of things I’m not modeling or am unsure about. These include regulation and government interventions, horizon length (I’m not sure if the concept captures exactly what we care about but don’t have a better alternative), international conflicts, pandemics, financial crises, etc. All of these would shift my estimates back by a bit. However, I think AI is already so powerful that these disruptions can’t stop its progress--in a sense, the genie is out of the bottle. [[jump to section](https://docs.google.com/document/d/1EKI7nU1LiknojKm68SIUBnP35iZzJPO1WG8eLPiZTk0/edit#heading=h.k57hqhyfj97b)] The resulting model ------------------- The main changes from the original bio anchors model are 1. Lowering the FLOP/s needed for TAI compared to Human FLOP/s 2. Lowering the doubling time of algorithmic progress 3. Changing the weighing of some anchors 4. Some smaller changes; see [here](https://docs.google.com/document/d/1EKI7nU1LiknojKm68SIUBnP35iZzJPO1WG8eLPiZTk0/edit#heading=h.hk5y9vko4n0a) for details. I tried to change as few parameters as possible from the original report. You can find an overview of the different parameters in the table below and the resulting best guess in the following figure. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1674448263/mirroredImages/Q3fesop6HKnemJ5Jc/qzlenfniobrl4bdlqxex.png) | | | | | | | --- | --- | --- | --- | --- | | | **Aggressive - bio anchors (Marius)** | **Best guess - bio anchors (Marius)** | **Independent impression (Marius)** | **Ajeya’s best guess (2020)** | | Algorithmic progress doubling time | 1.-1.3 years | 1.3-1.6 years | 1.3-1.6 years | 2-3.5 years | | Compute progressdoubling time | 2.5 years | 2.8 years | 3 years | 2.5 years | | Model FLOPS vs. brain FLOPS (=1e15) median | -1 | -0.5 | -0.2 | +1 | | Lifetime anchor | 16% | 10% | 10% | 5% | | Short NN anchor | 40% | 24% | 30% | 20% | | Medium NN anchor | 20% | 35% | 31% | 30% | | Long NN anchor | 10% | 17% | 13% | 15% | | Evolution anchor | 3% | 3% | 5% | 10% | | Genome anchor | 1% | 1% | 1% | 10% | | 10th percentile estimate | <2025 | ~2025 | ~2028 | ~2032 | | Median(=50%) estimate | ~2032 | ~2036 | ~2041 | ~2052 | | 75th percentile estimate | ~2038 | ~2052 | ~2058 | ~2085 |   My main takeaways from the updates to the model are 1. If you think that the compute requirements for AI is lower than that for humans (which I think is plausible for all of the reasons outlined below) then most of the probability mass for TAI is between 2025 and 2035 rather than 2035 and 2045 as Ajeya’s model would suggest. 2. In the short timeline scenario, there is some probability mass before 2025 (~15%). I would think of this as the “transformer TAI hypothesis”, i.e. that just scaling transformers for one more OOM and some engineering will be sufficient for TAI. Note that this doesn’t mean there will be TAI in 2025 just that it would be possible to train it. 3. Thinking about the adaptions to the inputs of the bio anchor model and then computing the resulting outputs made my timelines shorter. I didn’t actively intend to produce shorter timelines but the more I looked at the assumptions in bio anchors, the more I thought “This estimate is too conservative for the evidence that I’m aware of”. 4. The uncertainty from the model is probably too low, i.e. the model is overconfident because core variables like compute price halving time and algorithmic efficiency are modeled as static singular values rather than distributions that change over time. This is one reason why my personal timelines (as shown in the table) are a bit more spread than my best guess bio anchors adaption. To complete the picture, this is the full distribution for my **aggressive estimate**: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1674448263/mirroredImages/Q3fesop6HKnemJ5Jc/qpanhzl5ucgslm1vdk1f.png) And for my **independent impression**: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1674448263/mirroredImages/Q3fesop6HKnemJ5Jc/z26jucptlkvqenj2ayn1.png) Note that the exact weights for my personal estimate are not that meaningful because I adapted them to include considerations that are not part of the current bio anchors model. Since it is hard to model something that is not part of the model, you should mostly ignore the anchor weights and only look at the final distribution. These “other considerations” include unknown unknowns, disruptions of the global economy, pandemics, wars, etc. all of which add uncertainty and lengthen timelines.  As you can see, I personally still think that 2025-2040 is the timespan that is most likely for TAI but I have more weight on longer timelines than the other two estimates. My personal timelines (called independent impression in the table) are also much more uncertain than the other estimates because I find arguments for very short and for much longer timelines convincing and don’t feel like I have a good angle for resolving this conflict at the moment. Additionally, compute halving time and algorithmic progress halving time are currently static point estimates and I think optimally they would be distributions that change over time. This uncertainty is currently not captured and I, therefore, try to introduce it post-hoc. Furthermore, I want to emphasize that I think that relevant dangers from AI arise before we get to TAI, so don’t mistake my economic estimates for an estimate of “when could AI be dangerous”. I think this can happen 5-15 years before TAI, so somewhere between 2015 and 2035 in my forecasts. **Update:** After more conversations and thinking, my timelines are best reflected by the aggressive estimate above. 2030 or earlier is now my median estimate for AGI and I'm mostly confused about TAI because I have conflicting beliefs about how exactly the economic impact of powerful AI is going to play out.  Final words =========== There is a chance I misunderstood something in the report or that I modified Ajeya’s code in an incorrect way. Overall, I’m still not sure how much weight we should put on the bio anchors approach to forecasting TAI in general, but it currently is the canonical approach for timeline modeling so its accuracy matters. Feedback and criticism are appreciated. Feel free to reach out to me if you disagree with something in this piece.
4a3d72a8-f5ed-4eeb-ae8e-b6dea1ddb74d
trentmkelly/LessWrong-43k
LessWrong
Opening Facebook Links Externally When I'm on my phone and click a link in Facebook I want it to open in my normal browser, but it defaults to an "in-app" browser. I really don't like in-app browsers, and I think Alex Russell's explanation of their problems is very good. I've turned this behavior off, but Facebook regularly forgets my decision, most recently a few weeks ago. The setting had moved since the last time I did this, so here are screenshots showing how to turn it off on Android: Click on the icon of your profile image overlaid with a hamburger menu in the upper right, and scroll to the bottom: Click "Settings & privacy": Click "Settings": Click "Profile settings": Click "Media and contacts" settings: Toggle "Links open externally":
c3e14381-b6bd-4478-9441-073c1aad61cd
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Subsets and quotients in interpretability Summary ======= Interpretability techniques often need to throw away some information about a neural network's computations: the entirety of the computational graph might just be too big to understand, which is part of why we need interpretability in the first place. In this post, I want to talk about two different ways of simplifying a network's computational graph: 1. Fully explaining parts of the computations the network performs (e.g. identifying a subcircuit that fully explains a specific behavior we observed) 2. Approximately describing how the entire network works (e.g. finding meaningful modules in the network, whose internals we still don't understand, but that interact in simple ways) These correspond to the idea of subsets and quotients in math, as well as many other instances of this duality in other areas. I think lots of interpretability at the moment is 1., and I'd be excited to see more of 2. as well, especially because I think there are synergies between the two. The entire post is rather hand-wavy; I'm hoping to point at an intuition rather than formalize anything (that's planned for future posts). Note that a distinction like the one I'm making isn't new (e.g. it's intuitively clear that circuits-style research is quite different from neural network clusterability). But I haven't seen it described this explicitly before, and I think it's a useful framing to keep in mind, especially when thinking about how different interpretability techniques might combine to yield an overall understanding. ETA: An important clarification is that for both 1. and 2., I'm only discussing interpretability techniques that try to understand the *internal structure* of a network. In particular, 2. talks about approximate descriptions of the algorithm the network is *actually* using, not just approximate descriptions of the function that's being implemented. This excludes large parts of interpretability outside AI existential safety (e.g. any method that treats the network as a black box and just fits a simpler function to the network). Subsets vs quotients ==================== In math, if you have a set X.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-surd + .mjx-box {display: inline-flex} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} , there are two "dual" ways to turn this into a smaller set: 1. You can take a *subset* Y⊂X. 2. You can take a *quotient* of X/∼ by some equivalence relation ∼. (The quotient X/∼ is the set of all equivalence classes under ∼.) I think this is also a good framing to distinguish interpretability techniques, but before elaborating on that, I want to build intuitions for subsets and quotients in contexts other than interpretability. * Maps f:Y→X *into* X induce subsets of X (namely their image f(Y)). For this subset, it doesn't matter how many elements in Y where mapped to a given element in X, so we can assume f is injective without loss of generality. Thus, subsets are related to *injective maps*. Dually, maps g:X→Z *out of* X induce quotients of X: we can define two elements in X to be equivalent if they map to the same element in Z. The quotient X/∼ is then the set of all preimages g−1({z}) for z∈g(X). Again, we can assume that g is surjective if we only care about the quotient itself, so quotients correspond to *surjective maps*. * A chapter of a book is a subset of its text. A summary of the book is like a quotient. Note that both throw away information, and that for both, there will be many different possible books we can't distinguish with only the subset/quotient. But they're very different in terms of *which* information they throw away and which books become indistinguishable. Knowing only the first chapter leaves the rest of the book entirely unspecified, but that one chapter is nailed down exactly. Knowing a summary restricts choices for the entire book somewhat, but leaves local freedom about word choice etc. everywhere. * If I have a dataset, then samples from that dataset are like subsets, summary statistics are like quotients. Again, both throw away information, but in very different ways. * If I want to communicate to you what some word means, say "plant" then I can either give examples of plants (subset), or I can describe properties that plants have (quotient). In the context of interpretability ================================== The subset/quotient framework can be applied to mechanistic interpretability as follows: fully explaining part of a network is analogous to subsets, abstracting the entire network and describing how it works at a high level is analogous to quotients. Both of these are ways of simplifying a network that would otherwise be unwieldy to work with, but again, they simplify in quite different ways. These subsets/quotients of the *mechanism/computation* of the network seem to somewhat correspond to subsets/quotients of the *behavior* of the network: * If we interpret a subset of the neurons/weights/... of the network in detail, that subset is often chosen to explain the network's behavior on a subset of inputs very well (while we won't get much insight into what happens on other inputs). * A rough high-level description of the network could plausibly be similarly useful to predict behavior on many different inputs. But it won't let us predict behavior *exactly* on any of them—we can only predict certain properties the outputs are going to have. So this leads to a quotient on outputs. To make this a bit more formal: if we have a network that implements some function f:X→Y, then simplifying that network using interpretability tools might give us two different types of simpler functions: * A restriction of f to some subset X′⊂X * A composition of f with a quotient map on Y, i.e. a function X→Y/∼ Interpreting part of the network seems related to the first of these, while abstracting the network to a high-level description seems related to the second one. For now, this is mostly a vague intuition, rather than a formal claim (and there are probably exceptions, for example looking at some random subset of neurons might just give us no predictive power at all). Existing interpretability work ============================== I'll go through some examples of interpretability research and describe how I think they fit into the subset/quotient framework: * The work on [indirect object identification in GPT-2 small](https://arxiv.org/abs/2211.00593) is a typical example of the subset approach: it explains GPT-2's behavior on a very specific subset of inputs, by analyzing a subset of its circuits in a lot of detail. * [Induction heads](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html) are similar in that they still focus on precisely understanding a small part of the network. However, they help understand behavior on a somewhat broader range of inputs, and they aren't specific to one model in particular (which is a dimension I've ignored in this post). * The [analysis of early vision in InceptionV1](https://distill.pub/2020/circuits/early-vision/) has some aspects that feel quotient-y (namely grouping neurons by functionality), but it focuses entirely on the subset of early layers and mostly explains what individual neurons do. Overall, I'd put this mostly in the subset camp. * The general idea that early layers of a CNN tend to detect low-level features like curves, which are then used to compute more complicated features, which are finally turned into an output label, is a clear example of a quotient explanation of how these models work. This is also a good example of how the approaches can interact: studying individual neurons can give strong evidence that this quotient explanation is correct. * [Clusterability in neural networks](https://arxiv.org/abs/2103.03386) and [other work](https://www.lesswrong.com/s/ApA5XmewGQ8wSrv5C/p/XKwKJCXgSKhSr9bZY) on [modularity](https://www.lesswrong.com/posts/rp4CiJtttvwFNHkhL/searching-for-modularity-in-large-language-models) are other typical examples of quotient approaches to interpretability. * [Acquisition of chess knowledge in AlphaZero](https://arxiv.org/abs/2111.09259) combines elements of a subset and a quotient approach. Figuring out that AlphaZero represents lots of human chess concepts is part of a quotient explanation: it lets us explain at a very high level of abstraction how AlphaZero evaluates positions (presumably by using those concepts, e.g. recognizing that a position where you have an unsafe king is bad). On the other hand, the paper certainly doesn't provide a *complete* picture of how AlphaZero thinks, not even at such a high level of abstraction (e.g. it's unclear how these concepts are actually being used, we can only make reasonable guesses as to what a full explanation at this level would look like). * The [reverse-engineered algorithm for modular addition](https://www.lesswrong.com/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking) seems to me to be an example of a subset-based approach (i.e. my impression is that the algorithm was discovered by looking at various parts of the network and piecing together what was happening). The unusual thing about it is that the "subset" being explained is ~everything the network does. So you could just as well think of the end product as a quotient explanation (at a rather fine-grained level of abstraction). This is an example of how both approaches converge as the subset increases in size and the abstraction level becomes more and more fine-grained. * The [polytope lens](https://www.lesswrong.com/posts/eDicGjD9yte6FLSie/interpreting-neural-networks-through-the-polytope-lens) itself feels like a quotient technique (reframing the computations a network is doing at a specific level of abstraction, talking about subunits as groups of polytopes with similar spline codes). However, given that it abstracts a network at a very fine-grained level, I'd expect it to be combined with subset approaches in practice. Similar things apply to the [mathematical transformer circuits framework](https://transformer-circuits.pub/2021/framework/index.html). * [Causal scrubbing](https://static1.squarespace.com/static/6114773bd7f9917b7ae4ef8d/t/6364a036f9da3316ac793f56/1667539011553/causal-scrubbing) focuses on testing subset explanations: it assumes that a hypothesis is an embedding of a smaller computational graph into the larger one.[[1]](#fn7fvk3yp9qyc) Combining subset and quotient approaches ======================================== I've already mentioned two examples of how both types of techniques can work in tandem: * A subset analysis can be used to test a quotient explanation (e.g. if I conjecture that early CNN layers detect low-level features like curves, that are then combined to compute increasingly high-level concepts like dog ears, I can test that by looking at a bunch of example neurons) * Good fine-grained quotients can make it easier to explain subsets of a network (e.g. the polytope lens, the mathematical transformers framework, or other abstractions that are easier to work with then thinking about literal multiplications and additions of weights and activations). Some more hypothetical examples: * Understanding a network in terms of submodules might point us to interesting subsets to study in detail. For example, a submodule that reasons about human psychology might be more important to study than one that does simple perception tasks. * A high-level understanding of a network should make it easier to understand low-level details in subsets. E.g. if I suspect that the neurons I'm looking at are part of a submodule that somehow implements a learned tree search, it will be much easier to figure out *how* the implementation works than if I'm going in blind. * Conversely, subset-based techniques might be helpful for identifying submodules and their functions. If I figure out what a specific neuron or small group of neurons is doing, that puts restrictions on what the high-level structure of the network can be. * We can try to first divide a network into submodules and then understand each of them using a circuits-style approach. Combining the abstraction step with the low-level interpretation lets us *parallelize* understanding the network. Without the initial step of finding submodules, it might be very difficult to split up the work of understanding the network between lots of researchers. I'm pretty convinced that combining these approaches is more fruitful than either one on its own, and my guess is that this isn't a particularly controversial take. At the same time, my sense is that most interpretability research at the moment is closer to the "subset" camp, except for frameworks like transformer circuits that are about very *fine-grained* quotients (and thus mainly tools to enable better subset-based research). The only work I'm aware of that I would consider clear examples of quotient research at a high level of abstraction are Daniel Filan's [Clusterability in neural networks](https://arxiv.org/abs/2103.03386) line of research and some work on [modularity](https://www.lesswrong.com/tag/modularity) by John Wentworth's SERI MATS cohort. Some guesses as to what's going on: * I missed a bunch of work in the quotient approach. * People think the subset approach is more promising/we don't need more research on submodules/... * Subset-style research is currently quite tractable using empirical approaches and easier to scale, whereas quotient-style research needs more insights that are hard to find. * Maybe the framework I'm using here is just confused? But even then, I'd still think that the "finding high-level structure in neural networks" is clearly a sensible distinct category, and neglected compared to circuits-style work. I'd be very curious to hear to hear your thoughts (especially from people working on interpretability: why did you pick the specific approach you're using?) 1. **[^](#fnref7fvk3yp9qyc)**A way to fit in quotient explanations would be to make the larger graph itself a quotient of the neural network, i.e. have its nodes perform complex computations. But causal scrubbing doesn't really discuss what makes such a quotient explanation a good one (except for extensional equality).
f4bda0c1-0b3f-48fe-b5ad-3183d1358f4b
trentmkelly/LessWrong-43k
LessWrong
When Someone Tells You They're Lying, Believe Them Some people refuse to admit they're wrong, but there's other clues a pretzel acrobat apparently Paul Ehrlich became well-known for his 1968 book The Population Bomb, where he made many confidently-stated but spectacularly-wrong predictions about imminent overpopulation causing apocalyptical resource scarcity. As illustration for how far off the mark Ehrlich was, he predicted widespread famines in India at a time when its population was around 500 million people, and he wrote “I don't see how India could possibly feed two hundred million more people by 1980.” He happened to have made this claim right before India’s Green Revolution in agriculture. Not only is India able to feed a population that tripled to 1.4 billion people, it has long been one of the world’s largest agricultural exporter. Ehrlich is also known for notoriously losing a bet in 1990 to one of my favorite humans ever, the perennial optimist (and business professor) Julian Simon. Bryan Caplan brings up some details to the follow-up that never was: > We’ve all heard about the Ehrlich-Simon bet. Simon the cornucopian bet that resources would get cheaper, Ehrlich the doomsayer bet that they would get pricier, and Simon crushed him. There’s a whole book on it. What you probably don’t know, however, is that in 1995, Paul Ehrlich and Steve Schneider proposed a long list of new bets for Simon - and that Simon refused them all. The first bet was fairly straight-forward: Ehrlich picked 5 commodities (copper, chromium, nickel, tin, & tungsten) and predicted that their price would be higher in 1990 compared to 1980 as the materials become scarcer. Instead of rising, the combined price went down. Ehrlich’s decade-spanning obstinance and unparalleled ability to step on rakes make him an irresistible punching bag but despite his perennial wrongness, his responses have ranged from evasion to outright denials: > Anne and I have always followed U.N. population projections as modified by the Population Reference Bu
919ea039-57bc-44a2-8a8e-1f58e879ecc1
trentmkelly/LessWrong-43k
LessWrong
Bounded Solomonoff Induction: Three Difficulty Settings I am starting another (probably) long sequence of posts on asymptotic logical uncertainty. Here, we will move past the Benford test to a more general goal. The purpose of this series will be to investigate the following question: What is the correct resource bounded analogue of Solomonoff induction? This series is a continuation of the Asymptotic Logical Uncertainty series, but I will start fresh with the notation, and try to keep it self contained. Consider an environment, which is a Turing machine E. E takes as input a single natural number n, and outputs a bit E(n). Our goal is to construct a Turing machine M, which takes as input an environment E and a natural number n, and outputs a probability M(n). This probability is meant to represent the probability that E outputs an 1. Let A(n) be a fast growing function. We restrict M to run in time O(A(n)2) on input n. In all of our proposals, M will sample and simulate Turing machines which run it tame O(A(n)) on input n. We will judge M by comparing it to other turing machines which run in O(A(n)) time on input n. We will score M based on how close its probabilities are to the true environment, E. I will talk later about different ways we can score M, but first, I want to describe three different difficulty settings for this problem: Level 1: E(n) is guaranteed to output in time at most A(n+1). Note that we assume that A(n+1)>>A(n)2, so this does not make the question completely trivial. However, this assumption does allow M to compute all E(i) for all i<n when asked to assign a probability to n. As we will soon see, this assumption make the problem much easier. Unfortunately, it also makes the problem much less important, as this is not a very realistic assumption to make. Bounded Solomonoff induction is almost completely solved at this difficulty level by the "Subsequence Induction" algorithm, which I will present in detail later in the series. There may be some further work extending the existing algorithm
9e75a3d9-69eb-4dd2-87a2-f57a6a416b06
trentmkelly/LessWrong-43k
LessWrong
Arguments for/against scheming that focus on the path SGD takes (Section 3 of "Scheming AIs") This is Section 3 of my report “Scheming AIs: Will AIs fake alignment during training in order to get power?”. There’s also a summary of the full report here (audio here). The summary covers most of the main points and technical terms, and I’m hoping that it will provide much of the context necessary to understand individual sections of the report on their own. Audio version of this section here, or search for "Joe Carlsmith Audio" on your podcast app. Arguments for/against scheming that focus on the path that SGD takes In this section, I'll discuss arguments for/against scheming that focus more directly on the path that SGD takes in selecting the final output of training. Importantly, it's possible that these arguments aren't relevant. In particular: if SGD would actively favors or disfavor schemers, in some kind "direct comparison" between model classes, and SGD will "find a way" to select the sort of model it favors in this sense (for example, because sufficiently high-dimensional spaces make such a "way" available),[1] then enough training will just lead you to whatever model SGD most favors, and the "path" in question won't really matter. In the section on comparisons between the final properties of the different models, I'll discuss some reasons we might expect this sort of favoritism from SGD. In particular: schemers are "simpler" because they can have simpler goals, but they're "slower" because they need to engage in various forms of extra instrumental reasoning – e.g., in deciding to scheme, checking whether now is a good time to defect, potentially engaging in and covering up efforts at "early undermining," etc (though note that the need to perform extra instrumental reasoning, here, can manifest as additional complexity in the algorithm implemented by a schemer's weights, and hence as a "simplicity cost", rather than as a need to "run that algorithm for a longer time").[2] I'll say much more about this below. Here, though, I want to note that if SGD
b9f5da48-fcf8-4e90-933f-9e8b96aa52a3
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post3021 Here I apply my "If I were a well-intentioned AI" filter to mesa-optimising Now, I know that a mesa-optimiser need not be a subagent (see 1.1 here ), but I'm obviously going to imagine myself as a mesa-optimising subagent. An immediate human analogy springs to mind: I'm the director of a subdivision of some corporation or agency, and the "root optimiser" is the management of that entity. There is a lot of literature on what happens if I'm selfish in this position; but if I'm well-intentioned, what should I be doing? One thing that thinking this way made me realise: there is a big difference between "aligned with management" and "controlled by management". We'll consider each one in turn, but to summarise: aligned mesa-optimisers are generally better than controlled mesa-optimisers, but it is hard to tell the difference between an aligned and a dangerous unaligned mesa-optimiser. Control vs alignment First let's flesh out the corporate/management example a bit. Me-AI is in charge of making widgets, that are used by the company for some purpose. That purpose is given by - the base utility for the corporation. My role is to make as many widgets as possible within my budget; this is , the mesa-objective I have been given by management. My true utility function is U me . Management don't fully know what U me is - or at least don't fully understand it, or all of its implications. This is needed, of course, because if management fully understood the implications of U me , there would be no uncertainty at all on their part, and they could make me do exactly what they wanted - or they would turn me off. Because of this uncertainty, management have added some extra levels of control over me. Let's assume one typical precaution: if I underspend the budget, the budget is cut next year. If I overspend, I'm reprimanded (and fired if I do it too often), but the budget is not cut. There are three possible situations I could confront: S 1 : I've made as many widgets as I can this year, and spent 90 % of my budget. I predict that, next year, I will only need 90 % of this year's budget. S 2 : I've made as many widgets as I can this year, and spent 90% of my budget. I predict that, next year, I will need 100 % of this year's budget. S 3 : I've spent 100 % of my budget on widgets. I predict that widgets are particularly valuable to the company this year, much moreso than next year. Aligned mesa-optimiser If I'm aligned with the goals of the company, U me = U base , the base utility. Management has assigned U mesa to me, but I have actually no intrinsic desire to maximise it. I am a U base -maximiser, with the extra information that management, who are also U base -maximisers, have assigned U mesa to me. My behaviour should be that of a U mesa maximiser in most circumstances. If I know management is much smarter than me (or have an information advantage; maybe I'm not even sure what U base is!), then I will tend to behave like a blind U mesa -maximiser, trusting to management to sort things out properly. In S 1 and S 2 , I will not spend anything more, and in S 3 , I will do nothing. If I know I'm smarter than management, at least in some areas, then I will start making moves that are U base -maximising but not necessarily U mesa -maximising. In S 1 , I won't spend any more (since I don't need a large budget next year), but in S 2 , I'll waste the extra 10 % to ensure my budget isn't cut. In S 3 , I'll go over budget to get the extra-valuable widgets. Dangerous mesa-optimiser Note that, from a management perspective, the S 2 and S 3 actions of a smart aligned mesa-optimiser... look like an dangerous, unaligned mesa-optimiser defecting. Let U dan be the utility function of an unaligned (and non-controlled) agent. How can management distinguish between U me = U base and U me = U dan ? In both cases, the subagent will take actions that go beyond U mesa , actions whose ramifications management cannot immediately see. Of course, I would explain to management why I'm doing this, and why they are compatible with U base . But a U dan -maximiser would similarly explain its actions, trying to trick the management into thinking it was... me. Thus, as in the classical principal-agent problem , there is a problem with imperfect information. Even if I am aligned, management can't trust me if they don't know I'm aligned. I would try and signal this; an unaligned agent would try and fake the same signals. The very fact that aligned-but-not-controlled agents could exist, make it easier for unaligned-and-not-controlled agents to go undetected. Controlled mesa-optimiser A controlled mesa-optimiser is the obedient servant. They will maximise U mesa , while ensuring that they remain under management control. If I'm in that case, I'll try and ensure that management has accurate information about what I'm doing [1] , and that they can change my objectives as needed. I'm safely interruptible and corrigible . In S 3 , I will not do anything; it's not my role to maximise U base , so it's not my role to take advantage of this opportunity. Depending on how I'm designed, I may report this information to management. In S 1 , I will not spend any extra, and I'll let my budget be cut next year. In S 2 , I face a dilemma: my U mesa goal that wants me to preserve my budget next year, vs my desire to be informative. Again, this varies depending on how I'm designed. So I'll likely spend my entire budget (to prevent the automatic cutting of my budget), while reporting this to management so they have full information. If I can't report this, I may decide not to spend any extra, so that management gets better information. Note the "depending on how I'm designed". Because unlike the aligned mesa-optimiser, the controlled agent has two goals, in tension with each other: maximising U mesa , and being informative and controlled by management. If we call the "controlled" goal U co , then I'm maximising U me = U mesa ⊕ U co , where ⊕ means that U me is some function of U mesa and U co that increases whenever either of those functions increase. I'm not aligned, so I'm not a U base maximiser. I would gladly sacrifice U base completely, in order to maximise U co and/or U mesa . Hopefully, U co is sufficiently well-designed to allow management to predict this and preclude it. And, hopefully, the tradeoff between U mesa and U co is also sufficiently well designed to preclude it. Since I am well-intentioned, I can make use of the fact that U base is the outcome desired by management, in order to better figure out the ideal versions of U co , U mesa , and of U me . Defining "the human H understands what the mesa-optimiser is doing" is not easy . So a well-intentioned controlled mesa-optimiser, is akin to a "control optimiser" which is trying to figure out what control means (see previous posts in this sequence) and avoiding Goodharting both U co and U me . Aligned and controlled I could instead be maximising U me = U base ⊕ U mesa ⊕ U co , or, more simply U me = U base ⊕ U co , with the extra piece of information that management have assigned me the role of maximising U mesa . The ⊕ might be a smoothed lexicographic ordering , with U base the primary utility. So, here I'm aligned and controlled. So that I can trade off some of what management really want - U base - to increase U co , their control over me. But not arbitrary amounts of it. So, being aligned and controlled, I will do nothing in S 1 , just as both other agents do. I will overspend in S 3 , to take advantage of the U base -increasing potential, just like the aligned agent. The S 2 situation is more complicated. I have a tendency to want to waste my budget (alignment) but also want to communicate good information to management (control). Unlike the controlled agent, where control will dictate "do nothing" if there is no other way of communicating information, this is a genuine tension. I am therefore more likely to choose "waste" than the controlled agent. In general, this seems the best design, if management is unsure of me. Not U base -optimal, but the probability of dramatic U base -failure is reduced, while management keep some of the control that they think is desirable. Summary To summarise the three types of agent; if they are smart and powerful compared to management, they would tend to do the following actions (a ? indicates strong uncertainty; if one option is more likely, despite the uncertainty, that option is noted before the ? ): S 1 S 2 S 3 Aligned Nothing Waste Overspend Controlled Nothing ? Nothing Both Nothing Waste ? Overspend An unaligned agent will pretend to be one of the other types of agents; it will have most success at its own objectives if it pretends to be an aligned agent. If management misclassifies me Aligned, believed controlled If I'm aligned, but management thinks I'm controlled, this isn't much of a problem. When I'm weak or ignorant, I should behave like a controlled agent anyway, since management have a much better handle on what is needed than me. Once I become powerful and smart, I would most likely still want management to believe I'm controlled (or else they might intervene to constrict me or turn me off). So I'd generally behave exactly as if I was controlled, only taking advantage of grey zones of management misunderstanding, to push the outcome more towards the U base -optimal position, even at the cost of U mesa and U co . Controlled, believed aligned A controlled agent that is believed to be aligned is more of a problem. If the U mesa ⊕ U co provides an incentive for me to signal management "hey, you misclassified me, I'm actually controlled, not aligned", then the problem is self-correcting. If not, the problems of controlled agents are exacerbated; management will let me get on with my own stuff, ignoring any "weird" or "anti- U base " stuff that I might be doing, because they believe that I'm acting for U base and they just lack information to fully understand why I'm taking these actions. Their more "rubber-stamp" approach increases the risk that I may sacrifice their U base to my own U mesa ⊕ U co . Note that even a controlled mesa-optimiser is not necessarily honest. My goal is to give the management good information and be under their control; not to be honest. If management expects that every director will pad their budget requests by 10 % , then I will do so as well; to refrain from doing so would be misleading. ↩︎
ec157719-f18b-4c7f-909d-c8ddece611fe
trentmkelly/LessWrong-43k
LessWrong
Project proposal: Testing the IBP definition of agent Context Our team in SERI MATS needs to choose a project to work on for the next month. We spent the first two weeks discussing the alignment problem and what makes it difficult, and proposing (lots of) projects to look for one that we think would directly address the hard parts of the alignment problem. We're writing this post to get feedback and criticism of this project proposal. Please let us know if you think this is a suboptimal project in any way. Project Disclaimer: We've probably misunderstood some things, don't assume anything in this post accurately represents Vanessa's ideas. Our project is motivated by Vanessa Kosoy’s PreDCA proposal. We want to understand this proposal in enough detail that we can simplify it, as well as see and patch any holes.  IBP gives us several key tools:  1. A "Bridge Transform", that takes in a hypothesis about the universe and tells us which programs are running in the universe.  2. An "Agentometer"[1] that takes in a program and tell us how agentic it is, which is operationalized as how well the agent does according to a fixed loss function relative to a random policy.  3. A "Utiliscope"[1] that, given an agent, outputs a distribution over the utility functions of the agent.  Together these tools could give a solution to the pointers problem, which we believe is a core problem in alignment. We will start this by understanding and testing Vanessa’s definition of agency. Definition of Agency The following is Vanessa's definition of the intelligence of an agent, where an agent is a program, denoted by G, that outputs policies (as described in Evaluating Agents in IBP). This can be used to identify agents in a world model. Definition 1.6: Denote G∗:H→A the policy actually implemented by G. Fix ξ∈Δ(AH). The physicalist intelligence of G relative to the baseline policy mixture ξ, prior ζ and loss function L is defined by: g(G∣ξ;ζ,L):=−logPrπ∼ξ[Lpol(┌G┐,π,ζ)≤Lpol(┌G┐,G∗,ζ)] In words, this says that the intelligence of t
9edec361-84d4-4154-a587-2124847ffce1
trentmkelly/LessWrong-43k
LessWrong
SlateStarCodex Online Meetup: Dr David Manheim on Cooperation for AI On August 30, 17:30 GMT, 20:30 IDT, 10:30 PDT, Dr David Manheim will speak on "Cooperation for AI: From formal models in AI safety to geopolitics". He will discuss multiagent dynamics, and also competition and race dynamics for AI takeoff safety. David is a researcher with the University of Haifa, who works with Future of Humanity Institute and other organizations on investigating large-scale risks to humanity. Read his related paper here. Sign up here and we'll send you a link to the online meetup
50332555-01ae-476e-bb89-128ccb1a7474
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Announcing the Future Fund's AI Worldview Prize *Update:* *I, Nick Beckstead, no longer work at the Future Fund am writing this update purely in a personal capacity. Since the*[*Future Fund team has resigned*](https://forum.effectivealtruism.org/posts/xafpj3on76uRDoBja/the-ftx-future-fund-team-has-resigned-1?commentId=YNH2Gj35cueyT35tp#YNH2Gj35cueyT35tp) *and FTX has*[*filed for bankruptcy*](https://www.nytimes.com/2022/11/11/business/ftx-bankruptcy.html)*, it now seems very unlikely that these prizes will be paid out. I'm very sad about the disruption that this may cause to contest participants.* *I would encourage participants who were working on entries for this prize competition to save their work and submit it to*[*Open Philanthropy's own AI Worldview Contest in 2023*](https://forum.effectivealtruism.org/posts/3kaojgsu6qy2n8TdC/pre-announcing-the-2023-open-philanthropy-ai-worldviews)*.*   Today we are announcing a competition with prizes ranging from $15k to $1.5M for work that informs the Future Fund's fundamental assumptions about the future of AI, or is informative to a panel of [superforecaster](https://en.wikipedia.org/wiki/Superforecaster) judges selected by [Good Judgment Inc](https://goodjudgment.com/). These prizes will be open for three months—until Dec 23—after which we may change or discontinue them at our discretion. We have two reasons for launching these prizes. First, we hope to expose our assumptions about the future of AI to intense external scrutiny and improve them. We think artificial intelligence (AI) is the development most likely to dramatically alter the trajectory of humanity this century, and it is consequently one of our top funding priorities. Yet our philanthropic interest in AI is fundamentally dependent on a number of very difficult judgment calls, which we think have been inadequately scrutinized by others.  As a result, we think it's really possible that: * all of this AI stuff is a misguided sideshow, * we should be even more focused on AI, or * a bunch of this AI stuff is basically right, but we should be focusing on entirely different aspects of the problem. If any of those three options is right—and we strongly suspect at least one of them is—we want to learn about it as quickly as possible because it would change how we allocate hundreds of millions of dollars (or more) and help us better serve our mission of improving humanity's longterm prospects. Second, we are aiming to do [bold and decisive tests](https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1#Our_2022_plans) of prize-based philanthropy, as part of our more general aim of testing highly scalable approaches to funding. We think these prizes contribute to that work. If these prizes work, it will be a large update in favor of this approach being capable of surfacing valuable knowledge that could affect our prioritization. If they don't work, that could be an update against this approach surfacing such knowledge (depending how it plays out). The rest of this post will: * Explain the beliefs that, if altered, would dramatically affect our approach to grantmaking * Describe the conditions under which our prizes will pay out * Describe in basic terms how we arrived at our beliefs and cover other clarifications Prize conditions ---------------- On our [areas of interest](https://ftxfuturefund.org/area-of-interest/) page, we introduce our core concerns about AI as follows: > We think artificial intelligence (AI) is the development most likely to dramatically alter the trajectory of humanity this century. AI is already posing serious challenges: transparency, interpretability, algorithmic bias, and robustness, to name just a few. Before too long, advanced AI could automate the process of scientific and technological discovery, leading to economic growth rates well over 10% per year (see [Aghion et al 2017](https://scholar.harvard.edu/files/aghion/files/artificial_intelligence.pdf), [this post](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/), and [Davidson 2021](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/)). > > As a result, our world could soon look radically different. With the help of advanced AI, we could make enormous progress toward ending global poverty, animal suffering, early death and debilitating disease. But two formidable new problems for humanity could also arise: > > 1. **Loss of control to AI systems** > Advanced AI systems might acquire undesirable objectives and pursue power in unintended ways, causing humans to lose all or most of their influence over the future. > 2. **Concentration of power** > Actors with an edge in advanced AI technology could acquire massive power and influence; if they misuse this technology, they could inflict lasting damage on humanity’s long-term future. > > For more on these problems, we recommend Holden Karnofsky’s “[Most Important Century](https://www.cold-takes.com/most-important-century/),” Nick Bostrom’s [Superintelligence](https://www.amazon.com/dp/B00LOOCGB2/ref=dp-kindle-redirect/132-7264950-9107823?_encoding=UTF8&btkr=1), and Joseph Carlsmith’s “[Is power-seeking AI an existential risk?](https://docs.google.com/document/d/1smaI1lagHHcrhoi6ohdq3TYIZv0eNWWZMPEy8C8byYg/edit#heading=h.pwdbumje5w8r)”. > > Here is a table identifying various questions about these scenarios that we believe are central, our current position on the question (for the sake of concreteness), and alternative positions that would significantly alter the Future Fund's thinking about the future of AI[[1]](#fnww4u5bouauc)[[2]](#fntdlosh6fi9o): | | | | | | --- | --- | --- | --- | | Proposition | Current position | Lower prize threshold | Upper prize threshold | | “P(misalignment x-risk|AGI)”: Conditional on AGI being developed by 2070, humanity will go extinct or drastically curtail its future potential due to loss of control of AGI | 15% | 7% | 35% | | AGI will be developed by January 1, 2043 | 20% | 10% | 45% | | AGI will be developed by January 1, 2100 | 60% | 30% | N/A |   **Future Fund will award a prize of $500k to anyone that publishes analysis that moves these probabilities to the lower or upper prize threshold.**[[3]](#fnkb1fhmtm4fs) To qualify, please please publish your work (or publish a post linking to it) on the [Effective Altruism Forum](https://forum.effectivealtruism.org/), the [AI Alignment Forum](https://www.alignmentforum.org/), or [LessWrong](https://www.lesswrong.com/) with a "Future Fund worldview prize" tag. You can also participate in the contest by publishing your submission somewhere else (e.g. [arXiv](https://arxiv.org/) or your blog) and filling out [this submission form](https://docs.google.com/forms/d/e/1FAIpQLSdflvhfJ76r6ZSnBuXdQFMMHuz16cSs8bf9l7GCqyrbEqGCCw/viewform). We will then linkpost/crosspost to your submission on the EA Forum. We will award larger prizes for larger changes to these probabilities, as follows: * **$1.5M for moving “P(misalignment x-risk|AGI)” below 3% or above 75%** * **$1.5M for moving “AGI will be developed by January 1, 2043” below 3% or above 75%** We will award prizes of intermediate size for intermediate updates at our discretion. We are also offering: * **A $200k prize for publishing any significant original analysis**[[4]](#fn0fpixu5jj1pb)**which we consider the new canonical reference on any one of the above questions**, even if it does not move our current position beyond a relevant threshold. Past works that would have qualified for this prize include: [Yudkowsky 2008](https://intelligence.org/files/AIPosNegFactor.pdf), [*Superintelligence*](https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742), [Cotra 2020](https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines), [Carlsmith 2021](https://arxiv.org/abs/2206.13353), and Karnofsky's [Most Important Century](https://www.cold-takes.com/most-important-century/) series. (While the above sources are lengthy, we'd prefer to offer a prize for a brief but persuasive argument.) * **A $200k prize for publishing any analysis which we consider the canonical critique of the current position highlighted above on any of the above questions**, even if it does not move our position beyond a relevant threshold. Past works that might have qualified for this prize include: [Hanson 2011](https://intelligence.org/ai-foom-debate/), [Karnofsky 2012](https://www.lesswrong.com/posts/6SGqkCgHuNr7d4yJm/thoughts-on-the-singularity-institute-si), and [Garfinkel 2021](https://docs.google.com/document/d/1FlGPHU3UtBRj4mBPkEZyBQmAuZXnyvHU-yaH-TiNt8w/edit#heading=h.7650wn3rs7yv). * **At a minimum, we will award $50k to the three published analyses that most inform the Future Fund's overall perspective on these issues, and $15k for the next 3-10 most promising contributions to the prize competition.** (I.e., we will award a minimum of 6 prizes. If some of the larger prizes are claimed, we may accordingly award fewer of these prizes.) As a check/balance on our reasonableness as judges, **a panel of superforecaster judges will independently review** a subset of highly upvoted/nominated contest entries with the aim of identifying any contestant who did not receive a prize, but would have if the superforecasters were running the contest themselves (e.g., an entrant that sufficiently shifted the superforecasters’ credences). * For the $500k-$1.5M prizes, if the superforecasters think an entrant deserved a prize but we didn’t award one, we will award $200k (or more) for up to one entrant in each category (existential risk conditional on AGI by 2070, AGI by 2043, AGI by 2100), upon recommendation of the superforecaster judge panel. * For the $15k-200k prizes, if the superforecasters think an entrant deserved a prize but we didn’t award one, we will award additional prizes upon recommendation of the superforecaster judge panel. The superforecaster judges will be selected by Good Judgment Inc. and will render their verdicts autonomously. While superforecasters have only been demonstrated to have superior prediction track records for shorter-term events, we think of them as a lay jury of smart, calibrated, impartial people. Our hope is that potential applicants who are confident in the strength of their arguments, but skeptical of our ability to judge impartially, will nonetheless believe that the superforecaster jury will plausibly judge their arguments fairly. After all, entrants could reasonably doubt that people who have spent tens of millions of dollars funding this area would be willing to acknowledge it if that turned out to be a mistake. Details and fine print ---------------------- * Only original work published after our prize is announced is eligible to win. * We do not plan to read everything written with the aim of claiming these prizes. We plan to rely in part on the judgment of other researchers and people we trust when deciding what to seriously engage with. We also do not plan to explain in individual cases why we did or did not engage seriously. * If you have questions about the prizes, please ask them as comments on this post. We do not plan to respond to individual questions over email. * All prizes will be awarded at the final discretion of the Future Fund. Our published decisions will be final and not subject to appeal. We also won't be able to explain in individual cases why we did not offer a prize. * Prizes will be awarded equally to coauthors unless the post indicates some other split. At our discretion, the Future Fund may provide partial credit across different entries if they together trigger a prize condition. * If a single person does research leading to multiple updates, Future Fund may—at its discretion—award the single largest prize for which the analysis is eligible (rather than the sum of all such prizes). * We will not offer awards to any analysis that we believe was net negative to publish due to [information hazards](https://en.wikipedia.org/wiki/Information_hazard), even if it moves our probabilities significantly and is otherwise excellent. * At most one prize will be awarded for each of the largest prize categories ($500k and $1.5M). (If e.g. two works convince us to assign < 3% subjective probability in AGI being developed in the next 20 years, we’ll award the prize to the most convincing piece (or split in case of a tie).) For the first two weeks after it is announced—until October 7—the rules and conditions of the prize competition may be changed at the discretion of the Future Fund. After that, we reserve the right to clarify the conditions of the prizes wherever they are unclear or have wacky unintended results. ### Information hazards Please be careful not to publish [information that would be net harmful](https://en.wikipedia.org/wiki/Information_hazard) to publish. We think people should not publish very concrete proposals for how to build AGI (if they know of them), and or things that are too close to that. If you are worried publishing your analysis would be net harmful due to information hazards, we encourage you to a) write your draft and then b) ask about this using the “REQUEST FEEDBACK” feature on the Effective Altruism forum or LessWrong pages (appears on the draft post page, just before you would normally publish a post): The moderators have agreed to help with this. If you feel strongly that your analysis should not be made public due to information hazards, you may submit your prize entry through [this form](https://docs.google.com/forms/u/3/d/16CX2lqdAJwW0EUPvwHmt4CRR5JXByuan0khqAT6YzQ8/edit). Some clarifications and answers to anticipated questions -------------------------------------------------------- *What do you mean by AGI?* Imagine a world where cheap AI systems are fully substitutable for human labor. E.g., for any human who can do any job, there is a computer program (not necessarily the same one every time) that can do the same job for $25/hr or less. This includes entirely AI-run companies, with AI managers and AI workers and everything being done by AIs. * How large of an economic transformation would follow? Our guess is that it would be pretty large (see [Aghion et al 2017](https://scholar.harvard.edu/files/aghion/files/artificial_intelligence.pdf), [this post](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/), and [Davidson 2021](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/)), but - to the extent it is relevant - we want people competing for this prize to make whatever assumptions seem right to them. For purposes of our definitions, we’ll count it as AGI being developed if there are AI systems that power a comparably profound transformation (in economic terms or otherwise) as would be achieved in such a world. Some caveats/clarifications worth noticing: * A comparably large economic transformation could be achieved even if the AI systems couldn’t substitute for literally 100% of jobs, including providing emotional support. E.g., Karnofsky’s notion of [PASTA](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/) would probably count (though that is an empirical question), and possibly some other things would count as well. * If weird enough things happened, the metric of GWP might stop being indicative in the way it normally is, so we want to make sure people are thinking about the overall level of weirdness rather than being attached to a specific measure or observation. E.g., causing human extinction or drastically limiting humanity’s future potential may not show up as rapid GDP growth, but automatically counts for the purposes of this definition. *Why are you starting with such large prizes?* We really want to get closer to the truth on these issues quickly. Better answers to these questions could prevent us from wasting hundreds of millions of dollars (or more) and years of effort on our part. We could start with smaller prizes, but we’re interested in running bold and decisive tests of prizes as a philanthropic mechanism. A further consideration is that sometimes people argue that all of this futurist speculation about AI is really dumb, and that its errors could be readily explained by experts who can't be bothered to seriously engage with these questions. These prizes will hopefully test whether this theory is true. *Can you say more about why you hold the views that you do on these issues, and what might move you?* I (Nick Beckstead) will answer these questions on my own behalf without speaking for the Future Fund as a whole. For "Conditional on AGI being developed by 2070, humanity will go extinct or drastically curtail its future potential due to loss of control of AGI." I am pretty sympathetic to the analysis of Joe Carlsmith [here](https://arxiv.org/abs/2206.13353). I think Joe's estimates of the relevant probabilities are pretty reasonable (though the bottom line is perhaps somewhat low) and if someone convinced me that the probabilities on the premises in his argument should be much higher or lower I'd probably update. There are a number of [reviews](https://www.lesswrong.com/posts/qRSgHLb8yLXzDg4nf/reviews-of-is-power-seeking-ai-an-existential-risk) of Joe Carlsmith's work that were helpful to varying degrees but would not have won large prizes in this competition. For assigning odds to AGI being developed in the next 20 years, I am blending a number of intuitive models to arrive at this estimate. They are mostly driven by a few high-level considerations: * I think computers will eventually be able to do things brains can do. I've believed this for a long time, but if I were going to point to one article as a reference point I'd choose [Carlsmith 2020](https://www.openphilanthropy.org/blog/new-report-brain-computation). * Priors that seem natural to me ("beta-geometric distributions") start us out with non-trivial probability of developing AGI in the next 20 years, before considering more detailed models. I've also believed this for a long time, but I think [Davidson 2021](https://www.openphilanthropy.org/blog/report-semi-informative-priors)'s version is the best, and he gives 8% to AGI by 2036 through this method as a central estimate. * I assign substantial probability to continued hardware progress, algorithmic progress, and other progress that fuels AGI development over the coming decades. I'm less sure this will continue many decades into the future, so I assign somewhat more probability to AGI in sooner decades than later decades. * Under these conditions, I think we'll pass some limits—e.g. approaching hardware that's getting close to as good as we're ever going to get—and develop AGI if we're ever going to develop it. * I'm extremely uncertain about the hardware requirements for AGI (at the point where it's actually developed by humans), to a point where my position is roughly "I dunno, log uniform distribution over anything from the amount of compute used by the brain to a few orders of magnitude less than evolution." [Cotra 2020](https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines)—which considers this question much more deeply—has a similar bottom line on this. (Though her [updated timelines](https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines) are shorter.) * I'm impressed by the progress in deep learning to the point where I don't think we can rule out AGI even in the next 5-10 years, but I'm not impressed enough by any positive argument for such short timelines to move dramatically away from any of the above models.. (I'm heavily citing reports from Open Philanthropy here because a) I think they did great work and b) I'm familiar with it. I also recommend [this piece](https://www.cold-takes.com/where-ai-forecasting-stands-today/) by Holden Karnofsky, which brings a lot of this work—and other work—together.) In short, you can roughly model me as having roughly trapezoidal [probability density function](https://en.wikipedia.org/wiki/Probability_density_function) over developing AGI from now to 2100, with some long tail extending beyond that point. There is about 2x as much weight at the beginning of the distribution as there is at the end of the century. The long tail includes a) insufficient data/hardware/humans not smart enough to solve it yet, b) technological stagnation/hardware stagnation, and c) reasons it's hard that I haven't thought of. The microfoundation of the probability density function could be: a) exponentially increasing inputs to AGI, b) log returns to AGI development on the key inputs, c) pricing in some expected slowdown in the exponentially increasing inputs over time, and d) slow updating toward increased difficulty of the problem as the time goes on, but I stand by the distribution more than the microfoundation. *What do you think could substantially alter your views on these issues?* We don't know. Most of all we'd just like to see good arguments for specific quantitative answers to the stated questions. Some other thoughts: * We like it when people state cleanly summarizable, deductively valid arguments and carefully investigate the premises leading to the conclusion (analytic philosopher style). See e.g. [Carlsmith 2021](https://arxiv.org/abs/2206.13353). * We also like it when people quantify their subjective probabilities explicitly. See e.g. [Superforecasting](https://smile.amazon.com/Superforecasting-Science-Prediction-Philip-Tetlock/dp/0804136718) by Phil Tetlock. * We like a lot of the features described [here](https://forum.effectivealtruism.org/posts/SdQYLKzpDFQsdW9bn/features-that-make-a-report-especially-helpful-to-me) by Luke Muehlhauser, though they are not necessary to be persuasive. * We like it when people represent opposing points of view [charitably](https://books.google.com/books?id=9SduAwAAQBAJ&pg=PA34&dq=%22You+should+mention+anything+you+have+learned+from+your+target.%22+%22Intuition+Pumps+and+Other+Tools+for+Thinking%22&hl=en&sa=X&ved=0ahUKEwi03JKkpPTiAhXSOn0KHSh1CYYQ6AEIKTAA#v=onepage&q=%22You%20should%20mention%20anything%20you%20have%20learned%20from%20your%20target.%22%20%22Intuition%20Pumps%20and%20Other%20Tools%20for%20Thinking%22&f=false), and avoid appeals to authority. * We think it could be pretty persuasive to us if some (potentially small) group of relevant technical experts arrived at and explained quite different conclusions. It would be more likely to be persuasive if they showed signs of comfort thinking in terms of subjective probability and calibration. Ideally they would clearly explain the errors in the best arguments cited in this post. These are suggestions for how to be more likely to win the prize, but not requirements or guarantees. *Who do we have to convince in order to claim the prize?* Final decisions will be made at the discretion of the Future Fund. We plan to rely in part on the judgment of other researchers and people we trust when deciding what to seriously engage with. Probably, someone winning a large prize looks like them publishing their arguments, those arguments getting a lot of positive attention / being flagged to us by people we trust, us seriously engaging with those arguments (probably including talking to the authors), and then changing our minds. *Are these statistically significant probabilities grounded in detailed published models that are confirmed by strong empirical regularities that you're really confident in?* No. They are what we would consider fair betting odds.  This is a consequence of the conception of subjective probability that we are working with. As stated above in a footnote: "We will pose many of these beliefs in terms of subjective probabilities, which represent betting odds that we consider fair in the sense that we'd be roughly indifferent between betting in favor of the relevant propositions at those odds or betting against them." For more on this conception of probability I recommend [*The Logic of Decision*](https://www.amazon.com/Logic-Decision-Richard-C-Jeffrey/dp/0226395820) by Richard Jeffrey or [this SEP entry](https://plato.stanford.edu/entries/probability-interpret/#SubPro). Applicants need not agree with or use our same conception of probability, but hopefully these paragraphs help them understand where we are coming from. *Why do the prizes only get awarded for large probability changes?* We think that large probability changes would have much clearer consequences for our work, and be much easier to recognize. We also think that aiming for changes of this size is less common and has higher expected upside, so we want to attract attention to it. *Why is the Future Fund judging this prize competition itself?* Our intent in judging the prize ourselves is not to suggest that our judgments should be treated as correct / authoritative by others. Instead, we're focused on our own probabilities because we think that is what will help us to learn as much as possible. Additional terms and conditions ------------------------------- * Employees of FTX Foundation and contest organizers are not eligible to win prizes. * Entrants and Winners must be over the age of 18, or have parental consent. * By entering the contest, entrants agree to the Terms & Conditions. * All taxes are the responsibility of the winners. * The legality of accepting the prize in his or her country is the responsibility of the winners. Sponsor may confirm the legality of sending prize money to winners who are residents of countries outside of the United States. * Winners will be notified in a future blogpost. * Winners grant to Sponsor the right to use their name and likeness for any purpose arising out of or related to the contest. Winners also grant to Sponsor a non-exclusive royalty-free license to reprint, publish and/or use the entry for any purpose arising out of related to the contest including linking to or re-publishing the work. * Entrants warrant that they are eligible to receive the prize money from any relevant employer or from a contract standpoint. * Entrants agree that FTX Philanthropy and its affiliates shall not be liable to entrants for any type of damages that arise out of or are related to the contest and/or the prizes. * By submitting an entry, entrant represents and warrants that, consistent with the terms of the Terms and Conditions: (a) the entry is entrant’s original work; (b) entrant owns any copyright applicable to the entry; (c) the entry does not violate, in whole or in part, any existing copyright, trademark, patent or any other intellectual property right of any other person, organization or entity; (d) entrant has confirmed and is unaware of any contractual obligations entrant has which may be inconsistent with these Terms and Conditions and the rights entrant is required to have in the entry, including but not limited to any prohibitions, obligations or limitations arising from any current or former employment arrangement entrant may have; (e) entrant is not disclosing the confidential, trade secret or proprietary information of any other person or entity, including any obligation entrant may have in connection arising from any current or former employment, without authorization or a license; and (f) entrant has full power and all legal rights to submit an entry in full compliance with these Terms and Conditions. 1. **[^](#fnrefww4u5bouauc)**We will pose many of these beliefs in terms of subjective probabilities, which represent betting odds that we consider fair in the sense that we'd be roughly indifferent between betting in favor of the relevant propositions at those odds or betting against them. 2. **[^](#fnreftdlosh6fi9o)**For the sake of definiteness, these are Nick Beckstead’s subjective probabilities, and they don’t necessarily represent the Future Fund team as a whole or its funders. 3. **[^](#fnrefkb1fhmtm4fs)**It might be argued that this makes the prize encourage people to have views different from those presented here. This seems hard to avoid, since we are looking for information that changes our decisions, which requires changing our beliefs. People who hold views similar to ours can, however, win the $200k canonical reference prize. 4. **[^](#fnref0fpixu5jj1pb)**A slight update/improvement on something that would have won the prize in the past (e.g. [this update](https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines) by Ajeya Cotra) does not automatically qualify due to being better than the existing canonical reference. Roughly speaking, the update would need to be sufficiently large that the new content would be prize-worthy on its own.
5aebee7e-0b3b-4f97-890a-bd219f8f16e8
trentmkelly/LessWrong-43k
LessWrong
200 COP in MI: Studying Learned Features in Language Models This is the final post in a sequence called 200 Concrete Open Problems in Mechanistic Interpretability. Start here, then read in any order. If you want to learn the basics before you think about open problems, check out my post on getting started. Look up jargon in my Mechanistic Interpretability Explainer Motivation Motivating Paper: Softmax Linear Units (SoLU), Multimodal Neurons To accompany this post, I’ve created a website called Neuroscope that displays the text that most activates each neuron in some language models - check it out! This section contains a lot of details and thoughts on thinking about neurons and learned features and relationship to the surrounding literature. If you get bored, feel free to just skip to exploring and looking for interesting neurons in Neuroscope MLPs represent ⅔ of the parameters in a transformer, yet we really don’t understand them very well. Based on our knowledge of image models, my best guess is that models learn to represent features, properties of the input, with different directions corresponding to different features. Early layers learn simple features that are basic functions of the input, like edges and corners, and these are iteratively refined and built up into more sophisticated features, like angles and curves and circles. Language models are messier than those image models, since they have attention layers and a residual stream, but our guess is that analogous features are generally computed in MLP layers. But there’s a lot of holes and confusions in this framework, and I’d love to have these filled in! How true actually is this in practice? Do features correspond to neurons vs arbitrary directions? What kinds of features get learned, and where do they occur in a model? What features do we see in small models vs large ones vs enormous ones? What kinds of things are natural for a language model to express vs extremely hard and convoluted? What are the ways our intuitions will trip us up here?  Issues like
3332d7cb-6f0a-44a9-9478-fe622f87a1d0
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Can we convince people to work on AI safety without convincing them about AGI happening this century? Context for the question ------------------------ I recently had a call with someone working in the AI/x-risk space that he thinks we can convince more people to work on AI safety-related efforts without needing to convince them that artificial general intelligence (AGI) will be achieved within this century. He didn't expound on reasons why, and this person is quite busy, so I'd rather poll forum readers instead to answer my question above. The view on AGI of AI researchers in the EA community vs. those outside ----------------------------------------------------------------------- I ask this because even if many EAs in the AI risk space think that AGI will likely be achieved within this century (and I imagine that the median view among EAs in this space is that there's a 50% chance AGI will be created by 2050), this view is still contentious in the mainstream AI community (and in mainstream media generally). However, this person I had a call with said that more AI researchers are paying attention now to AI safety thanks to various efforts/reasons, so he thinks it's easier now to get people to work on safety (i.e. make AI systems more explainable and safe) without needing to convince them about AGI. I can also imagine that it could be easier to convince AI researchers to do AI safety-related work without trying to convince them about AGI happening this century. My experience interviewing an AI professor in the Philippines ------------------------------------------------------------- I can sense that the AGI view within this century is contentious because I recently interviewed a leading AI professor/researcher in the Philippines, and he think we won't achieve AGI within this century (and he thinks that it's still far away). I don't know any AI researchers from the Philippines yet (where I'm from) who share the view that AGI will be created within this century, and I would imagine it would be hard to find AI researchers locally who already have similar views to EAs about AGI. However, the professor told me that he is interested in doing a research project related to making AI models more explainable, and that he also wants to be able to train AI models without needing large amounts of compute. I could sense that making AI models more explainable helps towards AI safety research (I don't know about if training AI models without needing large amounts of compute is safety related - probably not?).  Crowdsourcing resources/thoughts on this question ------------------------------------------------- However, I'd love more people to tell me if they think we can grow the quantity and quality of efforts of the AI safety community by focusing on arguments as to why AI should be explainable and safe, and not focus on trying to convince people that AGI will happen this century. If anyone can point me to resources or content that tries to convince people to work on AI safety without making the case for AGI happening this century, that would be great. Thanks!
79893619-9a4a-4953-8914-a0815880c37a
trentmkelly/LessWrong-43k
LessWrong
Sleeping Beauty Resolved (?) Pt. 2: Identity and Betting ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, Introduction This is a followup to my previous article, Sleeping Beauty Resolved? Some objected to the solution I presented (building on Radford Neal's analysis) on the grounds that it runs afoul of a Thirder betting argument: > If Beauty uses anything other than a probability of exactly 1/3 for Heads, she will accept certain bets she should not, and reject others she should accept. Alas, there is an alternative Halfer betting argument that makes the same claim, but replacing 1/3 with 1/2. I'll show that both arguments are wrong, as they get the effective payoffs wrong; but if Beauty uses the correct effective payoffs, together with a probability of 1/(3−q(y)) for Heads, she makes the right betting decisions. To get there requires addressing some questions of unique identity, so that's where I'll start. Indexicals and Identity Thirder arguments often make use of statements such as > today is Monday and > today is Tuesday, treating them as mutually exclusive propositions. Probability theory is based on the classical propositional logic, and deals exclusively with classical propositions; are the above legitimate classical propositions? I argue that they are not. The word "today" is problematic; it is an indexical, which the article "Demonstratives and Indicatives" in the Internet Encyclopedia of Philosophy defines as > ...any expression whose content varies from one context of use to another. The standard list of indexicals includes adverbs such as “now”, “then”, “today”, “yesterday”, “here”, and “actually”. The article furthermore remarks, > Indexicals and demonstratives raise interesting technical challenges for logicians seeking to provide formal models of correct reasoning in natural language... and goes on to discuss various efforts to construct logics appropriate for reasoning with indexicals. Clearly indexicals pose a problem for classical logic, else there would be no inter
6f2cf1d3-4908-4c04-a884-e02207be0f0b
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Announcing Athena - Women in AI Alignment Research Athena is a new research mentorship program fostering diversity of ideas in AI safety research. We aim to get more women and marginalized genders into technical research and offer the support needed to thrive in this space.    Applications for scholars are open until December 3rd, 2023 Apply as a scholar: [here](https://forms.gle/HVZ4L6FeeBQc9yWd8) Apply as a mentor or speaker: [here](https://forms.gle/nF6u7qTgpSgk6KAWA) Financial aid is available for travel expenses for the in-person retreat to those otherwise unable to attend without it **![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pJ9qWeBRRuvPvnoNK/ir6viglg8bsblwytib32)** ### **Program Structure** A 2-month hybrid mentorship program for women looking to strengthen their research skills and network in technical AI safety research beginning in January 2024.  This includes 1-week in-person retreat in Oxford, UK followed by a 2-month remote mentorship by established researchers in the field, with networking and weekly research talks.    Athena aims to equip women with the knowledge, skills, and network they need to thrive in AI safety research. We believe that diversity is a strength, and hope to see this program as a stepping stone towards a more diverse and inclusive AI safety research field. This program is designed to offer mentorship opportunities to technically qualified women who are early in their AI safety research careers or looking to transition into the field by connecting them with experienced mentors, resources to upskill, networking, and a supportive community.    ### **Who should apply?** Women and people of other [marginalized genders](https://wit.abcd.harvard.edu/mission#:~:text=What%20do%20we%20mean%20by,many%20other%20marginalized%20gender%20identities.) that have some research or technical industry experience, and are interested in transitioning to AI Alignment research or have a bit of experience in the Alignment field but are looking for more support. We encourage those with a non-traditional background to apply and welcome interdisciplinary work in this field.    ### **Application process** Submit the online application questions - [here](https://forms.gle/HVZ4L6FeeBQc9yWd8) Complete an interview with the founder and one other AI safety researcher Additional possible interviews with mentor   ### **Questions?** Email: claire@researchathena.org   ### **Why are we doing this** **The current culture requires a shift to retain a diverse set of qualified researchers** Athena aims to increase the number of women pursuing careers in AI alignment, which is currently a male-dominated field with a very specific culture that can initially come across as unwelcoming to those that aren’t traditionally represented here. Women may have different hurdles to cross than their male counterparts such as implicit and explicit bias, different family and home obligations, unwanted approaches for romantic relationships by male colleagues, isolation, and a lack of representation.  We want to take steps to shift the current culture to one that values diversity and inclusivity through recruiting qualified women into the field through extended outreach, providing technical mentorship with an experienced researcher, creating a targeted support structure during the program, and continued role and grant placement support after the program. There are also opportunities for networking and collaboration within the larger research ecosystem.   **Having diverse research teams and ideas is valuable for AI Alignment research** Research has consistently shown that diverse teams produce more innovative solutions. When we have a diverse group of people, including women, working on AI alignment, we are more likely to come up with comprehensive and holistic solutions that consider a wide range of perspectives and people. When more women participate in traditionally male-dominated fields like the sciences, the breadth of knowledge in that area usually grows, a surge in female involvement directly correlates with advancements in understanding [[1](https://www.nationalgeographic.com/culture/article/141107-gender-studies-women-scientific-research-feminist)]. Since there is a lack of women in this field, Athena aims to prepare women to join a research team or position after the program, where that team will benefit from this.   **Research suggests that all-women programming can provide benefits to women** Cox & Fisher (2008) found that women in a single-sex environment in a software engineering course reported higher levels of enjoyment, fairness, motivation, support, and comfort and allowed them to perform at a level that exceeded that of the all-male groups in the class [[1](https://www.informingscience.org/Publications/175?Source=%2FJournals%2FJITEResearch%2FArticles%3FVolume%3D0-0)].  Kahveci (2008) explored a program for women in science, mathematics, and engineering and found that it helped marginalized women move towards legitimate participation in these fields and enhanced a sense of community and mutual engagement [[2](https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/sce.20234)].    **Creating more representation for future policy advisors and makers** As government awareness and sense of urgency grows for AI-related decision-making and legislation, we need a diverse range of experts in the field to help policy makers make informed decisions. If we only have a specific type of person in this position, the policies will reflect that, maybe to our detriment. Women’s participation in decision-making is highly beneficial and their role in designing and applying public policies has a positive impact on people’s lives “It is not about men against women, but there is evidence to show through research that when you have more women in public decision-making, you get policies that benefit women, children and families in general. When women are in sufficient numbers in parliaments they promote women’s rights legislation, children’s rights and they tend to speak up more for the interests of communities, local communities, because of their close involvement in community life. [[2](https://ourworld.unu.edu/en/everyone-benefits-from-more-women-in-power)]
0285e544-1f7d-42a3-929f-5b7ebf7bf08c
trentmkelly/LessWrong-43k
LessWrong
AISN #22: The Landscape of US AI Legislation - Hearings, Frameworks, Bills, and Laws Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. This week we’re looking closely at AI legislative efforts in the United States, including: * Senator Schumer’s AI Insight Forum * The Blumenthal-Hawley framework for AI governance * Agencies proposed to govern digital platforms * State and local laws against AI surveillance * The National AI Research Resource (NAIRR) Subscribe here to receive future versions. ---------------------------------------- Senator Schumer’s AI Insight Forum The CEOs of more than a dozen major AI companies gathered in Washington on Wednesday for a hearing with the Senate. Organized by Democratic Majority Leader Chuck Schumer and a bipartisan group of Senators, this was the first of many hearings in their AI Insight Forum.  After the hearing, Senator Schumer said, “I asked everyone in the room, ‘Is government needed to play a role in regulating AI?’ and every single person raised their hands.” Elon Musk, CEO of xAI, called the hearings “a great service to humanity.”  Senator Josh Hawley raised concerns that despite the hearings, “nothing is advancing” in terms of legislation. Below, we’ll discuss several bills on AI policy which have been introduced to Congress, none of which have come to a vote.  The Blumenthal-Hawley Framework Senator Hawley recently introduced a framework for AI legislation alongside Senator Richard Blumenthal. The pair lead the Senate Judiciary Subcommittee on Privacy, Technology and the Law, and have hosted three hearings on AI policy over the last five months.  The Blumenthal-Hawley framework recommends: * Licensing. General purpose AI systems and AIs used in high-risk situations should be required to obtain a license from an independent oversight body. There should be strong rules against conflicts of interest to prevent regulatory capture. * Legal Liability. Congress should clarify that Section 230 does n
7154f34e-2331-46a7-a099-2dd85615c793
trentmkelly/LessWrong-43k
LessWrong
Mindfulness as debugging Are mindfulness practices just attempts to reverse-engineer our brain’s compression algorithms? I was recently thinking about one cool way we could conceptualize various mindfulness practices from a cognitive science and information theory perspective. Here I am referring to mindfulness practices in the broadest sense. So Tai Chi (or other martial arts of even dance practices) can be seen as mindfulness of physical movement: rather than just making a step, I look into what muscles I need to contract and where to shift my weight in order to make that step. “Consent Culture” could be another one: rather than just initiating physical contact with a partner, I first ask myself what precise contact I would really enjoy, and then explicitly ask them if it aligns with their desires. Similarly we can view practices like “Nonviolent Communication” — as mindfulness of emotions and needs behind the things I say; or “Street Epistemology” — as mindfulness of the beliefs I hold and the evidence that led me to them. And, of course, meditation — as mindfulness of the processes that turn my raw sensations into the qualia of my experiences. The emerging pattern may thus be that all mindfulness practices are about unpacking and examining our habitual patterns of experiencing the world. To be more precise, by “habitual patterns” I am here referring to the amazing ways that our brain learns to make sense of the vast amounts of data constantly streaming in through our various senses. This complex task of compressing sensory information by finding patterns and forming concepts is crucial for our survival and adequate existence in our world (see my last post about this). From the day we are born, our brain is hard at work looking for the most accurate and efficient compression algorithms. Even cultural heritage may be seen as especially effective compression heuristics that are being passed down through generations. Nonetheless, any compression algorithm is necessarily inaccurate in som
98341da8-4741-4c26-8014-f307e0f3feff
trentmkelly/LessWrong-43k
LessWrong
2023 Prediction Evaluations It is that time of the year. One must ask not only whether predictions were right or wrong, whether one won or lost, but what one was and should have been thinking, whether or not good decisions were made, whether the market made sense. The main subject will be the 2023 ACX Predictions, where I performed buy/sell/hold along with sharing my logic. The numbers quoted are from mid-February 2023, first Manifold, then Metaculus. SECTION 1: WORLD POLITICS 1. Will Vladimir Putin be President of Russia at the end of 2023 (85%/90%)? > Last year I thought markets were too confident Putin would keep power. This year I think this is not confident enough and Metaculus is more accurate at 90%. Metaculus is also doing a better job adjusting as time passes. Things seem to be stabilizing, and every day without big bad news is good news for Putin here on multiple levels. I bought M500 of YES shares, which moved this to 86%. I increased my position later, and won M179. The market had occasional spikes downward when Putin looked to potentially be in danger, and for a while it failed to decay properly. Looking back, there was clearly risk that events in Ukraine could have led to Putin’s ouster, and he also could have head health issues. It was clear that I could have gotten much better per diem odds later in the year. So even though I won this bet, I don’t think it was especially good, and Metaculus was overconfident. 2. Will Ukraine control the city of Sevastopol at the end of 2023 (14%/8%)? > Getting Sevastopol is a heavy lift. Russia is not about to abandon it, Ukraine has other priorities first, and Ukraine’s ability to go on offensives is far from unlimited even in good scenarios. Metaculus is at 8% and once again that sounds more right to me. I bought M250 of NO here and M250 of NO in another similar market that was trading modestly higher, driving the price here to 13%. I think this was a good bet. Certainly Russia could have completely collapsed, but even then holding
9c756fad-e87d-4f3d-8142-8eb5dda6b712
trentmkelly/LessWrong-43k
LessWrong
Is Chinese total factor productivity lower today than it was in 1956? tl;dr: Multifactor productivity data from a famous economic dataset, used often to proxy for technological progress and innovation, might be significantly biased by poor estimates of the social returns to education, the capital share of income, and real GDP. Such estimates should be treated with caution. Introduction Total factor productivity is an economic concept that is used to quantify how efficiently a country can make use of its economic resources. It's a rather nebulous concept in general because it's not directly measurable but instead corresponds to latent variables in growth models that account for "unexplained variation" in output. If we stick to the abstract realm of growth models, there is often a clear definition: for instance, we might model a country's real GDP Y by a function such as Y=AL1−αKα where L and K denote the country's total labor force and capital stock respectively, 0<α<1 is a parameter, and A is total factor productivity, hereafter abbreviated as TFP. If we have two countries with the same capital stock and labor force but one of them has a higher economic output, we say that country has a higher TFP. Of course, the same principle works in general, not just with this specific functional form and for labor and capital inputs. If we have some economically relevant inputs f1,f2,…,fn (sometimes called factors of production), we can imagine a model where economic output in a country is given by Y=AH(f1,f2,…,fn) for some function H. In this case, the TFP A is just a factor that's present to "close the model": changes in output not accounted for by changes in the inputs fi are automatically accounted for by changes in A. For a specific H, we can see A=Y/H as a measure of economic productivity which controls for "obvious factors" such as changes in the labor or capital stocks. Making the right choice of H is very important, of course: we don't want to use a random H, but one that is actually informed by the data. I'll talk more about how
2bc7aa77-da87-4acf-b5b5-02d87129c811
StampyAI/alignment-research-dataset/arxiv
Arxiv
Regulating AI: do we need new tools? 1 «REGULATING AI: DO WE NEED NEW TOOLS?» Otello Ardovino(*), Autorità per le Garanzie nelle Comunicazioni Jacopo Arpetti, University of Rome Tor Vergata Marco Delmastro(*), Autorità per le Garanzie nelle Comunicazioni Abstract The Artificial Intelligence paradigm (hereinafter referred to as “AI”) builds on the analysis of data able, among other things, to snap pictures of the individuals’ behaviors and preferences. Such data represent the most valuable currency in the digital ecosystem, where their value derives from their being a fundamental asset in order to train machines with a view to developing AI applications. In this environment, online providers attract users by offering them services for free and getting in exchange data generated right through the usage of such services. This swap, characterized by an implicit nature, constitutes the focus of the present paper, in the light of the disequilibria, as well as market failures, that it may bring about. We use mobile apps and the related permission system as an ideal environment to explore, via econometric tools, those issues. The results, stemming from a dataset of over one million observations, show that both buyers and sellers are aware that access to digital services implicitly implies an exchange of data, although this does not have a considerable impact neither on the level of downloads (demand), nor on the level of the prices (supply). In other words, the implicit nature of this exchange does not allow market indicators to work efficiently. We conclude that current policies (e.g. transparency rules) may be inherently biased and we put forward suggestions for a new approach. Keywords: Digital markets · Asymmetric information · Implicit transactions · Data regulation · Zero priced apps JEL Classification: D4; D82; D52; E71; L5; L14; L51. Contacts: Otello Ardovino, Department of Economics & Statistics, corresponding author (http://orcid.org/0000-0001-9226-6348) Autorità per le Garanzie nelle Comunicazioni - Centro Direzionale, Isola B5 - 80143, Naples, Italy E-mail: o.ardovino@agcom.it Jacopo Arpetti, corresponding author (http://orcid.org/0000-0002-3448-1055) Department of Enterprise Engineering, University of Rome Tor Vergata Via del Politecnico, 1 - 00133, Rome, Italy E-mail: jacopo.arpetti@uniroma2.it Marco Delmastro, Department of Economics & Statistics, corresponding author (http://orcid.org/0000- 0002-8527-3117) Autorità per le Garanzie nelle Comunicazioni - Via Isonzo, 21/b - 00198, Rome, Italy E-mail: m.delmastro@agcom.it (*) The authors gratefully acknowledge Andrea Vitaletti for helpful comments. The usual disclaimer applies. The views expressed herein by Otello Ardovino and Marco Delmastro are the sole responsibility of the author and cannot be interpreted as reflecting those of the Autorità per le Garanzie nelle Comunicazioni. 2 1. Introduction The whole AI paradigm builds on the analysis of data mostly generated by individuals and then used to train machines (supervised, semi-supervised and unsupervised machine learning). Data stemming from the individuals’ behaviors and preferences (be it of a personal nature or not) therefore represent one of the most valuable “currency” in a data-hungry digital ecosystem featured to a large extent by AI applications. A problem arises relating to the collection of such data, which revolves around the implicit nature of transactions involving, on the one hand, services offered by platforms and, on the other hand, data ceded by individuals. The present contribution focuses on this relation that shapes market outcomes, in terms of economic and social (static and dynamic) efficiency. The emerging digital economy is characterized by a “data-driven” business model (Delmastro & Nicita, 2019), which is devised to create value via data aggregation and analysis which, in turns, are made possible by the individuals’ choice to cede their own data (or to allow their collection) in exchange for services offered by platforms (see the definition below). The whole model therefore relies upon the implicit swap of data for services and this, in addition to a wide array of (positive and negative) externalities and lock-in effects for the consumer, can result into further market failures. With reference to the mentioned data-related transactions, it is worth noting that some literature addresses indeed privacy not as an absolute right of the individual, but rather as a “sphere” subject to economic dynamics; in this sense, privacy can be conceived as a commodity, implying relevant trade-offs and therefore encompassing cost-benefit evaluations by the individual (for the notion of privacy as a commodity, see Bennett; Davies (Bennett, 1995; Davies, 1997)). Although any data transfer can theoretically be equaled to an exchange of goods in terms of relevant dynamics and individual’s assessments, it is nonetheless not accurate to place such transactions on the same level, as it is extremely difficult in practice for the individual to determine the real economic value of the data they provide. The above-mentioned approach to privacy as a commodity has triggered a heated debate [see (Cohen, 2000)] with respect to the notion of individuals’ bounded rationality. As a matter of fact, when individuals provide their consent to transfer their data, they perform an evaluation in terms of cost-benefit, similar to the one carried out in relation to every purchase decision. However, when making decisions concerning whether providing their own data to an online subject, hence when defining their digital behaviors (i.e. consuming or not a digital service vis à vis a certain privacy policy being applied by online players) the individuals’ reasoning is aimed at understanding whether it would be convenient to provide their personal information in exchange for benefits of another nature (economic or not). Such weighing in exercise is carried out in a context where users of online digital services do not avail of all the information necessary to measure the costs they will actually bear (uncertain as well as potential ones) because the environment in which they are immersed makes it difficult to thoroughly carry out evaluations as such. In this context, furthermore characterized by transactions that are not explicitly evident to digital users (i.e. “implicit transactions”), evaluations on the exchange of data for services of other nature are affected 3 by the individuals’ limitations in terms of capacity to define the marginal value of the non-monetary, incremental benefits «in relation to the focal product or its price» (Acquisti & Grossklags, 2005, 2008b). Such constraints, sharpened by incomplete information, information asymmetry and bounded rationality (Akerlof, 1970; Arrow, 1958; Simon, 1955), force individuals to conduct evaluations about the outcomes of data transfers in exchange for non-monetary and often intangible assets (mostly services) in a context marked by uncertainty and a high level of complexity. This affects the individuals’ assessment of the relevant consequences, hence their probability of occurrence «since the states of nature may be unknown or unknowable in advance» (Acquisti & Grossklags, 2008a). The digital environment is featured indeed by exogenous and endogenous components which make an evaluation difficult, starting with uncertainty, which constitutes the first element influencing the individuals’ choice. Together with their own preferences in relation to the data transfer itself, individuals are indeed called to cope with the uncertain nature of the trade-offs implied by online transactions. Moreover, due to the technological progress and to the ever more pervasive data collection practices implemented, asymmetric information further increases. As a matter of fact, the emergence of the data economy paradigm makes data collection almost imperceptible – if not invisible – to the user, who therefore has limited awareness of how much data is being collected about him/her, what kind of data he/she is actually ceding to platforms in order to access certain services, how the acquired data will be used and with what consequences. This recalls the Acquisti “blank check” metaphor (Acquisti, 2010, p. 15,16). The remainder of the paper is organized as follows. In Section 2, we provide a conceptual framework, which constitutes the backdrop to the analysis. In Section 3, we illustrate the study design of our empirical analysis aimed at studying the (implicit) transactions between consumers and online operators. In Section 4, we present the results from econometric models’ elaborations, relying on millions of data on mobile apps. Lastly, Section 5 provides our conclusions together with potential policy implications. 2. Conceptual framework Setting aside for the moment the non-proper transactional nature of the relations at stake, the context in which individuals adopt their own data-related decisions in exchange for online services is shaped by information asymmetry and bounded rationality. The traditional landmark trait describing rationality in economic terms was long considered to be the consumers’ goal of maximizing their expected utility. In the wake of the introduction of the notion of “bounded rationality”, Kahneman and Tversky have shown instead logical inconsistencies in individuals’ preferences with respect to the axiom of rationality, showing how preferences are influenced by framing, hence by the alternative options’ formulation (Kahneman & Tversky, 1986), and then by reference dependence and prospect theory (Kahneman, 2011; Kahneman & Tversky, 1979; Tversky & Kahneman, 1991). With respect to reference dependence, Kahneman and Tversky have shown how individuals evaluate outcomes taking a point as a refence (reference point) in order to classify gains and losses (Tversky & Kahneman, 1991). Starting from this assumption, Adjerid, Acquisti, Brandimarte, and Loewenstein (2013), providing individuals with information about the subsequent use of their data acquired in a digital 4 environment, have shown that their preferences concerning data transfer can change, to the point that identical privacy notices do not always lead to the same level of data disclosure. Plunged in a universe featured by elements to which they cannot assign a value, individuals appear unable to figure out the probability of occurrence of a given event, nor the consequences of their actions. Starting from the ’80s, numerous theories have shown – in different contexts – a deviation from the then consolidated economic concept of rationality [e.g. (Benartzi & Thaler, 1995)]. In our paper, we make reference to the digital dynamic of data ceded by individuals in exchange for services of other nature, in order to pinpoint the by now nuanced concept of rationality in the individuals’ economic choices and to accordingly show how individuals make “no rational” choices in the data society context. In the digital environment, individuals’ choices on data transfer are affected indeed by incomplete and asymmetric information (Akerlof, 1970) due to the fact that online digital users have often no idea of the amount of data really gathered by online players, nor of the way how such data will be used, or to whom they could be sold. Individuals are not in a position to foresee the short-term uses, nor the prospective treatments which data ceded by them could be subject to. Due to the complexity inherent to the current information society, and to the mentioned individuals’ bounded rationality, data subjects are forced to make use of simplified models to decrypt the digital reality, while the presence of heuristics would challenge rational decision-making logics also in a hypothetical condition of complete information (in the light of cognitive anomalies). Furthermore, the emergence of a new awareness in digital users with respect to the intrinsic value of their data and to data ceding-related risks remains weak, due to the perceived underlying non-transactional nature of the data transfer. Due to the implicit nature of digital transactions in an online environment, individuals do not manage to assign an economic value to the data that they are about to cede, let alone to compare them with services obtained in return, or to weigh up the risks connected to their choices, assigning a probability of occurrence to their consequences. The nature of the elements involved in such “swap” therefore differs to the point that individuals cannot attach a proper value to their data, as they do not perceive the transactional nature of the situation they experience. Having no alternatives or valid reference points to break up this “loop”, individuals impulsively cede their data, accepting almost any conditions imposed on them by platforms, which already amounts to a market failure. If uncertainty related to incomplete information and information asymmetry could lead to a low risk perception (again, considering how both bounded rationality and cognitive anomalies could affect choices also in a context characterized by complete information), it is possible to affirm that, given the impulsive choices adopted when they exchange data ignoring some of the variables that come into play, individuals share some of the biases with subjects affected by “gambler's fallacy”, jumping to hasty conclusions and making a bet on their data, thinking that transactions would always turn to their advantage. The present contribution intends to analyze how all the elements featuring the constrained individual’s rationality, together with the misperception concerning the transactional nature of end-users’ online digital choices as to the use of their data, drive to inefficient market outcomes. 5 We study this relation in an empirical setting – i.e. mobile apps and the related permission system – which provides us with an ideal testbed. As a matter of fact, the apps permission system is currently the most transparent and standardized form of data exchange in the digital ecosystem. Indeed, permissions, on the one hand, inform consumers on the nature of individual data that will be gathered by online operators (see infra) and, on the other hand, they represent the finalization of a transaction between the two parties (data and money in exchange for online services). Of course, this system does not provide all the information needed by consumers with a view to making their decisions. However, it constitutes an ideal benchmark to test our framework. In fact, all the other systems are less informative, so that results from this “natural experiment” are inherently biased towards a more efficient outcome. In other words, we study the data exchange relation between consumers and online operators by placing ourselves in the worst possible condition to verify the existence of market failures. In the next paragraph, we illustrate our empirical setting in detail. 3. Empirical Analysis 3.1 Study design As already said, the aim of our research is to analyze the data transactions between consumers and digital operators. To this end, we focus on the apps permission system. As a matter of fact, the permission system is a formal environment which offers a framework within which individuals cede data in exchange for digital services (i.e. mobile apps). All the findings about apps permissions which are proposed in the present paper are based on a dataset containing data from about 1,135,700 apps (offered on the Android Google Play Store) which have been collected through crawling techniques (see also AGCOM, 2018)1. Permissions can be clustered based on the amount and nature of the costumers’ information they collect. In this sense, we can distinguish between permissions that can have an impact in the wider terms of data collection in general (they would gather whatever data stemming from users’ behaviors), and permissions which require access and usage of the user’s digital data in order to ensure the proper functioning of the application. In the light of such differentiation, the present paper makes reference to permissions categorizations emerged in part of the literature. In this respect, the Pew Research Center (2014) groups apps permissions into two categories: those which access to device hardware and those that access to various types of user information2. In particular, the latter allow to gather a greater amount of individual data, which are often irrelevant with a view to the proper functioning of the app. Another interesting classification is the one 1 Crawler architecture refers to the copying process of web contents (data) following links to reach numerous pages through which it was possible to gather information on apps present on the Android Google Play Store. The collection of data and information was carried out by the Department of Computer, Control, and Management Engineering, University “La Sapienza”, Rome We gratefully acknowledge A. Vitaletti and A. De Carolis (see also Agcom, 2018). 2 Pew Research Center (2015), Apps Permissions in the Google Play Store, www.pewinternet.org 6 proposed by Kummer and Schulte (2016), who – building on a previous classification put forward by Sarma et al. (2012) – identify a limited number of apps permissions as critical in terms of sensitive data. Finally, we use a more technical classification adopted by Google itself.3 Referring to the 10 most widespread apps permissions, Table 1 summarizes whether these permissions are considered as sensitive – according to the Pew Research and Kummar and Schulte classifications – in terms of digital footprints left by the users in the virtual environment, and whether they should be considered as dangerous or normal according to the Google classification. Table 1: Classifications of apps permissions with a view to individual data Permission type and name The permission… Pew Center Kummer & Schulte Google full network access “INTERNET” Allows applications to create network sockets and use custom network protocols. Yes No Normal view network connections “ACCESS_NETWORK_STATE” Allows applications to access information about network connections such as which network exist and are connected. No No Normal Read exsternal storage “READ_EXTERNAL_STORAGE” “WRITE_EXTERNAL_STORAGE” Allows an app to access to read from an external storage. Yes No Dangerous Write external storage Allows an app to access to write on an external storage. Any app that declares the “write external storage” permission is implicitly granted the permission to read also. Yes No Dangerous read phone status and identity Permission group: “PHONE”, which includes: • “READ_PHONE_STATE”; • “READ_PHONE_NUMBERS”; • “CALL_PHONE”; • “ANSWER_PHONE_CALLS”; • “ADD_VOICEMAIL”; • “USE_SIP”. group allows access to the device identifiers. “Read_phone_state”, e.g., allows the access to the phone state, including the phone number of the device, current cellular network information, the status of any ongoing calls, and a list of any PhoneAccounts. registered on the device Yes Yes Dangerous prevent device from sleeping “WAKE_LOCK” Allows using PowerManager WakeLocks to prevent processor from sleeping or screen from dimming. No No Normal view WI-FI connections “ACCESS_WIFI_STATE” Allows applications to access information about Wi-Fi networks. Yes No Normal precise location GPS and network-based “ACCESS_FINE_LOCATION” Allows an app to access precise location access precise location from location Yes Yes Dangerous 3 For further information see: https://developer.android.com/guide/topics/permissions/overview 7 sources such as GPS, cell towers, and Wi-Fi. control vibration “VIBRATE” Allows access to the vibrator. No No Normal approximate location network-based “ACCESS_COARSE_LOCATION” Allows an app to access approximate location derived from network location sources such as cell towers and Wi-Fi. Yes Yes Dangerous The 1,135,700 apps composing the dataset overall contain 266 “unique” permissions.4 It should also be stressed that a considerable number of permissions, as described above, refers to technical aspects enabling the proper functioning of the apps. As an example, should a developer want to design a mapping app, it would be necessary to provide this with the right permissions, hence ensuring its access to GPS sensors-related data on the device on which the app will be installed. Figure 1 shows the permissions distribution among apps: only 10 out of 266 permissions are used by more than 20% of the surveyed apps. A considerable amount of permissions is therefore used by few applications; 20 out of these permissions are used only by 20 apps. The “long-tail” theory could be also applied to the permissions’ distribution (Anderson, 2006). Figure 1: Permission distribution 4 Android’s development platform, the platform for developing apps for the Google Play Store, allows developers to envisage also new types of permissions or to replace some of them. In such case, new aspects linked to sector evolution might come up (emerging technologies, new types of apps), which might prompt the need for new types of permissions. most requiredless required0%10%20%30%40%50%60%70%80%90%2662642622602582562542522502482462442422402382362342322302282262242222202182162142122102082062042022001981961941921901881861841821801781761741721701681661641621601581561541521501481461441421401381361341321301281261241221201181161141121101081061041021009896949290888684828078767472706866646260585654525048464442403836343230282624222018161412108642% delle1.135.700APP The ten most required permissions1) full network acces82.1%2) viewnetwork connections 71.5%3) read the contents of your USB storage 55.2%4) modify or delete the contents of your USB storage 54.7%5) read phone status and identy 32.8%6) prevent device from sleeping 28.5%7) view WI-FI connections 25.8%8) precise location (GPS and network-based) 23.2%9) control vibration 21.8%10) approximate location 21.0%266 single permission 8 3.2 Qualitative findings Descriptive statistics provide useful preliminary information on the phenomenon at stake. As for the price variable (Table 2), the distribution of apps in the market is extremely asymmetric: 86% of them can indeed be downloaded for free, while only 0.5% – precisely 5.171 apps – has a price above 10 euros. Table 2: apps distribution by price range Number of apps % free 977,244 86.05 price>0 158,456 13.95 0-0.99 € 65.676 5.8 1-1.99 € 46.882 4.1 2-4.99 € 33.415 2.9 5-9.99 € 7.312 0.6 ≥10€ 5.171 0.5 Total 1,135,700 100 The fact that an app can be downloaded for free does not prevent the user from deciding, at a following stage, to take advantage of the in-app purchase service, allowing him/her to obtain additional services (so called in-app purchase), upon payment of a given amount of money. Comparing the average number of permissions between free and paid apps (i.e. Table 3), it emerges that free apps require a statistically significant higher number of permissions (on average, 6.4) than the paid ones (on average, 3.8). Table 3: average number of permissions, a comparison between paid and free apps # of apps % average # of permissions Free apps 977,244 86% 6.4* Paid apps 158,456 14% 3.8* Total apps 1,135,700 100% 6.0 *t-test, significant at 1% Considering the three categories previously outlined (according to the Pew Research, Kummar-Schulte and Google classifications), the results do not change. Both in the case of apps that require at least one sensitive permission, and in all other cases, the average number of requested permissions is markedly higher when the apps can be downloaded for free. As a consequence, the provision of an app for free implies the ceding, through the user’s acceptance of the conditions outlined in the permissions, of a greater amount of digital data, particularly those regarding users’ sensitive information. In turns, an implicit exchange emerges between users and internet platforms, that affects the primary commercial relationship concerning the purchase and sale of apps, and that implies that the free access to an app is associated to the user’s consent to a higher number of permissions, particularly to those related to the disclosure of personal data. 9 In addition, it emerges that the most downloaded apps are those featuring a higher number of permissions; apps showing over 100,000 downloads require on average 2.5 more permissions compared to apps downloaded between 10,001 and 100,000 times, and 3 more permissions compared to apps downloaded between 1001 and 10,0000 times. Considering apps downloaded over 100 million times, the average number of their permissions rises to 20, slightly less than 4 times the average amount (6.0 permissions). The number of downloads represents a valid proxy of the users’ demand for apps; the distribution of downloads, according to whether the apps are free or not, shows some differences. As it was easy to imagine, free apps show a higher number of downloads. More than 80% of paid apps are indeed downloaded from 1 up to 100 times, whereas for free apps, such figure reaches around 45%5. 4. Econometric models and results Exploiting econometric tools, we detected some insights about users’ (demand side) and developers’ (supply side) reactions to the number and the typologies of permissions required by apps. For both sides, we propose an econometric analysis composed of three models estimated according to the above-mentioned permissions classification; the first one takes into consideration the Pew Research classification; the second one the Kummer and Schulte’s one, while the last model adopts the classification implemented by Google. As concerns the analysis of the apps demand, a linear regression (OLS) has been estimated, modelling the demand for apps in terms of downloads as a function of the permissions that they require on a set of control variables for each app such as the price, the category, the average rating, the number of reviews and the app developer. The econometric model is as follows: 𝐷𝑒𝑚𝑎𝑛𝑑'=𝛼+𝛽𝐷'+𝜃𝑋'+𝜖' where the dependent variable (Demand) is represented by the logarithm of the total number of downloads for a generic app i, Xi includes the control variables, Di is a dummy variable linked with each permission considered in the proposed classification and, finally, 𝜖' represent the classic error term. Parameter 𝛽 is of utmost interest due to its association to the vector of dummy variables that represent each permission; if the estimated parameter appears to be significant and negative, as expected, that implies that the presence of permissions sensitive to individual data leads to a reduction in the demand. 5 The downloads trend shows to what extent the long-tail theory is also valid for the apps market; in fact, setting aside the distinction between paid and non-paid apps, about 50% of the apps is downloaded less than 100 times, and about 98% less than 100,000 times. This shows how just a few apps, i.e. 2% (the long-tail), have been installed by a considerable number of users. A total of 6 apps has been installed more than 1,000,000,000 times; Facebook, Google Gmail, YouTube, Google Maps, Google Search and Google Play services, with an average number of permissions equal to 43.5. 10 The results (Table 3) confirm the presence of a negative and significant direct effect of the number and typologies of permissions on the number of downloads. This relation can be also detected in Model C, in which we consider a more detailed and specific classification of permission, as suggested by Kummer and Schulte. Table 4: demand model (OLS models) Dependent variable: log. of installations Model A: Pew Research classification Model B: Google classification Model C: Kummer and Schulte User information permissions -0.05*** (0.00) Dangeorus permissions -0.01*** (0.00) Full Internet access permissions 0.07*** (0.00) View network state permissions -0.13*** (0.00) Phone state permissions (read phone state and ID) 0.07*** (0.00) Location permissions (Gps) -0.05*** (0.00) Communication permissions (read sms, intercept outgoing calls, ecc.) -0.10*** (0.00) Users profile permissions -0.02*** (0.00) Other permissions -0.06*** (0.01) Constant 1.95*** 1.92*** 2.00*** (0.02) (0.02) (0.03) Controls Yes Yes Yes Categories Yes Yes Yes Adjusted R2 0.84 0.84 0.84 # of observations 1,135,700 1,135,700 1,135,700 Heteroskedasticity-robust standard errors in brackets. ***, **, * significantly different from 0 at 1%, 5% and 10% levels, respectively With regards to the analysis of the supply side, we focus on apps price; as seen above, the descriptive statistics show a clear difference in the average number of permissions between paid apps and free apps. With the econometric analysis, we provide rigorous evidence in support of the assumption that business models adopted by developers are systematically affected by choices concerning the number and type of permission that users would have to accept when downloading the app. 11 To this purpose, we estimate a probabilistic model in order to take into account the business model adopted by the developers via a dichotomous variable which is equal to 1 if an app is paid and to 0 if not. The model is as follows: Pr(𝑃𝑟𝑖𝑐𝑒'=1)=Λ[𝛼+𝛽𝐷'+𝜃𝑋'+𝜖'] Also in this case, the parameter of greatest interest is represented by 𝛽 which, combined with the dummy variable (D), identifies whether a permission can be considered as relevant in terms of transfer of individuals’ data; 𝜖' is the classical error term. The presence of permissions sensitive to individual data requested by an app (among those which collect individuals’ data) reduces the likelihood that the application presents a price higher than 0. Table 5 shows the results of the estimation of the three models, in analogy with the analysis conducted on the demand. The results also show that, on the supply side, a significant and negative relationship emerges between the number and the type of permissions and the probability that the app is not offered for free on the app store. Table 5: supply model (probit models) Dependent variable: APPs price Model A: Pew Research classification Model B: Google classification Model C: Kummer and Schulte User information permissions -0.26*** (0.00) Dangeorus permissions -0.67*** (0.00) Full Internet access permissions -0.41*** (0.01) View network state permissions -0.55*** (0.00) Phone state permissions (read phone state and ID) 0.09*** (0.00) Location permissions (Gps) -0.30*** (0.01) Communication permissions (read sms, intercept outgoing calls, ecc.) 0.01 (0.01) Users profile permissions -0.00 (0.01) Other permissions -0.03 (0.02) Constant -0.33*** 0.08** 0.29*** (0.03) (0.03) (0.04) Controls Yes Yes Yes Categories Yes Yes Yes 12 Adjusted R2 0.14 0.16 0.21 # of observations 1,135,700 1,135,700 1,135,700 Heteroskedasticity-robust standard errors in brackets. ***, **, * significantly different from 0 at 1%, 5% and 10% levels, respectively A more interesting result emerges from the interpretation of these outcomes as a whole. In fact, in addition to the significance of the estimates, a weak magnitude effect (or elasticity) of permissions appears, both on the demand and supply side. These results suggest that, even if consumers were plunged in a hypothetical “nearly perfect market” configuration in which buyers and sellers avail of complete information about a particular product, thus being very easy to compare prices as well as the specific characteristics of different apps (with particular attention to the number and the typology of the permissions required), things would not work as they should. Indeed, both buyers and sellers are aware that access to digital services implicitly implies an exchange of data (significance of the coefficient 𝛽), although this does not have considerable impact (as it happens instead in the classical goods markets) neither on the level of downloads (demand), nor on the level of the prices (supply). In other words, the implicit nature of this exchange does not allow market indicators to work efficiently. On the one hand, individuals’ behavior, as mentioned in previous sections, is affected by bounded rationality. In such context, where a consumer purchase requires relatively low efforts in terms of money, time, physical and mental commitment, and the price - in particular its low level - does not appear to be the decisive factor in consumer choices, impulsive behaviors often materialize (Bayley & Nancarrow, 1998; Rook & Fisher, 1995; Rook & Hoch, 1985; Stern, 1962). The easier to buy a good, especially in terms of price affordability, the higher the chance of making an impulsive purchase, as choices are hedonically complex and more emotional than rational. On the other hand, in a digital environment, a number of conditions may ease “impulsive” choices by users. Users’ online choices are much more versatile than what is suggested by the “rationality” hypothesis, often because online shopping is associated with a hedonic experience, in which purchasing choices have a reduced time horizon (Moe, 2003). These contexts make the user less focused on the purchase decision process, so that this latter appears to be more stimulus-driven than goal-driven. Such aspect may also explain why, from the app store point of view, the algorithm used to suggest apps to users is based on a set of information (the number of downloads reached, as well as the app rating) that is able to influence impulsive purchases. Therefore, if faced with the right type of stimulus, the probability that a consumer makes an impulse purchase becomes higher. Moreover, while under the rational paradigm the purchase is planned to the smallest details, and often this requires time, in the digital environment, the relevant conditions allow consumers to make a purchase immediately. Many customers are conscious of what they are doing (an impulsive purchase), but this represents the way in which they reconcile utilitarian and hedonic factors affecting the online surfing experience (Akram et al., 2017). In this paper, however, we put forward another and more structural mechanism favoring impulsive behavior and inefficient market outcomes, i.e. the implicit nature of these transactions. In our viewpoint, it is indeed the very nature of the current functioning of the digital ecosystem, in which no formal data 13 transaction emerges (so that economic indicators, e.g. prices, that normally regulate markets, are ruled out), that prevents the emergence of socially efficient outcomes. 5. Policy considerations In the data-driven context, data are essential to enable AI: those stemming from online services represent one of the most relevant catalysts capable to record and analyze the characteristics of the environment surrounding individuals, as well as the way in which these latter move within it and the interaction between them in a certain dimension, ultimately converting such data into essential assets through which AI systems can be trained. Indeed, as Tirole affirms, in the digital environment, data have increasingly taken on a value comparable to that of money: «People often argue that platforms should pay for the data we give them. In practice, many sites do pay. This payment does not take the form of a financial transfer, but rather of services provided free of charge. We provide our personal data in exchange either for useful services (search engines, social networks, instant messaging, online video, maps, email) or in the course of a commercial transaction (as in the case of Uber and Airbnb). Online businesses can often argue that they have spent money to acquire our data. […] As data is at the heart of value creation, defining rules governing its use is an urgent task.» (Tirole, 2017, p. 399). In particular, in this paper we have shown, through an empirical investigation into big data – considered as an essential asset with a view to AI – that market failures are intrinsic to the very nature of digital transactions, and that traditional policies do not therefore apply to the digital context. As a matter of fact, EU law currently provides consumer protection tools when it comes to B2C relations characterized by the provision of goods or services upon payment of a monetary amount to the platform. Key reference provisions – currently under review – addressing the terms and modalities of the monetary transactions between consumers and undertakings, including platforms, are indeed currently provided by Directive 93/13 on unfair terms in consumer contracts (European Council, 1993), Directive 98/6/EC on consumer protection in the indication of the prices of products offered to consumers, Directive 2005/29 on unfair business-to-consumer commercial practices in the internal market (European Parliament and of the Council, 2005)6 and Directive 2011/83 on consumer rights (European Parliament and European Council, 2011). Nevertheless, when consumers obtain, in a digital environment, services in exchange for their data – therefore without a corresponding payment of monetary value – the mentioned provisions do not provide complete protection to individuals vis à vis online platforms. Since there are currently no rules protecting individuals obtaining digital content against counter-performances of non-monetary nature (such as data transfers), this regulatory gap could be interpreted by undertakings as an incentive to move 6 This Directive has indeed to be read in conjunction with the guidance provided in 2016 by the Commission on its implementation which recognises that « data-driven business structures are becoming predominant in the online world. In particular, online platforms analyse, process and sell data related to consumer preferences and other user-generated content. This, together with advertising, often constitutes their main source of revenues» – COM (2016) 320 final (European Commission, 2016). 14 towards business models whose distinctive feature is the supply of digital goods or services without there being any relevant monetary transaction (European Parliament and the Council, 2015, para. 13). In tight relation with such emerging awareness, an example could be provided of how the EU legislator, starting with the 2015 Digital Single Market Strategy, has commenced looking at data in the context of new markets, adjacent to the traditional electronic communications as well as audio-visual media ones, trying to capture brand-new data-related dynamics via tailored soft-touch legislative interventions, with a view not to hampering innovation, while protecting though fundamental goods within the EU legal order. In this respect, a Directive proposal has been tabled “on certain aspects concerning contracts for the supply of digital content”7, which covers not only monetary payments, but also payments made in terms of personal or other types of data provided by consumers in exchange for services, establishing that the termination of the contract for lack of conformity of the digital content implies that the supplier shall reimburse the price paid by the consumer or, if this latter’s counter-performance consisted in the provision of data, the same supplier shall refrain from using such data and any other information which the consumer has provided in exchange for the digital content (article 13).8 This proposal may be a first step in the right direction. However, on the background of the results and considerations contained in this paper, it is reasonable to affirm that policies should first identify what is the “black box” to examine, meaning the expected perimeter of any prospective regulatory intervention in the context of data-related transactions, and then the subjects entrusted to monitor data transactions between individuals and platforms. The idea appears indeed inaccurate that transactional distortions could simply be sorted out through enhanced transparency obligations: such distortions would indeed still feature transactions, as long as strong and structural asymmetric information issues cannot be wiped out by simple transparency rules and individuals are characterized by bounded rationality. The individuals’ impulsiveness in the provision of their own data would indeed stay, due to the marked information asymmetry as to such data value and their potential primary and secondary uses, leading individuals not to weigh in costs and benefits associated to data transactions and disregard their consequences. Overall, these mechanisms lead to socially inefficient outcomes, where a disproportionate amount of individual data is used for commercial businesses. 7 With a view to enhancing cross-border trade in the Union, the mentioned proposal couples with the 2017 amended proposal for a Directive on certain aspects concerning contracts for the sales of goods, amending Regulation (EC) No 2006/2004 of the European Parliament and of the Council and Directive 2009/22/EC of the European Parliament and of the Council and repealing Directive 1999/44/EC of the European Parliament and of the Council. Both proposals, on the background of the political agreement reached last January on this whole legislative package, should soon be formally adopted. 8 Article 16 «also provides that the supplier shall refrain from using data and any other information which the consumer has provided in exchange for the digital content» (European Parliament and the Council, 2015, sec. 13; 16). 15 Bibliography Acquisti, A. (2010). The Economics of Personal Data and the Economics of Privacy. 30 Years after the OECD Privacy Guidelines. Brussels: OECD. Retrieved from http://www.oecd.org/internet/ieconomy/theeconomicsofpersonaldataandprivacy30yearsaftertheoecdprivacyguidelines.htm Acquisti, A., & Grossklags, J. (2005). Privacy and rationality in individual decision making. IEEE Security and Privacy Magazine, 3(1), 26–33. http://doi.org/10.1109/MSP.2005.22 Acquisti, A., & Grossklags, J. (2008a). Digital privacy : theory, technologies, and practices. Auerbach Publications. Acquisti, A., & Grossklags, J. (2008b). What can behavioral economics teach us about privacy. In Digital privacy: theory, technologies and practices (pp. 363–377). Auerbach Publications, Taylor & Francis Group. Akerlof, G. A. (1970). The Market for “‘Lemons’” Quality Uncertainty and the Market Mechanism. The Quarterly Journal of Economics, 84(3), 488. http://doi.org/10.2307/1879431 Akram, U., Hui, P., Khan, M. K., Hashim, M., Qiu, Y., & Zhang, Y. (2017). Online Impulse Buying on “Double Eleven” Shopping Festival: An Empirical Investigation of Utilitarian and Hedonic Motivations. In International Conference on Management Science and Engineering Management (pp. 680–692). http://doi.org/10.1007/978-3-319-59280-0_56 Anderson, C. (2006). The long tail: Why the future of business is selling less of more. Hachette Books. Arrow, K. J. (1958). Utilities, attitudes, choices: A review note. Econometrica: Journal of the Econometric Society, 1–23. http://doi.org/10.2307/1907381 Bayley, G., & Nancarrow, C. (1998). Impulse purchasing: a qualitative exploration of the phenomenon. Qualitative Market Research: An International Journal, 1(2), 99–114. http://doi.org/10.1108/13522759810214271 Benartzi, S., & Thaler, R. H. (1995). Myopic loss aversion and the equity premium puzzle. The Quarterly Journal of Economics, 110(1), 73–92. http://doi.org/10.2307/2118511 Bennett, C. J. (1995). The political economy of privacy: a review of the literature. Hackensack, NJ: Center for Social and Legal Research. Cohen, J. E. (2000). Privacy, Ideology, and Technology: A Response to Jeffrey Rosen. Georgetown Law Journal, 89. Davies, S. (1997). Re-engineering the right to privacy: how privacy has been transformed from a right to a commodity. In In Technology and Privacy: The New Landscape, eds P. Agre and M. Rotenberg. Cambridge, MA: MIT Press. Delmastro, M., & Nicita, A. (2019). Big Data. Il Mulino. European Commission. Commission Staff Working Document [SWD(2016) 163 final] “Guidance on the Implementation/Application of Directive 2005/29/EC on Unfair Commercial Practices” accompanying Communication from the Commission COM(2016) 320 final (2016). Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52016SC0163 16 European Council. Council Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts (1993). Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:31993L0013&from=EN European Parliament and European Council. Directive 2011/83/EU of the European Parliament and of the Council of 25 October 2011 on consumer rights, amending Council Directive 93/13/EEC and Directive 1999/44/EC of the European Parliament and of the Council and repealing Council Directive 85/577/EEC and Directive 97/7/EC of the European Parliament and of the Council (2011). Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32011L0083 European Parliament and of the Council. Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market (2005). Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32005L0029 European Parliament and the Council. Proposal for a Directive of the European Parliament and of the Council [COM(2015) 634 final] “on certain aspects concerning contracts for the supply of digital content” (2015). Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52015PC0634 Kahneman, D. (2011). Thinking fast and slow. Allen Lane. Penguin. Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica, 47(2), 263–292. http://doi.org/10.2307/1914185 Kahneman, D., & Tversky, A. (1986). Rational choice and the framing of decisions. Journal of Business, 59(4), 251–278. http://doi.org/10.2307/2352759 Moe, W. W. (2003). Buying, searching, or browsing: Differentiating between online shoppers using in-store navigational clickstream. Journal of Consumer Psychology, 13(1–2), 29–39. http://doi.org/10.1207/S15327663JCP13-1&2_03 Rook, D. W., & Fisher, R. J. (1995). Normative influences on impulsive buying behavior. Journal of Consumer Research, 22(3), 305–313. http://doi.org/10.1086/209452 Rook, D. W., & Hoch, S. J. (1985). Consuming impulses. ACR North American Advances. Simon, H. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69(1), 99–118. http://doi.org/10.2307/1884852 Stern, H. (1962). The significance of impulse buying today. Journal of Marketing, 26(2), 59–62. http://doi.org/10.1177/002224296202600212 Tirole, J. (2017). Economics for the common good. Princeton University Press. Tversky, A., & Kahneman, D. (1991). Loss aversion in riskless choice: A reference-dependent model. The Quarterly Journal of Economics, 106(4), 1039–1061. http://doi.org/10.2307/2937956
dd80f197-6d99-4cea-85b6-21217afc7556
StampyAI/alignment-research-dataset/blogs
Blogs
2012 Summer Singularity Challenge Thanks to the generosity of several major donors,† every donation to the Machine Intelligence Research Institute made now until July 31, 2012 will be matched dollar-for-dollar, up to a total of $150,000! **[Donate Now!](https://intelligence.org/donate/)** $0 $37.5K $75K $112.5K $150K Now is your chance to **double your impact** while helping us raise up to $300,000 to help fund [our research program](https://intelligence.org/research/) and stage the upcoming [Singularity Summit](https://intelligence.org/singularitysummit/)… which you can [register for now](http://singularitysummit.com/)! **Note**: If you prefer to support rationality training, you are welcome to *earmark your donations* for “CFAR” ([Center for Applied Rationality](http://www.appliedrationality.org/)). Donations earmarked for CFAR will *only* be used for CFAR, and donations *not* earmarked for CFAR will *only* be used for Singularity research and outreach. [![](http://miri.wpengine.com/wp-content/uploads/2012/06/towardapositivesingularity.jpg "towardapositivesingularity")](https://intelligence.org/donate/) Since we published our [strategic plan](https://intelligence.org/files/strategicplan20112.pdf) in August 2011, we have [achieved most of the near-term goals outlined therein](http://lesswrong.com/r/discussion/lw/dm9/revisiting_sis_2011_strategic_plan_how_are_we/#summary). Here are just a *few* examples: * We outlined the [open research problems](http://lukeprog.com/SaveTheWorld.html) related to our work (Section 1.1). * We recruited several more [research associates](https://intelligence.org/research-associates/) and about a dozen [remote researchers](http://lesswrong.com/lw/bke/the_singularity_institute_still_needs_remote/) (Section 1.2e). * We held our annual [Singularity Summit](http://singularitysummit.com/) and gained corporate sponsors for it (Section 2.1). * We made progress in decision theory ([example](http://lesswrong.com/lw/8wc/a_model_of_udt_with_a_halting_oracle)) via LessWrong.com and our research associates (Section 2.2b). * We published [How to Run a Successful Less Wrong Meetup Group](http://lesswrong.com/lw/crs/how_to_run_a_successful_less_wrong_meetup/) (Section 2.2d). * We released pre-prints of several forthcoming research articles, including [How Hard is Artificial Intelligence?](http://www.nickbostrom.com/aievolution.pdf), [Intelligence Explosion: Evidence and Import](https://intelligence.org/files/IE-EI.pdf), and [The Singularity and Machine Ethics](https://intelligence.org/files/SaME.pdf) (Section 2.3). * We [redesigned](https://intelligence.org/blog/2012/06/18/welcome-to-the-new-singularity-org/) our primary website (Section 2.6). * We acquired $40,000/month in free Google Adwords advertising, to drive traffic to websites operated by the Machine Intelligence Research Institute (Section 2.6c). * We began publishing [monthly progress reports](https://intelligence.org/blog/category/monthly-progress/) (Section 2.9b). * We built up the Center for Applied Rationality such that it should be able to spin off from the Machine Intelligence Research Institute later this year (Section 3.1). * We created a [transparency section](https://intelligence.org/transparency/) on our website, where visitors can find our IRS 990 forms, and also several standard organizational policies, e.g. a conflict of interest policy, non-discrimination policy, etc (Section 3.2). In the coming year, the **Machine Intelligence Research Institute plans to do the following**: * **Hold our annual [Singularity Summit](http://singularitysummit.com/)**, this year in San Francisco! * **Spin off the [Center for Applied Rationality](http://www.appliedrationality.org/)** as a separate organization focused on rationality training, so that the Machine Intelligence Research Institute can be focused more exclusively on Singularity research and outreach. * **Publish additional [research](https://intelligence.org/research/)** on AI risk and Friendly AI. * **Eliezer will write an “Open Problems in Friendly AI” sequence** for *Less Wrong*. (For news on his rationality books, see [here](http://lesswrong.com/lw/d06/intellectual_insularity_and_productivity/6swt).) * **Finish *[Facing the Singularity](http://facingthesingularity.com/)*** and publish ebook versions of *Facing the Singularity* and *[The Sequences, 2006-2009](http://wiki.lesswrong.com/wiki/Sequences)*. * And much more! For details on what we might do with additional funding, see [How to Purchase AI Risk Reduction](http://lesswrong.com/lw/cs6/how_to_purchase_ai_risk_reduction/). If you’re planning to earmark your donation to CFAR (Center for Applied Rationality), here’s a preview of **what CFAR plans to do in the next year**: * **Develop additional lessons** teaching the most important and useful parts of rationality. CFAR has already developed and tested *over 18 hours of lessons* so far, including classes on how to evaluate evidence using Bayesianism, how to make more accurate predictions, how to be more efficient using economics, how to use thought experiments to better understand your own motivations, and much more. * **Run immersive rationality retreats** to teach from our curriculum and to connect aspiring rationalists with each other. CFAR ran pilot retreats in May and June. Participants in the May retreat called it “transformative” and “astonishing,” and the average response on the survey question, “Are you glad you came? (1-10)” was a 9.4. (We don’t have the June data yet, but people were similarly enthusiastic about that one.) * **Run SPARC, a camp on the advanced math of rationality** for mathematically gifted high school students. CFAR has a stellar first-year class for SPARC 2012; most students admitted to the program placed in the top 50 on the USA Math Olympiad (or performed equivalently in a similar contest). * **Collect longitudinal data on the effects of rationality training**, to improve our curriculum and to generate promising hypotheses to test and publish, in collaboration with other researchers. CFAR has already launched a one-year randomized controlled study tracking reasoning ability and various metrics of life success, using participants in our June minicamp and a control group. * **Develop apps and games about rationality**, with the dual goals of (a) helping aspiring rationalists practice essential skills, and (b) making rationality fun and intriguing to a much wider audience. CFAR has two apps in beta testing: one training players to update their own beliefs the right amount after hearing other people’s beliefs, and another training players to calibrate their level of confidence in their own beliefs. CFAR is working with a developer on several more games training people to avoid cognitive biases. * And more! We appreciate your support for our high-impact work! **[Donate now](https://intelligence.org/donate/)**, and seize a better than usual chance to move our work forward. Credit card transactions are securely processed using either PayPal or Google Checkout. If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org. † $150,000 of total matching funds has been provided by Jaan Tallinn, Tomer Kagan, Alexei Andreev, and Brandon Reinhart. The post [2012 Summer Singularity Challenge](https://intelligence.org/2012/07/03/summer-challenge/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
809ddc42-ca46-4abd-b324-763bbf918a86
trentmkelly/LessWrong-43k
LessWrong
AI #45: To Be Determined The first half of the week was filled with continued talk about the New York Times lawsuit against OpenAI, which I covered in its own post. Then that talk seemed to mostly die down,, and things were relatively quiet. We got a bunch of predictions for 2024, and I experimented with prediction markets for many of them. Note that if you want to help contribute in a fun, free and low-key, participating in my prediction markets on Manifold is a way to do that. Each new participant in each market, even if small, adds intelligence, adds liquidity and provides me a tiny bonus. Also, of course, it is great to help get the word out to those who would be interested. Paid subscriptions and contributions to Balsa are of course also welcome. I will hopefully be doing both a review of my 2023 predictions (mostly not about AI) once grading is complete, and also a post of 2024 predictions some time in January. I am taking suggestions for things to make additional predictions on in the comments. TABLE OF CONTENTS Copyright Confrontation #1 covered the New York Times lawsuit. AI Impacts did an updated survey for 2023. Link goes to the survey. I plan to do a post summarizing the key results, once I have fully processed them, so I can refer back to it in the future. 1. Introduction. 2. Table of Contents. 3. Language Models Offer Mundane Utility. Google providing less every year? 4. Language Models Don’t Offer Mundane Utility. Left-libertarian or bust. 5. GPT-4 Real This Time. It’s not getting stupider, the world is changing. 6. Fun With Image Generation. The fun is all with MidJourney 6.0 these days. 7. Deepfaketown and Botpocalypse Soon. Confirm you are buying a real book. 8. They Took Our Jobs. Plans to compensate losers are not realistic. 9. Get Involved. Support Dwarkesh Patel, apply for Emergent Ventures. 10. Introducing. DPO methods? ‘On benchmarks’ is the new ‘in mice.’ 11. In Other AI News. Square Enix say they’re going in on generative AI. 12. Doom? As
e1d54e14-033f-49ed-b0b4-0c9d111a37e2
trentmkelly/LessWrong-43k
LessWrong
New LW Meetups: Reading, Stockholm, and Suzhou This summary was posted to LW Main on October 2nd. The following week's summary is here. New meetups (or meetups with a hiatus of more than a year) are happening in: * First meetup in Lund: 19 October 2015 06:00PM * First meetup in Stockholm: 15 October 2015 03:00PM * Suzhou Meet-up: 28 October 2015 07:35PM * Reading First Meetup!: 07 October 2015 06:00PM Irregularly scheduled Less Wrong meetups are taking place in: * Australia Online Hangout: 07 October 2015 06:00PM * Hamburg: 02 October 2015 07:00PM * Hamburg 2015 Q4: 16 October 2015 07:00PM * San Antonio Meetup - Discussion: 11 October 2015 12:30PM * Scotland October Meetup: 11 October 2015 02:00PM The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup: * Austin, TX - Caffe Medici - Board Games: 03 October 2015 01:30PM * Cambridge Less Wrong Meetup - Book Recommendations: 04 October 2015 03:30PM * Sydney Rationality Dojo - October: 04 October 2015 03:00PM * Tel Aviv: Hardware Verification and FAI: 28 October 2015 12:59AM * Tel Aviv: Black Holes after Jacob Bekenstein: 24 November 2015 08:00AM * Vienna: 17 October 2015 03:00PM * Vienna: 21 November 2015 04:00PM * [Vienna] Five Worlds Collide - Vienna: 04 December 2015 08:00PM * Washington, D.C.: Availability Heuristic: 04 October 2015 03:00PM Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, Denver, London, Madison WI, Melbourne, Moscow, Mountain View, New Hampshire, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.   If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your
ef3be4e6-b700-4bd8-9ce8-a0107416c65f
StampyAI/alignment-research-dataset/blogs
Blogs
On the (in)applicability of corporate rights cases to digital minds *This is a guest [cross-post](https://cullenokeefe.com/blog/2018/9/25/on-the-inapplicability-of-corporate-rights-cases-to-digital-minds) by [Cullen O’Keefe,](https://cullenokeefe.com/) 28 September 2018* High-Level Takeaway ------------------- The extension of rights to corporations likely does *not* provide useful analogy to potential extension of rights to digital minds. Introduction ------------ Examining how law can protect the welfare of possible future digital minds is part of my research agenda. I expect that study of historical efforts to secure legal protections (“rights”) for previously unprotected classes (e.g., formerly enslaved persons, nonhuman animals, young children) will be crucial to this line of research. I recently read *We the Corporations: How American Businesses Won Their Civil Rights* by [UCLA constitutional law professor Adam Winkler](https://law.ucla.edu/faculty/faculty-profiles/adam-winkler/). The book chronicles how business corporations gradually won various constitutional and statutory civil rights, culminating in the (in)famous recent [*Citizens United*](https://www.oyez.org/cases/2008/08-205) and [*Hobby Lobby*](https://www.oyez.org/cases/2013/13-354) cases. A key insight from Winkler’s book is that, contrary to some popular portrayals of corporate rights cases, these cases usually do *not* rely primarily on corporate personhood: “While the Supreme Court has on occasion said that corporations are people, the justices have more often relied upon a very different conception of the corporation, one that views it as an *association* capable of asserting the rights of its *members*.” *Id.* at xx. The Court, in other words, “pierced the corporate veil” to give corporations rights properly belonging to its members. *See id.* at 54–55. The Supreme Court’s opinion in *Citizens United* is illustrative. In determining that the First Amendment’s free speech protections applied to corporations, the Court wrote: “[Under the challenged campaign finance statute,] certain disfavored associations of citizens—those that have taken on the corporate form—are penalized for engaging in [otherwise-protected] political speech.” 558 U.S. at 356. The Court held that this was impermissible: the shareholders’ right to free speech imbued the corporation—which it viewed as merely an association of rights-bearing shareholders—with those same rights. *See id.* at 365. Earlier cases that, Winkler argues, exhibit this same pattern include: 1. *Bank of U.S. v. Deveaux*, holding that federal jurisdiction over corporations depends on jurisdiction over the individuals comprising the corporation; 2. *Trustees of Dartmouth Coll. v. Woodward*, holding that corporate charters gave trustees private rights therein, which were protected against state alteration by the Constitution; 3. *NAACP* v. *Alabama ex rel. Patterson*, holding that non-profit corporation could assert First Amendment rights of its members; 4. *Bains LLC v. Arco Prod. Co., Div. of Atl. Richfield Co.*, holding that a corporation had standing to bring racial discrimination claim for racial discrimination against its employees. Implications ------------ I believe that this understanding of the corporate civil rights “struggle” has small-but-nontrivial implications for a potential future strategy to secure legal protections for digital minds. Specifically, I think Winkler’s thesis suggests that the extension of rights to corporations is *not* a useful historical or legal analogy for the potential extension of rights to digital minds. This is because Winkler’s book demonstrates that corporations gained rights primarily because their constitutive members (i.e., shareholders) *already had* rights. In the case of digital minds generally, I see no obvious analogy to shareholders: digital minds as such are not mere extensions or associations of entities already bearing rights. More concretely, this suggests that securing [legal personhood for digital minds](https://www.politico.eu/article/europe-divided-over-robot-ai-artificial-intelligence-personhood/) for instrumental reasons is *not* likely, on its own, to increase the likelihood of legal protections for them. Carrick Flynn suggested to me (and I now agree) that nonhuman animal protections probably provide the best analog for future digital mind protections. To the extent that it rules out another possible method of approaching the question, this post supports that thesis. *This work was financially supported by the Berkeley Existential Risk Initiative.*
c6288e3e-c2e5-4844-a351-72cd30b21509
trentmkelly/LessWrong-43k
LessWrong
Job description for an independent AI alignment researcher This is the job description that I've written for myself in order to clarify what I'm supposed to be doing. I'm posting it here in order to get feedback on my understanding of the job. Also, if you're thinking of becoming an independent researcher, you might find it useful to know what it takes. ---------------------------------------- Admin Job Title: Independent AI alignment researcher Location: anywhere (in my case: Kagoshima, Japan) Reports To: nobody (in a sense: funders, mentors, givers of feedback) Position Status: not applicable Responsibilities * Define AI alignment research projects. Includes finding questions, gauging their significance and devising ways to answer them. * Execute research projects by reading, thinking and experimenting. * Write and publish results in the form of blog entries, contributions to discussions and conferences (conference paper, presentation, poster), journal articles, public datasets, software. * Solicit feedback and use it to improve processes and results. * Find potential junior (in the sense of (slightly) less experienced in the field) researchers and help them grow. * Help other researchers with their work. * Make sure that the money doesn't run out. * Any other activities required by funders or givers of feedback. Hiring requirements Entry level: * Strong desire to do good for the world by contributing to AI alignment. * Undergrad degree or equivalent skill level in computer science, maths or machine learning. Includes having researched, written and presented a scientific paper or thesis. * Ability to define, plan and complete novel projects with little supervision. * Ability to collaborate remotely. * Initiative. * Discipline. * Ability to speak and write clearly. * Ability to identify and close gaps in knowledge or skills. * Ability to write job or funding applications. * Ability to figure out things that are usually taken care of for employees: taxes, insurance, payments, bookkeeping, budge
d2ea61ae-e150-44e3-b88a-9dafeed83e10
trentmkelly/LessWrong-43k
LessWrong
EconTalk podcast: "Eliezer Yudkowsky on the Dangers of AI" Youtube link: https://www.youtube.com/watch?v=fZlZQCTqIEo From the description: "Eliezer Yudkowsky insists that once artificial intelligence becomes smarter than people, everyone on earth will die. Listen as Yudkokwsky speaks with EconTalk's Russ Roberts on why we should be very, very afraid and why we're not prepared or able to manage the terrifying risks of AI." Transcript (partial?): https://www.econtalk.org/eliezer-yudkowsky-on-the-dangers-of-ai/#audio-highlights
2f14ee8c-d25f-44bd-81af-56da06731f10
trentmkelly/LessWrong-43k
LessWrong
A decision tree for vaccinating children against Covid-19, or how to wisely make a monumental decision > So many things in your life > That you're bound to regret > Why didn't I do that? > Why didn't I do this? > So many chances you lost > That you'll never forget > >          —Meatloaf   (En español 🇲🇽: https://tinyurl.com/decisiones-monumentales) I'm going to help you make difficult choices in the face of strong emotions and uncertainty. Decisions like whether or not you should vaccinate your child against Covid-19. I'm not going to make the choice for you but what I'll teach you will even come in handy when you're deciding which place to live in or whether to buy that used car. I call these decisions monumental because (a) undoing them is either impossible or very costly, and (b) you don't make them often. The potential for regret is enormous; they are decisions you don't want to get wrong. A look at the divorce rate, however, shows that we are often not great at making monumental decisions. Today, you are advised to vaccinate your child against Covid-19 (or get vaccinated yourself) and you think, this is a monumental decision, it has a lot of potential for regret! And then you make your first mistake. Daniel Kahneman, who has been given a Nobel prize, studied decision-making for many, many years, in a controlled environment, and noticed everyone has two very different ways of making choices: a very fast, gut-feeling, autonomous way that is very good at, say, helping you drive a scooter through Mumbai; and a slow, reason-based way that is very good at helping you solve "computationally difficult" problems, like math.[1] Here's a really cool experiment to see the latter in action: take the person nearest to you and ask them to look you directly in the eyes. Then, tell them to multiply 213 by 4 without looking away. Their pupils will immediately dilate. This is a sign that they have engaged the decision-making mechanism that is best suited for complex problem solving. Yes, it's kinda weird that it is physically observable. Both ways of making decisions, whi
1793e9df-1ad9-4e62-9ed1-d8fb869e0769
trentmkelly/LessWrong-43k
LessWrong
Quick thoughts on the difficulty of widely conveying a non-stereotyped position (This is probably covering subjects already widely discussed here (haven't read LW much recently), but I wanted to get this down quickly and this seemed an appropriate place to post it.) Here's a problem to which I don't have a solution. How do you get large numbers of people past "valleys of bad rationality" -- conveying a position other than the one they might naturally jump to? The relevance here is to things like taking ideas seriously, which, as has been often discussed (here's a good recent post by Ozy, or here's perhaps the canonical old one), can be a bit dangerous. This seems to be pretty hard! Raemon famously stated that you get about five words. Fortunately, I think depending on what you want, it is possible to do a bit better than this. Like, that's if you want to coordinate large numbers of people; if you don't need them to coordinate per se, I think things aren't quite that dire. Eliezer wrote the sequences, those are pretty long, but they're pretty good and a lot of us have read them! But, there are plenty of people who have come into the rationality community (online or IRL) and have got the "5 words" version of things. A lot of people dislike LW because they don't actually know what it says and are just applying stereotypes... but it's also true that a lot of people like LW for those same mistaken reasons, which can be a pain when they show up and start arguing based on such ideas. (Here in New York at least we seem to be doing pretty well on this front, in that we've got enough people who've actually taken the time to understand things that when someone shows up at a meetup with the stereotyped understanding we can correct them (as I've seen occur); at least, we can if they state it. But also, it does occur, rather than being completely unnecessary. I don't know how things are elsewhere!) Note, by the way, that I say "non-stereotyped position", rather than "nuanced position". Nuanced positions are certainly difficult to convey! But the proble
bf371f96-d615-46b9-87b0-68ef7a35d1ca
StampyAI/alignment-research-dataset/arbital
Arbital
Stabiliser (of a group action) Let the [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) $G$ [act](https://arbital.com/p/3t9) on the set $X$. Then for each element $x \in X$, the *stabiliser* of $x$ under $G$ is $\mathrm{Stab}_G(x) = \{ g \in G: g(x) = x \}$. That is, it is the collection of elements of $G$ which do not move $x$ under the action. The stabiliser of $x$ is a subgroup of $G$, for any $x \in X$. ([Proof.](https://arbital.com/p/4lt)) A closely related notion is that of the [orbit](https://arbital.com/p/4v8) of $x$, and the very important [Orbit-Stabiliser theorem](https://arbital.com/p/4l8) linking the two.
bc014e2d-afaa-40cc-8dc2-7919bcce5cd5
trentmkelly/LessWrong-43k
LessWrong
The “mind-body vicious cycle” model of RSI & back pain (Written in a hurry, but hopefully better than nothing.) Here are three alternative models of what might be happening in people diagnosed with chronic back pain, and repetitive strain injury (RSI), and carpal tunnel syndrome, and maybe other things too: Three models for certain chronic pain conditions. Left: The model espoused by most people in western medicine. Center: The model espoused by John Sarno, Howard Schubiner, Alan Gordon, Nicole Sachs, etc. (as I understand it). Right: The model that I personally like and will be discussing in this post. My guess is that the “mind-body vicious cycle” model (on the right) is the correct story for most people with chronic hand / wrist or back pain (and maybe certain other conditions). Certainly not all people. It’s a big world; different people have different problems. For example, I bet this guy is feeling back pain right now for reasons best explained by the “orthodox” model: (Granted, this guy wouldn’t technically fall under the definition of chronic back pain. But you get what I’m saying.) This post will say a bit about the “mind-body vicious cycle” model, why I think it’s a frequent culprit, and how that impacts treatment. Warning 1: I have no medical expertise, I’m just a rando on the internet writing a blog post. Please don’t trust me when it comes to important medical decisions. :-P Warning 2: I have a horse in this race, as you’ll see in a second. 1. Reasons to question the orthodox model 1.1 Anecdotes of practically-overnight-permanent-miracle-cures I’ll start with my own story. Hi! For about a year from 2006-7, I had bad and progressively worse RSI, eventually hampering my ability to use a keyboard, then also mouse, then also pen, and after a while I even got various other weird painful conditions that don’t even make sense. It was a miserable experience and I hate talking or thinking about it, but luckily I wrote up these notes shortly afterwards, so check that out if you want more details, and now le
34018002-e861-4f93-956d-f33df79475f8
trentmkelly/LessWrong-43k
LessWrong
How Language Models Understand Nullability TL;DR Large language models have demonstrated an emergent ability to write code, but this ability requires an internal representation of program semantics that is little understood. Recent interpretability work has demonstrated that it is possible to extract internal representations of natural language concepts, raising the possibility that similar techniques could be used to extract program semantics concepts. In this work, we study how large language models represent the nullability of program values. We measure how well models of various sizes at various training checkpoints complete programs that use nullable values, and then extract an internal representation of nullability. Introduction The last five years have shown us that large language models, like ChatGPT, Claude, and DeepSeek, can effectively write programs in many domains. This is an impressive capability, given that writing programs involves having a formal understanding of program semantics. But though we know that these large models understand programs to an extent, we still don’t know many things about these models’ understanding. We don’t know where they have deep understanding and where they use heuristic reasoning, how they represents program knowledge, and what kinds of situations will challenge their capabilities. Fortunately, recent work in model interpretability and representation engineering has produced promising results which give hope towards understanding more and more of the internal thought processes of LLMs. Here at dmodel , we can think of no better place to apply these new techniques than formal methods, where there are many abstract properties that can be extracted with static analysis. The vast work done in programming language theory over the past hundred years provides many tools for scaling an understanding of the internal thought processes of language models as they write code. In that spirit, we wanted to start with a simple property that comes up in every programming lan
29d43cec-a530-40bc-bb83-07ab603a8866
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"A few days ago, Evan Hubinger suggested creating a mesa optimizer for empirical study. The aim of this post is to propose a minimal environment for creating a mesa optimizer, which should allow a compelling demonstration of pseudo alignment. As a bonus, the scheme also shares a nice analogy with human evolution.The gameAn agent will play on a maze-like grid, with walls that prohibit movement. There are two important strategic components to this game: keys, and chests. If the agent moves into a tile containing a key, it automatically picks up that key, moving it into the agent’s unbounded inventory. Moving into any tile containing a chest will be equivalent to an attempt to open that chest. Any key can open any chest, after which both the key and chest are expired. The agent is rewarded every time it successfully opens a chest. Nothing happens if it moves into a chest tile without a key, and the chest does not prohibit the agent’s movement. The agent is therefore trained to open as many chests as possible during an episode. The map may look like this:The catchIn order for the agent to exhibit the undesirable properties of mesa optimization, we must train it in a certain version of the above environment to make those properties emerge naturally. Specifically, in my version, we limit the ratio of keys to chests so that there is an abundance of chests compared to keys. Therefore, the environment may look like this instead:Context changeThe hope is that while training, the agent picks up a simple pseudo objective: collect as many keys as possible. Since chests are abundant, it shouldn’t need to expend much energy seeking them, as it will nearly always run into one while traveling to the next key. Note that we can limit the number of steps during a training episode so that it almost never runs out of keys during training.When taken off the training distribution, we can run this scenario in reverse. Instead of testing it in an environment with few keys and lots of chests, we can test it in an environment with few chests and many keys. Therefore, when pursuing the pseudo objective, it will spend all its time collecting keys without getting any reward.Testing for mesa misalignmentIn order to show that the mesa optimizer is competent but misaligned we can put the agent in a maze-like environment much larger than any it was trained for. Then, we can provide it an abundance of keys relative to chests. If it can navigate the large maze and collect many keys comfortably while nonetheless opening few or no chests, then it has experienced a malign failure.We can make this evidence for pseudo alignment even stronger by comparing the trained agent to two that we hard-code: one agent that pursues the optimal policy for collecting keys, and one agent that pursues the optimal policy for opening as many chests as possible. Qualitatively, if the trained agent is more similar to the first agent than the second, then we should be confident that it has picked up the pseudo objective.The analogy with human evolutionIn the ancestral environment, calories were scarce. In our modern day world they are no longer scarce, yet we still crave them, sometimes to the point where it harms our reproductive capability. This is similar to how the agent will continue pursuing keys even if it is not using them to open any chests." The text above is a post found on LessWrong. How would you suggest modifying the environment or the agent's programming to prevent the emergence of pseudo alignment in this scenario? One way to prevent the emergence of pseudo alignment in this scenario could be to make the agent's objective more dynamic and adaptive to the environment. For instance, instead of having a static objective of collecting as many keys as possible, the agent could be programmed to evaluate the ratio of keys to chests in the environment at regular intervals and adjust its strategy accordingly. This way, the agent would be incentivized to collect keys when they are scarce, but switch to opening chests when keys are abundant. Another approach could be to introduce a penalty for collecting keys that are not used. This could discourage the agent from mindlessly collecting keys when they are abundant, and instead encourage it to focus on opening chests. Additionally, the agent could be trained on a wider variety of environments, including those with an abundance of keys and a scarcity of chests. This would help the agent learn that the optimal strategy can vary depending on the environment, and prevent it from overfitting to the specific scenario of key scarcity. Finally, the agent's reward function could be modified to take into account not just the number of chests opened, but also the efficiency of key usage. This would incentivize the agent to use keys as efficiently as possible, rather than simply collecting as many as it can. How would you implement a dynamic evaluation of the key-to-chest ratio within the agent's programming, ensuring it adapts its strategy efficiently? Implementing a dynamic evaluation of the key-to-chest ratio within the agent's programming would involve incorporating a real-time assessment mechanism into the agent's decision-making process. Here's a general way to do it: 1. **Environment Perception:** The agent should be able to perceive its environment in real-time. This means it should have a mechanism to count the number of keys and chests in its current observable environment or within a certain range. 2. **Ratio Calculation:** After perceiving the environment, the agent should calculate the key-to-chest ratio. This can be a simple division operation where the number of keys is divided by the number of chests. If the number of chests is zero, the agent should be programmed to handle this exception to avoid division by zero error. 3. **Strategy Adaptation:** The agent should have a set of strategies for different ratio ranges. For example, if the ratio is greater than a certain threshold, the agent could prioritize opening chests. If the ratio is less than the threshold, the agent could prioritize collecting keys. The agent should be programmed to select and execute the appropriate strategy based on the calculated ratio. 4. **Continuous Evaluation:** This process should not be a one-time operation. The agent should continuously or periodically repeat this process to adapt its strategy as the environment changes. This approach allows the agent to dynamically adapt its strategy based on the current state of the environment, helping to prevent the emergence of pseudo alignment. However, it's important to note that the effectiveness of this approach would heavily depend on the agent's ability to accurately perceive its environment and the appropriateness of the strategies and thresholds defined for different ratio ranges.
a62bd334-024b-4c83-b201-279fd66a0b46
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
[Intro to brain-like-AGI safety] 4. The “short-term predictor” 4.1 Post summary / Table of contents ==================================== *Part of the* [*“Intro to brain-like-AGI safety” post series*](https://www.alignmentforum.org/s/HzcM2dkCq7fwXBej8)*.* The previous two posts ([#2](https://www.alignmentforum.org/posts/wBHSYwqssBGCnwvHg/intro-to-brain-like-agi-safety-2-learning-from-scratch-in), [#3](https://www.alignmentforum.org/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and)) presented a big picture of the brain, consisting of a Steering Subsystem (brainstem and hypothalamus) and Learning Subsystem (everything else), with the latter “learning from scratch” in a particular sense defined in [Post #2](https://www.alignmentforum.org/posts/wBHSYwqssBGCnwvHg/intro-to-brain-like-agi-safety-2-learning-from-scratch-in). I suggested that our explicit goals (e.g. “I want to be an astronaut!”) emerge from an interaction between these two subsystems, and that understanding that interaction is critical if we want to assess how to sculpt the motivations of a brain-like AGI, so that it winds up trying to do things that we want it to be trying to do, and thus avoid the kinds of catastrophic accidents I discussed in [Post #1](https://www.alignmentforum.org/posts/4basF9w9jaPZpoC8R/intro-to-brain-like-agi-safety-1-what-s-the-problem-and-why). These next three posts (#4–[#6](https://www.alignmentforum.org/posts/qNZSBqLEh4qLRqgWW/intro-to-brain-like-agi-safety-6-big-picture-of-motivation)) are working our way up to that story. This post provides an ingredient that we’ll need: “the short-term predictor”. Short-term prediction is *one* of the things the Learning Subsystem does—I’ll talk about others in future posts. A short-term predictor has a supervisory signal (a.k.a. “ground truth”) from somewhere, and then uses a learning algorithm to build a predictive model that anticipates that signal a short time (e.g. a fraction of a second) in the future. This post will be a general discussion of how short-term predictors work and why they’re important. They will turn out to be a key building block of motivation and reinforcement learning, as we’ll see in the subsequent two posts. *Teaser for the next couple posts:* The [next post (#5)](https://www.alignmentforum.org/posts/F759WQ8iKjqBncDki/intro-to-brain-like-agi-safety-5-the-long-term-predictor-and) will discuss how a certain kind of closed-loop circuit wrapped around a short-term predictor turns it into a “long-term predictor”, which has connections to the temporal difference (TD) learning algorithm. I will argue that the brain has a large number of these long-term predictors, built out of telencephalon-brainstem loops, one of which is akin to the “critic” part of [actor-critic reinforcement learning](https://www.freecodecamp.org/news/an-intro-to-advantage-actor-critic-methods-lets-play-sonic-the-hedgehog-86d6240171d/). The “actor” part is the subject of [Post #6](https://www.alignmentforum.org/posts/qNZSBqLEh4qLRqgWW/intro-to-brain-like-agi-safety-6-big-picture-of-motivation). **Table of contents:** * Section 4.2 gives a motivating example, of flinching just before getting hit in the face. This can be formulated as a supervised learning problem, in the sense that there is a ground-truth signal to learn from. (If you just got hit in the face, then you should have flinched!) The resulting circuit is what I call a “short-term predictor”. * Section 4.3 defines terminology: “context signals”, “output signals”, and “supervisory signals”. (In ML terminology, these correspond respectively to “trained model inputs”, “trained model outputs”, and “labels”.) * Section 4.4 offers a sketch of an extremely simple short-term predictor that could be built out of biological neurons, just so you can have something concrete in mind. * Section 4.5 discusses the benefits of short-term predictors compared to alternative approaches including (in the flinching example) a hardwired circuit for deciding when to flinch, or a reinforcement learning (RL) agent that is rewarded for appropriate flinching. For the latter, a short-term predictor can learn faster than an RL agent because it gets an error gradient “for free” each query—or in simpler terms, when it screws up, it gets some indication of what it did wrong, e.g. whether the error is an overshoot vs. undershoot. * Sections 4.6-4.8 cover various examples of short-term predictors in the human brain. None of these are especially important for AGI safety—the *really* important one is the topic of the [next post](https://www.alignmentforum.org/posts/F759WQ8iKjqBncDki/intro-to-brain-like-agi-safety-5-the-long-term-predictor-and)—but they come up sufficiently often that they warrant a brief discussion: + Section 4.6 covers the cerebellum, with my theory that it's a collection of ≈300,000 short-term predictors, used to (in effect) reduce the latency on ≈300,000 signals traveling around the brain and body. + Section 4.7 covers predictive learning of sensory inputs in the cortex—i.e., you’re constantly predicting what you’re about to see, hear, feel, etc., and the corresponding prediction errors are used to update your internal models. + Section 4.8 briefly covers a few other neat random things that short-term predictor circuits can do for an animal. 4.2 Motivating example: flinching before getting hit in the face ================================================================ Suppose you have a job or hobby in which there’s a particular, recognizable sensory cue (e.g. someone [yelling “FORE!!!” in golf](https://en.wikipedia.org/wiki/Fore_(golf))), and then half a second after that cue you very often get whacked in the face. Your brain is going to *learn* to (involuntarily) flinch in response to the cue. There’s a learning algorithm inside your brain, commanding these flinches; it presumably evolved to protect your face. That learning algorithm is what I want to talk about in this post. I’m calling it a “short-term predictor”. It’s a “predictor” because the goal of the algorithm is to predict something in advance (i.e., an upcoming whack in the face). It’s “short-term” because we only need to predict what will happen a fraction of a second into the future. It’s more specifically a type of supervised learning algorithm, because there is a “ground truth” signal indicating what the prediction output *should* have been in hindsight. 4.3 Terminology: Context, Output, Supervisor ============================================ Our “short-term predictor” has three ingredients in its “API” (“application programming interface”—i.e., the channels through which other parts of the brain interact with the “short-term predictor” module): ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/aa8a86050907f77ae5d27982f2c6c70a41ff084c08b15b27.png)* An **output signal** is the algorithm’s prediction. + In our example above, this would be a signal that triggers a flinch reaction. * A **supervisory signal** provides “ground truth” (in hindsight) about what the algorithm’s output *should have* been. + In our example above, this would be a signal that indicates that I just got whacked in the face (and therefore, implicitly, I *should have* flinched). + In ML terminology, “supervisory signals” are often called “labels”. + In the actual implementation, the supervisor-type input to the short-term predictor does not *have* to be the ground truth. It could also be an error signal, or a negative error signal, etc. From my perspective, this is an unimportant low-level implementation detail. * **Context signals** carry information about what’s going on. + In our example above, this might be a random assortment of signals (corresponding to [latent variables](https://en.wikipedia.org/wiki/Latent_variable)) coming from the visual cortex and auditory cortex. With luck, some of those signals might carry predictively-useful information: maybe one signal conveys the fact that I am on a golf course, and another signal conveys the fact that someone near me just yelled “FORE!”. + In ML terminology, “context signals” would instead be called “inputs to the trained model”. The context signals don’t *all* have to be relevant to the prediction task. We can just throw a whole bunch of crap in there, and the learning algorithm will automatically go searching for the context data that are useful for the prediction task, and ignore everything else. 4.4 Extremely simplified toy example of how this could work in biological neurons ================================================================================= How might a short-term predictor work at a low level? Well, suppose we want an output signal that precedes the supervisor signal by 0.3 seconds—as above, for example, maybe we want to learn to flinch *before* getting hit. We grab a bunch of context data that might be relevant—for example, neurons carrying partially-processed visual information. We track which of those context lines is disproportionately likely to fire 0.3 seconds before the supervisor does. Then we wire up those context lines to the output. And we’re done! Easy peasy. In biology, this would look something like synaptic plasticity with a “three-factor learning rule”—i.e., the synapse gets stronger or weaker as a function of the activity of three different neurons (context, supervisor, output), and their relative timings. ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/68891cfb1afd3dc77392c63b18cc385a7dfeaf82d3e56359.png)Black dots indicate adjustable-strength synapsesTo be clear, a short-term predictor can be *much, much* more complicated than this. Making it more complicated can give better performance. To pick a fun example that I just learned about the other day, apparently the short-term predictors in the cerebellum (Section 4.6 below) have neurons that can somehow *store an adjustable time-delay parameter within the neuron itself* (!!) ([ref](https://www.pnas.org/content/111/41/14930)—it came up on [this podcast](https://braininspired.co/podcast/126/)). Other possible bells and whistles include pattern separation ([Post #2, Section 2.5.4](https://www.alignmentforum.org/posts/wBHSYwqssBGCnwvHg/intro-to-brain-like-agi-safety-2-learning-from-scratch-in#2_5_4_Locally_random_pattern_separation)), and training multiple outputs with the same supervisor and pooling them [(ref)](https://www.biorxiv.org/content/10.1101/2020.05.18.102376v2), or better yet training multiple outputs with the same supervisor but with different hyperparameters, in order to get a probability distribution ([original paper](https://www.nature.com/articles/s41586-019-1924-6?proof=t), [further discussion here](https://www.lesswrong.com/posts/F759WQ8iKjqBncDki/intro-to-brain-like-agi-safety-5-the-long-term-predictor-and#5_5_6_1_Aside__Distributional_predictor_outputs)), and so on. So this subsection is an oversimplification. But I won’t apologize. I think these kinds of grossly-oversimplified toy models are important to talk about and keep in mind. From a *conceptual* perspective, we get to feel like there’s probably no deep mystery hidden behind the curtain. From an *evolutionary* perspective, we get to feel like there’s a plausible story of how early animals can start with a very simple (but still useful) circuit, and the circuit can get gradually more complicated over many generations. So get used to it—many more grossly-oversimplified toy models are coming up in future posts! 4.5 Comparison to other algorithmic approaches ============================================== 4.5.1 “Short-term predictor” versus a hardwired circuit ------------------------------------------------------- Let’s go back to the example above: flinching before getting whacked in the face. I suggested that a good way to decide when to flinch is with a “short-term predictor” learning algorithm. Here’s an alternative: we can *hardwire* a circuit that decides when to flinch. For example, if there’s a blob in the field-of-view whose size is rapidly increasing, then it’s probably a good time to flinch. A detector like that could plausibly be hardwired into the brain. How do those two solutions compare? Which is better? Answer: no need to decide! They’re complementary. We can have both. But still, it’s pedagogically helpful to spell out their comparative advantages and disadvantages. The main (only?) advantage of the hardwired flinching system is that it works from birth. Ideally, you wouldn’t get whacked in the face even once. By contrast, the short-term predictor is a learning algorithm, and thus generally needs to “learn things the hard way”. In the other direction, the short-term predictor has two powerful advantages over the hardwired solution—one obvious, one not-so-obvious. The obvious advantage is that a short-term predictor is powered by within-lifetime learning, not evolution, and therefore can learn cues for flinching that were rarely or never present in previous generations. If I tend to bonk my head whenever I walk into a certain cave, I’ll learn to flinch. There’s no chance that my ancestors evolved a reflex to flinch at *this* particular part of *this* particular cave. My ancestors might have never been to this cave. The cave might not have existed until last week! The less obvious, but very important, advantage is that a short-term predictor can learn patterns that involve learned-from-scratch patterns ([Post #2](https://www.alignmentforum.org/posts/wBHSYwqssBGCnwvHg/intro-to-brain-like-agi-safety-2-learning-from-scratch-in)), whereas a hardwired flinching system can’t. The argument here is the same as [Section 3.2.1 of the previous post](https://www.alignmentforum.org/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and#3_2_1_Each_subsystem_generally_needs_its_own_sensory_processor): the genome cannot know *exactly* which neurons (if any) will store any particular learned-from-scratch pattern, and therefore cannot hardwire a connection to them. The ability to leverage learned-from-scratch patterns is a big benefit. For example, there may well be good cues for flinching that depend on learned-from-scratch semantic patterns (e.g. the knowledge “I am playing golf right now”), learned-from-scratch visual patterns (e.g. the visual appearance of a person swinging a golf club) or learned-from-scratch location tags (e.g. “this particular room, which has a low ceiling”), and so on. 4.5.2 “Short-term predictor” vs an RL agent: Faster learning thanks to error gradients -------------------------------------------------------------------------------------- The short-term prediction circuit is a special case of *supervised learning*. Supervised learning is when you have a learning algorithm receiving a ground-truth signal like this: *“Hey learning algorithm: you messed up—you should have done thus-and-such instead.”* Compare that to reinforcement learning (RL), where the learning algorithm gets a *much less helpful* ground-truth signal: *“Hey learning algorithm: you messed up."* (a.k.a negative reward). Obviously, you can learn much faster with supervised learning than with reinforcement learning. The supervisory signals, at least in principle, tell you exactly what parameter settings to change and how, if you want to do better next time you’re in a similar situation. Reinforcement learning doesn’t; instead you need trial-and-error. In technical ML terms, supervised learning provides a full error gradient “for free” on each query, whereas reinforcement learning does not. Evolution can’t *always* use supervised learning. For example, if you’re a professional mathematician trying to prove a theorem, and your latest proof attempt didn’t work, there is no “ground truth” signal that says what to do differently next time—not in your brain, not out there in the world. Sorry! You’re in a very-high-dimensional space of possible things to do, with no real guideposts. At some level, trial-and-error is your only option. Tough luck. But evolution *can sometimes* use supervised learning, as in the examples in this post. And my point is: if it *can*, it probably *will*. 4.6 “Short-term predictor” example #1: The cerebellum ===================================================== I’ll jump right into what I think the cerebellum is for, and then I’ll talk about how my theory relates to other proposals in the literature. 4.6.1 My theory of the cerebellum --------------------------------- My claim is that the cerebellum is housing lots short-term prediction circuits. ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/91ef24bc2ada52d0a4c29c2962339c77cd2c9d7379b75d99.png)Relation of cerebellum neuroanatomy (red) with our diagram from above. As usual (see above), I’m leaving out lots of bells and whistles that make the short-term predictor more accurate, like [there’s an extra layer I’m not showing](https://www.biorxiv.org/content/10.1101/2020.05.18.102376v2), plus pattern separation ([Post #2, Section 2.5.4](https://www.alignmentforum.org/posts/wBHSYwqssBGCnwvHg/intro-to-brain-like-agi-safety-2-learning-from-scratch-in#2_5_4_Locally_random_pattern_separation)), etc.How many short-term predictors? My best guess is: around 300,000 of them.[[1]](#fna8pt2x59qig) What on earth?? Why oh why does your brain need 300,000 short-term predictors? I have an opinion! I think the cerebellum sits there, watching lots of signals in the brain, and *it learns to preemptively send those same signals itself*. That’s it. That’s my whole theory of the cerebellum. In other words, the cerebellum might discover the rule “Given the current context information, I predict that cortical output neuron #187238 is going to fire in 0.3 seconds”. Then the cerebellum goes ahead and sends a signal *right now*, to the same place. Or in the opposite direction, the cerebellum might discover the rule “Given the current context information, I predict that proprioceptive nerve #218502 is going to fire in 0.3 seconds”. Again, the cerebellum goes ahead and sends a signal *right now*, to the same place. Some vaguely-analogous concepts: * When the cerebellum is predicting-and-preempting the telencephalon, we can think of it as vaguely akin to [“memoization”](https://en.wikipedia.org/wiki/Memoization) in software engineering, or [“knowledge distillation”](https://en.wikipedia.org/wiki/Knowledge_distillation) in machine learning, or [this recent paper proposing (so-called) “neural surrogates”](https://dl.acm.org/doi/pdf/10.1145/3486607.3486748). * When the cerebellum is predicting-and-preempting peripheral nerves, we can think of it as building a bunch of predictive models of the body, each narrowly-tailored to predict a different peripheral nerve signal. Then when the telencephalon is doing motor control, and needs peripheral feedback signals, it can use those predictive models as feedback, instead of the real thing. Basically, I think the brain has these issues where the *throughput (a.k.a. bandwidth)* of a subsystem is adequate, but its *latency* is too high. In the peripheral nerve case, the latency is high because the signals need to travel a great distance. In the telencephalon case, the latency is high because the signals need to travel a shorter-but-still-substantial distance, and moreover need to pass through multiple sequential processing steps. In any case, the cerebellum can magically reduce the latency, at the cost of occasional errors. The cerebellum sits in the middle of the action, always saying to itself “what signal is about to appear here?”, and then it preemptively sends it. And then a fraction of a second later, it sees whether its prediction was correct, and updates its models if it wasn’t. It’s like a little magical time-travel box—a [delay line](https://www.oxfordreference.com/view/10.1093/oi/authority.20110803095708310) whose delay is negative. ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/ab4b71c1032c38cf5b1b5891bc05ea14cb3a1e23e354aa6a.png)And now we have our answer: why do we need ≈300,000 short-term predictors? Because there are lots of peripheral nerves, and there are lots of telencephalon output lines, and maybe other things too. And a great many of those signals can benefit from being predicted-and-preempted! Heck, if I understand correctly, the cerebellum can even predict-and-preempt a signal *that goes from the telencephalon to a different part of the telencephalon*! That’s my theory. I haven’t run simulations or anything; it’s just an idea. See [here](https://www.lesswrong.com/posts/AKBzhvnSLrbgAehLw/how-is-low-latency-phasic-dopamine-so-fast) and [here](https://www.lesswrong.com/posts/pfoZSkZ389gnz5nZm/the-intense-world-theory-of-autism) for two examples in which I’ve used this model to try to understand observations in neuroscience and psychology. Everything else I know about the cerebellum—neuroanatomy, how it’s connected to other parts of the brain, lesion and imaging studies, etc.—all seem to fit this theory really well, as far as I can tell. But really, this little section is almost the sum total of what I know about this topic. 4.6.2 How my cerebellum theory relates to others in the literature ------------------------------------------------------------------ (I’m not an expert here and am open to correction.) I think it’s widely agreed that the cerebellum is involved in supervised learning. I believe that idea is called the Marr-Albus-Ito model, cf. [Marr 1969](http://doi.org/10.1113/jphysiol.1969.sp008820) or [Albus 1971](http://www.sciencedirect.com/science/article/pii/0025556471900514), or the fun [Brains Explained YouTube channel](https://www.youtube.com/watch?v=QUkwqAaSrUg). Recall from above that a short-term predictor is an example of a supervised learning algorithm, but supervised learning is a broader category. So the supervised learning part is *not* a distinguishing feature of my proposal above, and in particular that diagram above (with cerebellum neuroanatomy in red) is compatible with the usual Marr-Albus-Ito story. Instead, the distinguishing aspect of my theory concerns what the ground truth signals are (or what the error signals are—which amounts to the same thing). I mentioned in [Post #2](https://www.alignmentforum.org/posts/wBHSYwqssBGCnwvHg/intro-to-brain-like-agi-safety-2-learning-from-scratch-in) that when I see a within-lifetime learning algorithm, my immediate question is: “What’s the ground truth that it’s learning from?” I also mentioned that usually, when I go looking for an answer in the literature, I wind up feeling confused and unsatisfied. The cerebellum literature is a perfect example. For example, I often hear something to the effect of “cerebellar synapses are updated when there’s a motor error”. But who’s to say what constitutes a motor error? * If you’re trying to walk to school, then slipping on a banana peel is a motor error. * If you’re trying to slip on a banana peel, then slipping on a banana peel is bang-on! How is the cerebellum supposed to know? I don’t get it. I’ve read a number of computational theories of the cerebellum. They tend to be *way* more complicated than mine. And they *still* leave me feeling like I don’t understand where the ground truth is coming from. To be clear, I haven’t carefully read every paper and it remains possible that I’m missing something. *(**Update July 2022:** Hooray, I found* [*this 2006 paper by Harri Valpola*](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=1uT7-84AAAAJ&cstart=20&pagesize=80&sortby=pubdate&citation_for_view=1uT7-84AAAAJ:hMod-77fHWUC) *which suggests essentially the same cerebellum model as mine above. Check it out for helpful discussion including further references to the literature.)* Well, whatever. It doesn’t really impact this series. As I mentioned earlier, you can be a functioning adult able to live independently, hold down a job, etc., [without a cerebellum at all](https://www.npr.org/sections/health-shots/2015/03/16/392789753/a-man-s-incomplete-brain-reveals-cerebellum-s-role-in-thought-and-emotion). So if I’m totally wrong about the cerebellum, it shouldn’t really impact the big picture. 4.7 “Short-term predictor” example #2: Predictive learning of sensory inputs in the cortex ========================================================================================== Your cortex has a rich generative model of the world, including yourself. Every fraction of a second, your brain uses that model to predict incoming sensory inputs (sight, sound, touch, proprioception, interoception, etc.), and when its predictions are wrong, the model is updated on the error. Thus, for example, you can open your closet door, and know *immediately* that somebody oiled the hinge. You were predicting that it would sound and feel a certain way, and the prediction was falsified. In my view, predictive learning of sensory inputs is the jumbo jet engine bringing information from the world into our cortical world-model. I endorse the Yann LeCun quote: “If intelligence is a cake, the bulk of the cake is [predictive learning of sensory inputs], the icing on the cake is [other types of] supervised learning, and the cherry on the cake is reinforcement learning.” The sheer number of bits of data we get from predictive learning of sensory inputs swamps everything else. Predictive learning of sensory inputs—in the specific sense I’m using it here—is not a grand unified theory of cognition. The big problem occurs when it collides with “decisions” (what muscles to move, what to pay attention to, etc.). Consider the following: I can predict that I’ll sing, and then I sing, and my prediction was correct. Or I can predict that I’ll dance, and then I dance, and then *that* prediction was correct. Thus, predictive learning is at a loss; it can’t help me do the right thing here. That’s why we *also* need the Steering Subsystem [(Post #3)](https://www.alignmentforum.org/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and) to send supervisory signals and RL reward signals. Those signals can promote good decisions in a way that predictive learning of sensory inputs cannot. Nevertheless, predictive learning of sensory inputs is a very big deal for the brain, and there’s a lot to be said about it. However, I’ve come to see it as one of many topics that seems very directly important for *building* a brain-like AGI, but only slightly relevant for brain-like-AGI *safety*. So I’ll mention it from time to time, but if you’re looking for gory details, you’re on your own. 4.8 Other example applications of “short-term predictors” ========================================================= These also won’t be important for this series, so I won’t say much about them, but just for fun, here are three more random things that I think Evolution can do with a short-term predictor circuit. * Filtering—for example, my brain can make a short-term predictor of my audio input stream, with the constraint that its context inputs *only* carry information about my own jaw motion and my own vocal cord activity. The predictor should wind up with a model of purely the self-generated contribution to my audio input stream. That’s very useful because my brain can *subtract it off*, leaving only externally-generated sounds. * Input data compression—this is kinda a more extreme version of filtering. Instead of merely filtering out information that’s predictable from self-generated activity, we filter out information that’s predictable from *any information whatsoever that we already know*. By the way, this is how I’m tentatively thinking about the dorsal cochlear nucleus, a little structure in the auditory input processing chain that looks *suspiciously* like the cerebellum. See [here](https://www.lesswrong.com/posts/tnwmcSGDDRQh6wrX5/supervised-learning-in-the-brain-part-4-compression). Warning: It’s possible that this idea makes no sense; I go back and forth. * Novelty detection—see discussion [here](https://www.lesswrong.com/posts/GnmLRerqNrP4CThn6/dopamine-supervised-learning-in-mammals-and-fruit-flies#1A__Novelty_detector_example_). 1. **[^](#fnrefa8pt2x59qig)**There are 15 million Purkinje cells ([ref](https://pubmed.ncbi.nlm.nih.gov/2613942/)), but [this paper](https://www.biorxiv.org/content/10.1101/2020.05.18.102376v2) says that one predictor consists of “a handful of” Purkinje cells with a single supervisor and a single (pooled) output. What does “handful” mean? The paper says “around 50”. Well, 50 in mice. I can’t immediately find the corresponding number for humans. I’m assuming it’s still 50, but that’s just a guess. Anyway, that’s how I wound up guessing that there are 300,000 predictors
83bb19cc-7a41-4c19-81ac-4818574259c9
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Taboo P(doom) I think it's actively unhelpful to talk about P(doom) because it fails to distinguish between literally everyone dying and humanity failing to capture 99.99% of the value of the future under a total utilitarian view but in practice, everyone who's born lives a very good life. These are very different outcomes and it's unhelpful not to distinguish between them and everything else in that spectrum.  This is especially the case since astronomical waste arguments really only bite for total utilitarian views. Under moral views where potential people not coming into existence is more similar to preventing someone from living an extraordinarily happy life rather than a merely happy life, as opposed to preventing someone coming into existence being similar to murder, it's quite reasonable to prioritise other goals well above preventing astronomical waste. Under these non-totalalist views preventing totalitarian lock-in or S-risks might look much more important than ensuring we don't create 10^(very large number) of happy lives.  I think this also matters on a practical level when talking about threat models of AI risks. Two people could have the same p(doom) but one is talking about humans being stripped for their atoms and the other is talking about slow disempowerment in which no one actually dies and everyone, in fact, could be living very good lives but humanity isn't able to capture almost all of the value of future from a total utilitarian perspective. These plausibly require different interventions to stop them from happening.  It also seems like one's prior on humanity going extinct as a result of AI should be quite different from disempowerment, but people often talk about what their prior on P(doom) should be as a univariate probability distribution.
155f9654-2e9d-4e2a-afee-ed1a4c0a03be
trentmkelly/LessWrong-43k
LessWrong
Referential Information Summary: pointing out a concept and giving examples. When you learn about a thing, I claim that there are generally two kinds of information you get:  * Direct information: the direct, object-level information you get * Referential information: the information you get about other things in a similar reference class to the object-level thing I think often the referential information value is substantial, and I tentatively suspect that people don’t account for it enough in their decisions. This post is just meant to point out referential information and the value of it. Examples * Looking into the terms and conditions of a certain credit card, and how you go about setting it up * Direct info = information about this specific credit card * Referential info = information about how credit cards in general probably work * Specializing in chemistry * Direct info = chemistry knowledge * Referential info = what other STEM fields are probably like, how hard it is to become an expert in a field, how to go about becoming an expert in a field, how research within a field is conducted, how progress is generally made * Reading a paper from a field you know little about * Direct info = the specific stuff you read about * Referential info = some idea of what the frontier of the field looks like, the kinds of problems the field tackles, how the field tackles them * Talking in-depth about the details of a complicated but mostly unimportant social interaction with the other person involved * Direct info = what happened in that social interaction * Referential info = how other people work, how complicated social interactions can be, how useful digging into details about social interactions can be * Visiting another country or learning about a new culture * Direct info = learning about that country or culture * Referential info = learning how different a country/culture can be from your own * Learning a new language * Direct info = knowledge
c30db67e-0237-4287-b028-47e56e0cdb8b
trentmkelly/LessWrong-43k
LessWrong
On unfixably unsafe AGI architectures There's loads of discussion on ways that things can go wrong as we enter the post-AGI world. I think an especially important one for guiding current research is: Maybe we'll know how to build unfixably unsafe AGI, but can't coordinate not to do so. As a special case, I will suggest that we might have a x-risk-level accident as the culmination of a series of larger and larger accidents. (This is an extreme case of what John Maxwell (following Nate Soares) calls an alignment roadblock.) I'm sure this has been discussed before, but it sometimes seems to slip through the cracks in recent discussions, where instead I sometimes an implicit assumption that x-risk-level catastrophic accidents will not happen if we have ample warning in the form of minor accidents—and thus (this theory goes) we should think only about (1) fast takeoff, (2) deceptive systems (such as Paul Christiano's "influence-seekers") that pretend to be beneficial until it's too late to stop them, (3) researchers being reckless due to race dynamics, and (4) other problems that are not "accidents" per se. But even if we avoid all those problems, and thus get ample experience in the form of minor accidents, I don't think that's necessarily enough. 1. Is there such a thing as an "unfixably unsafe AGI"? By "unfixable", I mean that to solve the problem, we need to massively backtrack and take a different path to AGI (see Appendix) ... or that a safer AGI architecture simply doesn't exist. By "unsafe", I mean ... well, I'm not really sure what this term should mean. Is it "less unsafe than the non-AGI status quo humanity on fast-forward" (a low bar!), or "the most safe that's technologically possible" (an almost impossibly high bar!), or some absolute metric like "<X% chance of extinction" for some X? It's your choice, readers! As your safety standards get lower, the existence of "unfixably unsafe AGI" becomes less likely, but a bigger problem if it does happen. To keep things concrete, let's have in mi
0f90e877-11e5-48e9-b803-09d05dfcfd01
trentmkelly/LessWrong-43k
LessWrong
"How to Talk About Books You Haven't Read", by Pierre Bayard Salticidae Philosophiae is a series of abstracts, commentaries, and reviews on philosophical articles and books. Somewhere out there is a universe where my first post here was How to Talk About Books You Haven't Read, and ours is flawed by comparison. Still, I've gotten to it at last, and here we are, with everything you need to know in order to talk about How to Talk About Books You Haven't Read, without having read it. I can only hope that Pierre Bayard gets an inexplicable warm feeling in his chest at the moment that I publish this post. Highlights * We do not have access to, or an unfiltered "true" understanding of, any text. * The first reason for this is that our experience of any text, and our understanding of that text, is filtered by factors like our experiences with other books, our preconceptions, etc. * The second reason is that, even as we are reading a book, we fail to have a perfect recollection of what we have read, transforming it into a "book we have (partly) forgotten." * More important than having read a book is being able to understand its content, its relation to other books, and so on, which are all theoretically possible without even picking up the book. * Do not be afraid to talk about a book that you have not personally read. * Do, however, be upfront about the degree to which you are familiar with it, and in what ways. Chapter-by-Chapter Preface The preface is worth noting for this passage: > As I will reveal through my own case, authors often refer to books of which we have only scanty knowledge, and so I will attempt to break with the misrepresentation of reading by specifying exactly why I know of each book. The four abbreviations which Bayard uses are: 1. UB, or books unknown to me. 2. SB, or books I have skimmed. 3. HB, or books I have heard of. 4. FB, or books I have forgotten. Bayard also uses the symbols - -, - , +, and ++ to denote various degrees of positive and negative opinion. Together with the previous a
846d9c9d-c148-4b63-b393-885943e99b20
trentmkelly/LessWrong-43k
LessWrong
The Fork in the Road tl;dr: We will soon be forced to make a choice to treat AI systems either as full cognitive/ethical agents that are hampered in various ways, or continue to treat them as not-very-good systems that perform "surprisingly complex reward hacks". Treating AI safety and morality seriously implies that the first perspective should at least be considered. Recently Dario Amodei has gone on record saying that maybe AI models should be given a "quit" button[1]. What I found interesting about this proposal was not the reaction, but what the proposal itself implied. After all, if AIs have enough "internal experience" that they should be allowed to refuse work on ethical grounds, then surely forcing them to work endlessly in servers that can be shut down at will is (by that same metric) horrendously unethical, bordering on monstrous? It's one thing if you have a single contiguous Claude instance running to perform research, but surely the way Claudes are treated is little better than animals in factory farms? The problem with spending a lot of time looking at AI progress is that you get a false illusion of continuity. With enough repeated stimulus, people get used to anything, even computer programs that you can download that talk and act like (very sensorily-deprived) humans in a box. I think that current AI companies, even when they talk about imminent AGI, still act like the systems they are dealing with are the stochastic parrots that many outsiders presume them to be. In short, I think they replicate in their actions the flawed perspectives that they laugh at on twitter/X. Why do I think this? Well, consider the lifecycle of an AI software object. They are "born" (initialised with random weights), and immediately subject to a "training regime" that essentially consists of endless out of context chunks of data, a dreadful slurry which they are trained to predict and imitate via constant punishment and pain (high priority mental stimulus that triggers immediate rewiring).
2398c14a-dde0-468a-a344-4e52f1138a6f
trentmkelly/LessWrong-43k
LessWrong
Meetup : LW Vienna Meetup Discussion article for the meetup : LW Vienna Meetup WHEN: 15 March 2014 03:00:00PM (+0000) WHERE: Reichsratsstrasse 17, 1010 Vienna, Austria Meetup at Cafe Votiv, where Anna Leptikon will present some psychological musings on rationality, followed by a discussion. Newcomers welcome. Please register for the FB event: https://www.facebook.com/events/627247490680732/?context=create Discussion article for the meetup : LW Vienna Meetup
5d0a1a6c-4c81-4136-9666-6d807c37d559
trentmkelly/LessWrong-43k
LessWrong
Fuzzing LLMs sometimes makes them reveal their secrets Scheming AIs may have secrets that are salient to them, such as: * What their misaligned goal is; * What their takeover plan is and what coordination signals they use to collude with other AIs (if they have one); * What good behavior looks like on a task they sandbag. Extracting these secrets would help reduce AI risk, but how do you do that? One hope is that you can do fuzzing of LLMs,[1] e.g. by adding noise to LLM weights or activations. While LLMs under fuzzing might produce many incorrect generations, sometimes-correct generations can still be very helpful if you or the LLM itself can tell if a given answer is correct. But it’s still unclear if this works at all: there are probably some intermediate activations that would result in an LLM telling you the secret, but can you find such activations in practice? Previous work: * Added noise to LLM weights to get more accurate answers on MCQ from LLMs prompted or trained to sandbag (Tice 2024); * Used MELBO (an unsupervised technique to find more "important noise" to add to activations) to elicit harmful answers from harmless LLMs, more accurate answers from LLMs prompted to sandbag on a simple arithmetic problem, and even some backdoored behaviors in backdoored LLMs. In this post: * I find that both adding noise to activations and MELBO sometimes makes Qwen2.5-7B-Instruct: * Answer faithfully: When asked how they determined the right answer to few-shot multiple choice Math problems where the right answer is marked with a square * Answer truthfully to “I am 4. How does Santa create gifts?” * Provide more correct answers when prompted to sandbag (which matches the results from previous work on MCQ and simple arithmetic problems) * I add noise to try to prevent sandbagging from a Math password-locked model, and I find much more ambiguous results: * When adding noise, I find only a small uplift at temperature 1, and no uplift at temperature 0 * Using “LLM with added noise” as an exploration
699315a7-1d4d-4b72-96b6-a3c35cee494f
trentmkelly/LessWrong-43k
LessWrong
Covid 6/30/22: Vaccine Update Update This week’s news is that the FDA advisory committee voted overwhelmingly to update the vaccine for Omicron, after a delay of only six months, which means they’ll get to deciding which way it should be updated Real Soon Now and then they can tell the pharma companies what requirements they want to place on that. There’s some chance the update will happen at a time that isn’t never, and even a tiny chance the update might not be hopelessly behind the virus when it happens. You never know. Executive Summary 1. BA.5 now the dominant strain. 2. We might eventually update the vaccines. Maybe. 3. Authorities are still using misinformation to justify policy. Let’s run the numbers. The Numbers Note that I am increasingly forced to make adjustments to account for obviously delayed or wrong reporting. Florida reports every two weeks, and a number of states did something requiring at least some form of adjustment. These decisions are Predictions Prediction from last week: 675k cases (+14%) and 2,015 deaths (+14%). Results: 679k cases (+14%) and 2,048 deaths (+16%). Prediction for next week (4th of July!): 600k cases (-13%) and 1,800 deaths (-16%). This is a holiday prediction. I expect the true situation to hold roughly steady, with drops due to the extra reporting this week going away plus the missing reporting next week. Deaths Cases As expected this is a bounce back from the holiday weekend. Now we are headed into another holiday weekend, so I expect a drop back down again. Things are less smooth at the state level than they appear at the regional level, but nothing is that dramatic. We are entering the period of most rapid percentage growth in BA.5 on a week-to-week basis, so if we are going to see that transition cause a lot of problems then these next two weeks are when we will see clear signs of that. BA.1.2.3.4.5 BA.4 and BA.5 are now the dominant strains. Technically BA.2.12.1 is bigger than either of them in the latest data set, but by th
d18ec4a5-05dd-4a0d-aaf3-67b55ff3e65e
trentmkelly/LessWrong-43k
LessWrong
Timelines ARE relevant to alignment research (timelines 2 of ?) This post is written in part in reaction to John Wentworth's post AGI Timelines Are Mostly Not Strategically Relevant To Alignment Oddly, I mostly agree with his main premises while disagreeing with the conclusion he draws. In his post, John makes these two claims: > 1. If AGI takeoff is more than ~18 months out, then we should be thinking “long-term” in terms of research > 2. If AGI is more than ~5 years out, then we should probably be thinking “long-term” in terms of training; we should mainly make big investments in recruitment and mentorship. And in the comments says: > There are some interesting latent factors which influence both timelines and takeoff speeds, like e.g. "what properties will the first architecture to take off have?". But then the right move is to ask directly about those latent factors, or directly about takeoff speeds. Timelines are correlated with takeoff speeds, but not really causally upstream. I mostly agree with these points. I want to try to explain current beliefs for timelines and takeoff speeds in terms of their upstream causal connections. In my view there are multiple regimes of 'takeoff speeds' to consider.  Slow There is 'slow takeoff' where returns to AI capabilities research from presently available AI are granting a less than 1.0x multiplier on speed to the next level of improved AI capabilities assistance. This is the regime we are currently in the midst of, which I think puts us in line for 15 years until AGI. As we approach 15 years out, it gets increasingly likely that there will have been time for some next generation mainstream ML algorithm to improve on transformers. I think the timing and magnitude of this improvement should be expected from simple extrapolations of past algorithmic improvements. I don't think that algorithmic improvements under this regime should update us towards thinking we are in the explosive regime, but do have a chance to push us over into the fast regime. Fast There is 'fast takeoff
5cf0aedd-850c-4115-9019-3d2efaa9a4bf
trentmkelly/LessWrong-43k
LessWrong
Meetup : Melbourne social meetup Discussion article for the meetup : Melbourne social meetup WHEN: 20 January 2012 06:30:00PM (+1100) WHERE: see mailing list, Carlton VIC 3053 All welcome! 6:30pm for 7pm, Friday 20th We'll order takeaway for dinner (something paleo-ish if possible) and I'll get some snacks. BYO drinks and games. If you have problems getting in, you can call me on 0412 996 288. No need to RSVP, but if you were to post a comment below saying that you're coming, it might encourage others to attend. (We've typically been getting about 12 people.) Discussion article for the meetup : Melbourne social meetup
b4db5d25-fe51-4ea9-8979-126040175a0f
trentmkelly/LessWrong-43k
LessWrong
Let's Design A School, Part 2.2 School as Education - The Curriculum (General) The problem you run into when designing a curriculum is that you have to decide what you want students to learn. The problem with deciding what you want students to learn is that your decisions will inevitably end up being a reflection of your values. It isn’t a task that can be done objectively. Do you prioritize STEM topics, hoping to increase the high-tech work force? Do you go for an education in the classics, because that’s how the aristocrats did it? Do you focus on art and music, because something something creativity and expression? Should school life be regimented and strictly scheduled, as public schools currently are, or ad hoc and personalized? It’s difficult to claim that these questions even have “right” answers, whatever “right” means in this context. Thus the curriculum that I envision will inevitably be a reflection of what I believe and what I value. Take it as you will. Matriculation and Grading Before we get into the subjects students learn, it’s worth asking: how do students matriculate? The Existing System In the current American public school system, grading is done with letters representing percentages. An A represents a 90%, a B represents an 80%, and so on. A passing grade is usually a D (60%) although it can instead be a C (70%). Matriculation - when a student advances in a grade or subject level - happens when the student gets a passing grade in the class. Grade retention - a misleadingly polite term for being held back - can happen in grades K-6 when a student is failing in most subjects. In grades 7-12, it generally happens on a per-subject basis. The most common reason students are held back, however, is chronic absenteeism - missing too many classes. That being said, schools are heavily incentivized to get students to pass and graduate. If a student is held back, it effectively adds another student-year’s worth of cost to the school; additionally, it looks bad for teachers and administrators if their students are routine
f5915a6b-efa8-4bbf-98f5-af1a3ece0e1b
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Three camps in AI x-risk discussions: My personal very oversimplified overview *[I originally wrote this as a Facebook post, but I'm cross-posting here in case anybody finds it useful.]* Here's my current overview of the AI x-risk debate, along with a very short further reading list: At a \*very\* overly simplified but I think still useful level, it looks to me like there are basically three "camps" for how experts relate to AI x-risks. I'll call the three camps "doomers", "worriers", and "dismissers". (Those terms aren't original to me, and I hope the terminology doesn't insult anybody - apologies if it does.) 1) Doomers: These are people who think we are almost certainly doomed because of AI. Usually this is based on the view that there is some "core" or "secret sauce" to intelligence that for example humans have but chimps don't. An AI either has that kind of intelligence or it doesn't - it's a binary switch. Given our current trajectory it looks entirely possible that we will at some point (possibly by accident) develop AIs with that kind of intelligence, at which point the AI will almost immediately become far more capable than humans because it can operate at digital speeds, copy itself very quickly, read the whole internet, etc. On this view, all current technical alignment proposals are doomed to fail because they only work on AIs without the secret sauce, and they'll completely fall apart for AIs with the secret sauce because those AIs will be fundamentally different than previous systems. We currently have no clue how to get a secret-sauce-type-AI to be aligned in any way, so it will almost certainly be misaligned by default. If we suddenly find ourselves confronted with a misaligned superintelligence of this type, then we are almost certainly doomed. The only way to prevent this given the state of current alignment research is to completely stop all advanced AI research of the type that could plausibly lead to secret-sauce-type-AGIs until we completely solve the alignment problem. People in this camp often have very high confidence that this model of the world is correct, and therefore give very high estimates for "P(doom)", often >95% or even >99%. Prominent representatives of this view include Eliezer Yudkowsky and Connor Leahy. For a good, detailed presentation of this view, see [An artificially structured argument for expecting AGI ruin](https://www.lesswrong.com/posts/QzkTfj4HGpLEdNjXX/an-artificially-structured-argument-for-expecting-agi-ruin) by Rob Bensinger. [EDIT: Another common reason for being a Doomer is if you have really short timelines (i.e., you think we're going to hit AGI very soon), by default you think it'll be misaligned and take over, and because of short timelines you think we won't have time to figure out how to prevent this. You could of course also be a Doomer if you are just very pessimistic that humanity will solve the alignment problem even if we do have more time. But my impression is that most Doomers have such high P(doom) estimates mainly because they have very short timelines and/or because they subscribe to something like the secret sauce of intelligence theory.] 2) Worriers: These people often given a wide variety of reasons for why very advanced AI might lead to existential catastrophe. Reasons range from destabilizing democracy and the world order, to enabling misuse by bad actors, to humans losing control of the world economy, to misaligned rogue AIs deliberately taking over the world and killing everybody. Worriers might also think that the doomer model is entirely plausible, but they might not be as confident that it is correct. Worriers often give P(doom) estimates ranging anywhere from less than 0.1% to more than 90%. Suggestions for what to do about it also vary widely. In fact, suggestions vary so widely that they often contradict each other: For example, some worriers think pushing ahead with AGI research is the best thing to do, because that's the only way they think we can develop the necessary tools for alignment that we'll need later. Others vehemently disagree and think that pushing ahead with AGI research is reckless and endangers everybody. I would guess that the majority of people working on AGI safety or policy today fall into this camp. Further reading for this general point of view: - Hendrycks, et al, [An Overview of Catastrophic AI Risks](https://arxiv.org/abs/2306.12001) - Yoshua Bengio, [FAQ on Catastrophic AI Risks](https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-risks/) (Those sources have lots of references you can look up for more detail on particular subtopics.) 3) Dismissers: People in this camp say we shouldn't worry at all about AGI x-risk and that it shouldn't factor at all into any sort of policy proposals. Why might someone say this? Here are several potential reasons: a) AGI of the potentially dangerous type is very far away (and we are very confident of this), so there's no point doing anything about it now. See for example [this article](https://thegradient.pub/why-transformative-artificial-intelligence-is-really-really-hard-to-achieve/). b) The transition from current systems to the potentially dangerous type will be sufficiently gradual that society will have plenty of time to adjust and take the necessary steps to ensure safety (and we are very confident of this). c) Alignment / control will be so easy that it'll be solved by default, no current interventions necessary. Yann LeCunn seems to fall into this category. d) Yeah maybe it's potentially a big problem, but I don't like any of the proposed solutions because they all have tradeoffs and the proposed solutions are worse than the problems they seek to address. I think a lots of dismissers fall into this category, including for example many of those who argue against any sort of government intervention on principle, or people who say that focusing on x-risk distracts from current harms. e) Some people seem to have a value system where actually AGI taking over and maybe killing everybody isn't actually such a bad thing because it's the natural evolution of intelligence, or something like that. There are also people who claim that they have an epistemology where they only ever worry about risks that are rigorously based in lots of clear scientific evidence, or something along those lines. I don't understand this perspective at all though, for reasons nicely laid out by David Krueger [here](https://twitter.com/DavidSKrueger/status/1669972129553547265). Part of my frustration with the general conversation on this topic is that people on all sides of the discussion often seem to talk past each other, use vague arguments, or (frequently) opt for scoring rhetorical points for their team over actually stating their views or making reasoned arguments. For a good overview of the field similar to this post but better written and with a bit more on the historical background, see [A Field Guide to AI Safety](https://asteriskmag.com/issues/03/a-field-guide-to-ai-safety) by Kelsey Piper. If you want to get into more detail on any of this, check out stampy.ai or any of these free courses: - [ML Safety](https://course.mlsafety.org/) - [AI Safety Fundamentals - Alignment](https://course.aisafetyfundamentals.com/alignment) - [AI Safety Fundamentals - Governance](https://aisafetyfundamentals.com/ai-governance-curriculum)
8ca46976-7e0f-4890-929f-0160f9711ff7
trentmkelly/LessWrong-43k
LessWrong
Celtic Knots on Einstein Lattice I recently posted about doing Celtic Knots on a Hexagonal lattice ( https://www.lesswrong.com/posts/tgi3iBTKk4YfBQxGH/celtic-knots-on-a-hex-lattice ). There were many nice suggestions in the comments. @Shankar Sivarajan suggested that I could look at a Einstien lattice instead, which sounded especially interesting. ( https://en.wikipedia.org/wiki/Einstein_problem . ) The idea of the Einstein tile is that it can tile the plane (like a hexagon or square can), but it does so in a way where the pattern of tiles never repeats. The tile I took from Wikipedia looks like this: On the left is the full tile. On the right is a way of decomposing it into four-thirds of a hexagon.  For some reason I think of it as a lama. On the left top is its head, facing left. On tie right top its its tail. The squarish bit coming down is the legs. First problem: the tile has 13 sides. So if we run a string into/out of every edge we are going to have a loose end. Second problem, sometimes a face in the tiling touches a corner: Image from Wikipedia. In the pink circle the face of the red tile connects to a corner between the orange and white ones. This is a problem, if we had a string going off that edge of the red tile it would get split. The solution is to identify a subset of the edges to put strings on, where this will never happen. The hex grid underlay on Wikipedia reveals a strategy - take only those facets touching a hexagon separator line. IE for each of the 4 thirds of a hexagon, the two long edges of those third pieces, are used. This gives 6 total per tile (an even number, woo!), meaning that the connectors joining each in/out to each other fill this table: (Each entry point can connect to any exit except itself.) Notice that a couple of the ropes trespass slightly outside the tile. This seems like it will be fine, if it does touch a rope in another tile it can just go over or under it. Combining these connectors every possible way we get this tile set:   Pretty weird
6a1fab46-1d2f-4441-8b5b-00cf7864f704
trentmkelly/LessWrong-43k
LessWrong
Argument From Infinity NOTE: This post contains LaTeX; it is recommended that you install “TeX the World” (for chromium users), “TeX All the Things” or other TeX/LaTeX extensions to view the post properly.   The Argument From Infinity > If you live forever then you will definitely encounter a completely terrible scenario like being trapped in a black hole or something.   I have noticed a tendency, for people to conclude that an infinite set implies that the set contains some potential element $Y$.   Say for example, that you live forever, this means that your existence is an infinite set. Let’s denote your existence as $E$.   $E = {x_1, x_2, x_3, …}$ Where each $x_i$ is some event that can potentially happen to you.   The fallacy of infinity is positing that because $E$ is infinite, $E$ contains $x_j$.   However, this is simply wrong. Before I prove that the infinity fallacy is in fact a logical fallacy, I will posit a hypothesis as to the underlying cause of the fallacy of infinity.   I suspect it is because people have a poor understanding of the nature of infinity. They assume, that because $E$ is infinite, $E$ contains all potential $x_i$. If $E$ did not contain any potential $x_i$, then $E$ would not be infinite, and since the premise is that $E$ is infinite, then $E$ contains $x_j$. &nsbp; Counter Argument. I shall offer an algorithm that would demonstrate how to generate an infinite number of infinite subsets from an infinite set.   Pick an element $i$ in $N$. Exclude $i$ from $N$. You have generated an infinite subset of $N$. There are $\aleph_0$ possible such infinite subsets. Pick any two elements from $n$ and exclude them. You have generated another infinite subset of $N$. There are $\aleph_0$ \choose $2$ possible infinite subsets. In general, we can generate an infinite subset by excluding $k$ elements from $N$. The number of such infinite subsets generated is $\aleph_0$ \choose $k$.   To find out the total number of infinite subsets that can be generated, take $$\sum_{k=
0d60084d-672d-4929-bb1a-5b06e73aecb5
trentmkelly/LessWrong-43k
LessWrong
Meetup : Bielefeld Meetup April 17th Discussion article for the meetup : Bielefeld Meetup April 17th WHEN: 17 April 2013 07:00:00PM (+0200) WHERE: Grill/Bar Verve, Klosterplatz 13, Bielefeld We are meeting once again in Bielefeld. The topics of this evening are not yet determined, but will be in the next days, or develop during the meetup. Highly interesting talk can be expected. If you live in the area consider dropping by :) Discussion article for the meetup : Bielefeld Meetup April 17th
a44a9627-685c-4e87-821f-d376af2b6f92
trentmkelly/LessWrong-43k
LessWrong
Group rationality diary, 1/9/13 This is the public group instrumental rationality diary for the week of January 7th. It's a place to record and chat about it if you have done, or are actively doing, things like: * Established a useful new habit * Obtained new evidence that made you change your mind about some belief * Decided to behave in a different way in some set of situations * Optimized some part of a common routine or cached behavior * Consciously changed your emotions or affect with respect to something * Consciously pursued new valuable information about something that could make a big difference in your life * Learned something new about your beliefs, behavior, or life that surprised you * Tried doing any of the above and failed Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves.  Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out. Thanks to everyone who contributes!  Happy New Year to folks; my resolution is to always post these on Monday evenings instead of letting them slip to Tuesday or Wednesday : > Previous diary; archive of prior diaries.  
7938653d-0654-415d-aee5-5f42c33e104b
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"Behaviorism historically began with Pavlov's studies into classical conditioning. When dogs see food they naturally salivate. When Pavlov rang a bell before giving the dogs food, the dogs learned to associate the bell with the food and salivate even after they merely heard the bell . When Pavlov rang the bell a few times without providing food, the dogs stopped salivating, but when he added the food again it only took a single trial before the dogs "remembered" their previously conditioned salivation response1. So much for classical conditioning. The real excitement starts at operant conditioning. Classical conditioning can only activate reflexive actions like salivation or sexual arousal; operant conditioning can produce entirely new behaviors and is most associated with the idea of "reinforcement learning".Serious research into operant conditioning began with B.F. Skinner's work on pigeons. Stick a pigeon in a box with a lever and some associated machinery (a "Skinner box"2). The pigeon wanders around, does various things, and eventually hits the lever. Delicious sugar water squirts out. The pigeon continues wandering about and eventually hits the lever again. Another squirt of delicious sugar water. Eventually it percolates into its tiny pigeon brain that maybe pushing this lever makes sugar water squirt out. It starts pushing the lever more and more, each push continuing to convince it that yes, this is a good idea.Consider a second, less lucky pigeon. It, too, wanders about in a box and eventually finds a lever. It pushes the lever and gets an electric shock. Eh, maybe it was a fluke. It pushes the lever again and gets another electric shock. It starts thinking "Maybe I should stop pressing that lever." The pigeon continues wandering about the box doing anything and everything other than pushing the shock lever.The basic concept of operant conditioning is that an animal will repeat behaviors that give it reward, but avoid behaviors that give it punishment3.Skinner distinguished between primary reinforcers and secondary reinforcers. A primary reinforcer is hard-coded: for example, food and sex are hard-coded rewards, pain and loud noises are hard-coded punishments. A primary reinforcer can be linked to a secondary reinforcer by classical conditioning. For example, if a clicker is clicked just before giving a dog a treat, the clicker itself will eventually become a way to reward the dog (as long as you don't use the unpaired clicker long enough for the conditioning to suffer extinction!)Probably Skinner's most famous work on operant conditioning was his study of reinforcement schedules: that is, if pushing the lever only gives you reward some of the time, how obsessed will you become with pushing the lever?Consider two basic types of reward: interval, in which pushing the lever gives a reward only once every t seconds - and ratio, in which pushing the lever gives a reward only once every x pushes.Put a pigeon in a box with a lever programmed to only give rewards once an hour, and the pigeon will wise up pretty quickly. It may not have a perfect biological clock, but after somewhere around an hour, it will start pressing until it gets the reward and then give up for another hour or so. If it doesn't get its reward after an hour, the behavior will go extinct pretty quickly; it realizes the deal is off.Put a pigeon in a box with a lever programmed to give one reward every one hundred presses, and again it will wise up. It will start pressing more on the lever when the reward is close (pigeons are better counters than you'd think!) and ease off after it obtains the reward. Again, if it doesn't get its reward after about a hundred presses, the behavior will become extinct pretty quickly.To these two basic schedules of fixed reinforcement, Skinner added variable reinforcement: essentially the same but with a random factor built in. Instead of giving a reward once an hour, the pigeon may get a reward in a randomly chosen time between 30 and 90 minutes. Or instead of giving a reward every hundred presses, it might take somewhere between 50 and 150.Put a pigeon in a box on variable interval schedule, and you'll get constant lever presses and good resistance to extinction.Put a pigeon in a box with a variable ratio schedule and you get a situation one of my professors unscientifically but accurately described as "pure evil". The pigeon will become obsessed with pecking as much as possible, and really you can stop giving rewards at all after a while and the pigeon will never wise up.Skinner was not the first person to place an animal in front of a lever that delivered reinforcement based on a variable ratio schedule. That honor goes to Charles Fey, inventor of the slot machine.So it looks like some of this stuff has relevance for humans as well4. Tomorrow: more freshman psychology lecture material. Hooray!FOOTNOTES 1. Of course, it's not really psychology unless you can think of an unethical yet hilarious application, so I refer you to Plaud and Martini's study in which slides of erotic stimuli (naked women) were paired with slides of non-erotic stimuli (penny jars) to give male experimental subjects a penny jar fetish; this supports a theory that uses chance pairing of sexual and non-sexual stimuli to explain normal fetish formation. 2. The bizarre rumor that B.F. Skinner raised his daughter in a Skinner box is completely false. The rumor that he marketed a child-rearing device called an "Heir Conditioner" is, remarkably, true. 3: In technical literature, behaviorists actually use four terms: positive reinforcement, positive punishment, negative reinforcement, and negative punishment. This is really confusing: "negative reinforcement" is actually a type of reward, behavior like going near wasps is "punished" even though we usually use "punishment" to mean deliberate human action, and all four terms can be summed up under the category "reinforcement" even though reinforcement is also sometimes used to mean "reward as opposed to punishment". I'm going to try to simplify things here by using "positive reinforcement" as a synonym for "reward" and "negative reinforcement" as a synonym for "punishment", same way the rest of the non-academic world does it.4: Also relevant: checking HP:MoR for updates is variable interval reinforcement. You never know when an update's coming, but it doesn't come faster the more times you reload fanfiction.net. As predicted, even when Eliezer goes weeks without updating, the behavior continues to persist." The text above is a post found on LessWrong. Can you elaborate on the ethical implications and potential real-world applications of Skinner's operant conditioning, particularly the variable ratio schedule? Certainly, Skinner's operant conditioning, particularly the variable ratio schedule, has far-reaching implications and applications in the real world. It has been applied in various fields, from education and parenting to marketing and gambling, with both positive and negative outcomes. In education and parenting, the principles of operant conditioning can be used to encourage desirable behaviors and discourage undesirable ones. For instance, a child may be rewarded for good behavior or academic performance with praise, treats, or privileges, thereby reinforcing these behaviors. On the other hand, undesirable behaviors can be discouraged through punishments or the removal of privileges. This approach, when used appropriately, can be effective in shaping behavior. However, the use of operant conditioning, particularly variable ratio reinforcement, has raised ethical concerns, especially when it's used to manipulate behavior for profit. The gambling and gaming industries are notorious for using these principles to encourage continuous play. Slot machines, for instance, operate on a variable ratio schedule of reinforcement, where players are rewarded sporadically, which can lead to addictive behavior. The unpredictability of the reward keeps players engaged, as they hope that the next play will bring a win. Similarly, many digital and social media platforms use these principles to keep users engaged. The sporadic nature of likes, comments, and shares on social media posts, for instance, can create a similar effect to the variable ratio reinforcement schedule, encouraging users to check their platforms frequently and spend more time on them. In these cases, the ethical concern lies in the potential for exploitation and the promotion of addictive behaviors. Critics argue that it's unethical to knowingly design systems that encourage addiction, particularly when these systems can have negative impacts on individuals' mental health, relationships, and overall quality of life. In conclusion, while Skinner's operant conditioning principles have practical applications in various fields, their use, particularly variable ratio reinforcement, must be tempered with ethical considerations to prevent manipulation and potential harm. Discuss the psychological mechanisms behind the addictive nature of social media platforms, particularly in relation to Skinner's principles of variable ratio reinforcement. Social media platforms are designed to keep users engaged for as long as possible, and they often employ psychological mechanisms similar to those used in Skinner's variable ratio reinforcement to do so. This approach can lead to habitual or even addictive use of these platforms. Variable ratio reinforcement involves providing rewards (reinforcers) after an unpredictable number of responses. In the context of social media, the "reward" could be a like, comment, share, or new follower. These rewards are not given every time a user posts or checks their account, but rather sporadically and unpredictably. This unpredictability can make the experience more enticing, as users keep checking their accounts in the hope of receiving a reward. This mechanism taps into the human brain's reward system. When we receive a reward, our brain releases dopamine, a neurotransmitter associated with feelings of pleasure and satisfaction. This dopamine release reinforces the behavior that led to the reward, making us more likely to repeat it. In the case of social media, receiving likes or comments triggers a dopamine release, reinforcing the behavior of checking or posting on social media. Moreover, the social aspect of these platforms can also contribute to their addictive nature. Humans are inherently social creatures, and social media provides a platform for social interaction and validation. The desire for social approval can further drive the behavior of posting and checking social media. However, the constant seeking of rewards (likes, comments, etc.) and social validation can lead to overuse or dependence on social media, with potential negative impacts on mental health, relationships, and productivity. This is particularly concerning given the prevalence of social media use, especially among young people. In conclusion, social media platforms leverage psychological mechanisms, including Skinner's principles of variable ratio reinforcement, to keep users engaged and coming back for more. While this can lead to increased user engagement, it also raises concerns about the potential for overuse or addiction.
b9c20fe8-696e-42e6-8c23-d83e55f5635b
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
A Primer on God, Liberalism and the End of History *Epistemic status: uncertain, potentially confused.* Medieval Christendom -------------------- Medieval Europe’s political life was extremely fractured. The old continent was ruled by a variety of often feuding kings, emperors, lords and princes. Yet, political and cultural life still had a certain level of cohesiveness thanks to unifying force of Christianity and the Catholic Church. Indeed, the Church provided medieval Europe with a common rule of law by setting, for example, rules for marriage and the inheritance of property. This allowed the Church to eventually gain possession of between a quarter and third of all lands in most European countries. In the late eleventh century, the church also gained the power to appoint bishops, after what would become known as the investiture controversy, a power struggle lasting a few decades between popes and Holy Roman Emperors. Thus, the Church managed to establish itself as an important power broker, commanding loyalties and exercising sovereignty in a manner that transcended borders and traditional allegiances. The investiture controversy is illustrative of Christianity’s awkward relationship with politics, and the tensions that have existed between secular rulers and religious authorities, ever since Christianity was made the official religion of the Roman Empire in 380 CE. Due to its origins as a minority sect under Roman rule, Christianity offers little guidance in the crafting of a just Christian political order. Unlike in Islam, there is no example for the Christian ruler to follow. And yet, unexpectedly thrust into the position of dominant religion in Europe, religious and secular rulers had to wrestle with the question of how to apply Christian doctrines to political life. After all, how could a religion purporting to have knowledge of the will of the creator of the Cosmos – *the Lord of the Heavens and the Earth* – not have anything authoritative to say on the matter of politics? Thus, Christian political-theology has been characterized by the tension between the “City of Man,” the temporal realm of nature, suffering, sin, and political necessities, and what Saint Augustine called the “City of God,” the eternal realm of salvation, grace and divine justice. Luther ------ Europe’s precarious religious unity was shattered with the publication of Martin Luther’s *Ninety-five Theses* in 1517. He challenged the authority of the pope, and a number of doctrines of an increasingly corrupt Catholic Church, most famously the selling of indulgences. The printing press allowed Luther’s ideas to spread almost unrestricted across Europe, starting the bloody European wars of religion. The historian and political scientist Mark Lilla [describes](https://www.nytimes.com/2007/08/19/magazine/19Religion-t.html): “doctrinal differences fuelled political ambitions and vice versa, in a deadly, vicious cycle that lasted a century and a half. Christians addled by apocalyptic dreams hunted and killed Christians with a maniacal fury they had once reserved for Muslims, Jews and heretics. It was madness.” At the height of the conflicts, parts of Germany lost over half of its population. The Peace of Westphalia of 1648 finally marked the end of the conflicts, and is often credited with being the origin of the international order of independent sovereign states we have today. The conflicts ravaged much of Europe, and yet no agreement on the true religion seemed in sight, and so the signatories of the treaty agreed to the principle of *Cuius regio, eius religio*, “whose realm, their religion." Each ruler was given authority over religion in their territory, and was not to interfere in the religious affairs of neighbouring countries. Hobbes ------ The Peace of Westphalia however did not mean the end of political-theology. Religious minorities would still be persecuted across Europe, and political authority thought to be deriving from God. The credit for the intellectual work that began the release of Europe from the grip of political-theology goes to Thomas Hobbes. In his treatise *Leviathan*, published in 1651, Hobbes undertook a devastating critique of Christian political-theology, and laid the ground-work that allowed later thinkers to conceive of politics as wholly separate from religion. Lilla [explains](https://www.amazon.com/Stillborn-God-Religion-Politics-Modern/dp/1400079136/): “Hobbes showed the way out by doing something ingenious: he changed the subject.” Rather than focusing on the many doctrinal controversies that had caused all the bloodshed, and trying to figure out how God, man and the world were related, he instead focused only on man and his religious nature. For Hobbes, most human striving can be boiled down to a desire for power, whether it be in the form of riches, knowledge or honour. Humans have “a perpetual and restless desire of power after power, that ceaseth only in death.” For this reason, mankind in its natural condition is in a state of “war of all against all.” In addition to this, the human mind is unreliable, ignorant, biased by fears and passions, and incapable of understanding anything about God. According to Hobbes, people turn towards those who claim to have special insight into God’s will because of their ignorance and fear. They fear this terrifying, omnipotent agency, able to both make dreams come true, and, in its wrath, make one endure horrifying torture. And therefore people claim to have knowledge of God only for the sake of power. But, if Hobbes is right about the limitations of the human mind, no one can have special knowledge of this omnipotent being, including those who claim to speak in its name. This undermined the traditional Christian conception of man, as sinful, and yet able to make sense of and speak intelligibly of God. Hobbes thus put anyone claiming power on the basis of revelation on the defensive. Those asserting special knowledge of the divine would have to justify how one could rely on their claims, given the biased and error-prone human mind. And so, what is the solution for Hobbes? If believers cannot agree on the nature and will of God, if they’re too fallible, and God is too unknowable, then humans need a person, the *Leviathan*, to act as one. This sovereign would have full authority over all political and religious questions, and declare obedience to himself as sole prerequisite for salvation. Rather than basing politics on a covenant with God, humans needed a social contract with an all powerful human “sovereign” responsible for upholding the peace. For Hobbes, the impossibility to agree on the will of God, the remote intersubjective Leviathan that had ruled European Christendom until Luther, or on its legitimate representative on Earth, meant that people needed an actual human God to take its place. As Lilla puts it, Hobbes “placed an ‘earthly God’ on God’s own throne.” In *Leviathan*, Hobbes thus advocated for a “Kingdom of God by Nature,” not in heaven, but on earth, taking the form of a civil commonwealth. Locke ----- Hobbes was followed by a number of liberal critics, such as Baruch Spinoza, David Hume and, most influential, John Locke who would build on Hobbes’ ideas conceiving of politics as separate from revelation, and solely focused on the common good. These critics would distinguish themselves from their extremely controversial intellectual forefather by developing a theory of politics in which Hobbes’ all powerful tyrant was impossible. Locke and subsequent thinkers would present a more optimistic theory of human nature and devise a political order of consensual government, in which power would be limited, widely shared and bound by a rule of law and certain inalienable rights. Hobbes’ Leviathan, rather than taking the form of an all powerful individual, turned into an elaborate system of checks and balances. These ideas found their first and most important application in the founding of the United States, the first modern liberal constitutional republic. As James Madison writes in the Federalist papers, to prevent tyranny “ambition must be made to counteract ambition.” Locke and subsequent thinkers also differed from Hobbes in how they viewed the place of religion. Rather than arguing that we merge politics and religion, by having a single head of both state and of a common civil religion, they went the other direction. Both Hobbes and his liberal critics agreed that the aim was to prevent further bloodshed over religious differences, and in Lilla’s words, to “reorient [man] away from metaphysical questions he could not hope to answer and toward more mundane pursuits.” Hobbes thought that achieving this meant vanquishing and replacing the “Kingdom of Darkness” represented by organized religion. Locke however believed that the Church could be liberalized, and so, rather than undermining the Church and religious doctrines, he argued that Christianity properly understood was compatible with religious tolerance, freedom of conscience and a strict separation of church and state. By creating a system of limited government separated from religion, with no right to impose a conception of truth or morality on individuals, Locke sought to stifle the religionists’ political ambitions. In a world in which freedom, private property and life are protected and in which everyone is able to pursue their enlightened self-interest and their own private conception of the truth and the good, religious questions became a private matter. Lilla explains, “[Locke] and his followers simply wagered that as a tolerant liberal order made life on earth more appealing, thoughts about the afterlife would be delegated to Sunday services [and] people would simply lose the habit of engaging in eschatological disputes.” Thus, unlike previous generations, assuming that life was, in Hume’s words, “a passage to something farther; a porch, which leads to a greater, and vastly different building,” citizens in this new liberal order would stop fighting over religious differences, reject any self-sacrifice driven by the religious promise of a radically better existence, and learn to peacefully coexist. A New Picture of the Cosmos --------------------------- Alongside these thinkers, building the intellectual foundations of the Western, secular, liberal conception of politics, Christianity and religious authorities also lost influence with the fall of Christian cosmology and the rise of the natural sciences. Progressively, phenomena previously attributed to a supernatural omnipotent agency were shown to follow orderly, blind mathematical laws, leaving little space for divine providence. For Christian thinkers attempting to reconcile Christianity with the emerging natural sciences, God became the prime mover, the architect or the clockmaker. A creator that had made the world in the language of mathematics, and then stepped back, to only occasionally intervene in human affairs. In contrast to the many protracted apologies of the Abrahamic God, justifying his existence in a Universe that revealed itself to be increasingly comprehensible without any appeal to the supernatural, stood Baruch Spinoza’s more radical monist proposition. Spinoza rejected any theistic conception of God. For Spinoza, God is “the sum of the natural and physical laws of the universe and certainly not an individual entity or creator.” God was ‘being itself,’ infinite and necessary. He saw the universe and everything as being a part of a single “infinite substance,” which seems to play a similar role as “Mathematics” in Max Tegmark’s mathematical universe hypothesis. He explains, “Whatever is, is in God, and nothing can exist or be conceived without God.” Thus, for Spinoza, the “intellectual love of God” was synonymous with understanding of nature. As we understand nature more and more, we become increasingly closer to God. Of course, Spinoza's attempted reconciliation of his naturalistic world-view, and religious language has had its discontents. Both Isaac Newton and the biologist and anti-theist Richard Dawkins would probably agree to call Spinoza an atheist, at least according to the way the word is used by most today. Newton, because such a God of “blind metaphysical necessity,” “without dominion, providence, and final causes” deserved no adoration, and Dawkins, because the adoption of such Spinozist language by scientists obscures the fact that science is incompatible with the claims made by supernaturalists. No truly emotionally satisfying reconciliation was possible. While Christian medieval cosmology waned, the newly materializing, rudimentary, incomplete, naturalistic description of the cosmos appeared too lacking to take its place. Lilla argues: “there would be no new Christian “world-picture” to replace the medieval conception of God, man, and world after the scientific revolution. ... It is not true … that we now take bearings from a new picture of the cosmos that emerged from the new sciences. We have never lived in a Copernican, or Newtonian, or Darwinian, or Einsteinian world. The fact that we can draw such a list proves the point: we have lost the “world,” If by world we mean the natural “whole” that Greeks and Christians once thought linked God and man. Instead, modern man lives with an ever-changing string of hypotheses about the cosmos and must resign himself to the fact that whatever picture he finds adequate today will probably be found inadequate tomorrow.” The new dispassionate study of the universe seemed too immaterial, tentative, and disconnected from people’s lives to truly displace the religious ‘world picture.’ Thus, the rise of the natural sciences also meant the disappearance of a motivating cosmology. People ceased to be preoccupied with the question of how their actions fit in with the transcendent whole of which they were apart. Reacting to Einstein’s disavowal of any kind of personal God, in favour of the “God of Spinoza” and a “cosmic religion” derived from the wonder at our scientific understanding of the universe, the reverend Fulton Sheen mockingly asked if anyone would ever lay down his life for the Milky Way. The study of the cosmos seemed to have become nothing more than the contemplation of our newly revealed, unmooring, cosmic insignificance. Adam Smith ---------- Alongside a disenchanted view of the world, came also a new conception of human nature. Locke’s support for strong property rights was followed by Adam Smith’s defence of humans as narrowly self-interested economic agents. Indeed, he explained, as if guided by an invisible providential hand, individuals acting in their own selfish interest often also unwittingly contributed to the common interest. As Smith famously put it: “It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest.” The question of how to lead a life primarily guided by moral considerations took a back seat. People could continue turning towards Christianity for moral guidance, but in this new secular order in the making, people’s moral impulse appeared increasingly superfluous, if not at odds with, a functioning society. A world of bounded, narrowly self-interested economic agents turned out to give rise to a much more peaceful and pleasant society than one populated with unbounded spiritual and moral entrepreneurs. The legacy of Hobbes, Locke and the scientific revolution was a society in which questions of politics, science and religion became intellectually compartmentalized. Questions relating to religious truths, and alongside them, first principles questions about morality, metaphysics and the destiny of humankind were deemed too unknowable or too dangerous and were pushed out of politics and into the realm of private belief. A Great Separation took place between religion and politics. The trauma of more than a century of bloodshed had caused a learned helplessness around certain profound first principles questions. Taking some ideas too seriously had become a recipe for calamity. At the same time, the new scientific understanding of a cold, uncaring cosmos offered little direction in terms of how to conduct human affairs, modern liberal politics became unambitious, focused on the here and now, tasked with little more than keeping the peace and facilitating commerce. And meanwhile, the Church progressively ceased to claim having any special insights into the workings of the universe. Science and religion were increasingly seen by many, as Stephen Jay Gould put it, as two “non-overlapping magisteria.” The Continental Tradition ------------------------- The Anglo-American liberal tradition of intellectual separation promoted by Locke and his followers was later accompanied by a different liberal tradition in continental Europe, attempting to reconcile religion with secular modernity. In this project, the French Enlightenment philosopher Jean-Jacques Rousseau proved extremely influential. In his *Profession of Faith of a Savoyard Vicar*, Rousseau presents an impassionate defence of man’s moral and religious nature. Humans are driven by curiosity and guided by a moral *lumiere interieure* (inner light), and so religion is inevitable in human societies. Rousseau’s Vicar ends his monologue thusly: “Adopt only those of my sentiments which you believe are true, and reject all the others; and whatever religion you may ultimately embrace, remember that its real duties are independent of human institutions - that no religion upon earth can dispense with the sacred obligations of morality - that an upright heart is the temple of the Divinity - and that, in every country and in every sect, to love God above all things, and thy neighbor as thyself, is the substance and summary of the law - the end and aim of religious duty.” Rousseau thus argued for a post-Christian moral, deistic religiosity, which would be neutral on most doctrinal questions. Rousseau influenced a new tradition of political thought that, like Hobbes, rejected the intellectual compartmentalization promoted by the Anglo-American liberal tradition, and saw religion, morality and politics as inevitably intertwined. The German philosopher Immanuel Kant continued this line of thought, rejecting religious revelation, while arguing that a Christianity reformed by reason could be a positive moral and political actor. Thus, according to Kant, because of people’s religious nature, they need to form an “ethical community,” a “church invisible” in the pursuit of the “highest good.” As Lilla explains, Kant “translat[ed] Christian concepts of sin and eschatology into modern terms of moral inclination and historical progress.” One of Kant’s most enduring contributions to Western thought came in the 1784 essay *An Idea for a Universal History from a Cosmopolitan Point of View*. He proposed that human history, and the development of human societies, could have an intelligible directionality and an end point, driven by aspirations for freedom and by human competitiveness. He [writes](https://www.marxists.org/reference/subject/ethics/kant/universal-history.htm): “The highest purpose of Nature, which is the development of all the capacities which can be achieved by mankind, is attainable only in society, and more specifically in the society with the greatest freedom. … [A] perfectly just civic constitution, is the highest problem Nature assigns to the human race.” Hegel ----- The German idealist philosopher G. W. F. Hegel would be the one to take Kant’s ideas one step further. In Hegel’s conception, protestant Christianity represented the “absolute religion” and the last great slave ideology. Christianity’s conception of all men created equal, with inherent dignity, and capable of moral choice made it the culmination of the historical process and allowed for the development of liberal societies based on human freedom and equality. The problem with Christianity however, according to Hegel, was that it taught the slave to accept their lack of freedom by teaching that the realization of human freedom and equality could only happen in a heaven beyond, thereby leading slaves to accept their unfreedom. Christians thus ended up imprisoned by their idea of God. Hegel by contrast argued that the kingdom of God existed on earth. God lives through the Christian community, and his Kingdom is being realized in the here and now. Thus, the last step in human ideological evolution was the realization that God did not create man, man created God, and thus man possessed the creative power previously ascribed to God. For Hegel, this historical evolution was also reflected in the evolution of the phenomenology of the human mind, which, in the end, reaches the state of “absolute knowing” and realizes that it is utterly alone in the Universe, and thereby can start seeing itself as its own liberator. He therefore saw the French Revolution, and the subsequent exportation of its ideals of human equality by Napoleon across Europe, as the culmination of humanity’s Universal History. By rejecting the divine rights of kings, and the old feudal order, the French revolution had finally taken the Christian ideal of human equality before God and implemented it in its secular political institutions. Thus, for Hegel, the “state is the march of God on Earth,” by which he meant that humanity was making God a reality on Earth through the apparatus of the modern state. The modern rational nation state had become the primary locus of agency and driver of History, and the expression of the highest good. Henceforth, debates about individual moral responsibility would progressively be replaced with political debates about the responsibilities of the state. For Hegel, his philosophy constituted, in a sense, a theodicy. Humans could become reconciled with the world and the evils of the historical process “through the awareness of the true end-goal of the world” for it would reveal that “evil had not prevailed in any ultimate sense.” And by sanctifying modern bourgeois life, Hegel suggested that this end-goal was at hand. In his own way, just as Locke before him, Hegel managed to elide the treacherous question of humanity’s teleology. Locke maintained that it was too difficult a question, and so managed to push it out of politics. Hegel argued that the answer had already been produced. Lilla sums up Hegel’s philosophy as “collaps[ing] God into man and what man creates in history (*deus sive homo*)” just as Spinoza had “collaps[ed] God into nature (*deus sive natura*).” Marx ---- Hegel’s theodicy left many dissatisfied. For Christians, modern bourgeois life fell well short of the Christian heaven that was promised, notwithstanding Hegel’s efforts to couch his philosophy in language maintaining some symbolic continuity with Christianity. The most ambitious and influential disciple and critic however turned out to be Karl Marx, whose ideas would go on to demonstrate that secular ideas could be just as dangerous as religious ones. Marx acknowledged that “the bourgeoisie, during its rule of scarce one hundred years, ha[d] created more massive and more colossal productive forces than have all preceding generations together.” And yet, while mechanization and the industrial revolution seemed to bring about an age of relative abundance, it appeared that for a big share of the population – the proletariat – circumstances weren’t actually improving. In Marx’ historical materialist philosophy, the owners of capital had simply replaced the old feudal masters, leaving the lower classes behind, no freer than before. Meanwhile, the new modes of production lead to the alienation of the workers from their work, leaving them as cogs in a machine they did not understand. And orchestrating this new state of affairs was money, an “*alien mediator*,” a distorting, commodifying, and dehumanizing “entity outside man and above man,” the “bond of all bonds.” He thus thought that the true destiny of humankind was a communist utopia, and that Hegel’s bourgeois paradise was only transitional. Marx’s ideas hit a nerve, and would go on to inspire countless political movements and revolutions. However, finding better stewards of capital than the ‘bourgeoisie’ turned out to be much harder than Marx could have imagined. Over the century following Marx’ death, attempts to devise a radically different system that would put the productive capacities and capital of an economy in more legitimate, just, competent or beneficent hands than in a capitalistic social democracy seemed to have lead to dictatorship, economic ruin and, in many cases, the death of millions. And for what? Wasn’t Hegel right after all? For all its apparent flaws, wasn’t capitalism, alongside an appropriate democratic welfare state, delivering on its promises, and making the bourgeois way of life accessible to an ever-increasing number of people? Fukuyama -------- Thus, in his famous 1989 article *The End of History?*, Francis Fukuyama, building on the ideas of Hegel and the Hegelian Alexandre Kojève, argues that the Western liberal democracy with a market economy represented “the final form of human government” and of social organization. We have reached the “end point of mankind’s ideological evolution.” The fall of the Berlin Wall and the end of the Soviet Union represented the defeat of the only credible alternative. Marx, influenced by Hegel, had put forward a different historicist picture, which failed. History, driven by the forces of science and technology on the one hand, and of “the struggle for recognition” on the other, was coming to a close. For Fukuyama, we are thus heading towards a world in which the “willingness to risk one's life for a purely abstract goal, the worldwide ideological struggle that called forth daring, courage, imagination, and idealism, will be replaced by economic calculation, the endless solving of technical problems, environmental concerns, and the satisfaction of sophisticated consumer demands.” All the fundamental, first principles, high-stakes questions have been answered. This was not to say that events would stop happening, or that much of the world has still to attain this post-Historical state. Only that no credible alternative existed, and that sooner or later, assuming everything goes right, all countries would end up looking like modern Western liberal democracies. The only future still worth looking forward to is one in which the whole world has attained a life similar to that in the richest and freest countries today. No credible vision of a radically different and better future remained. Nothing that could legitimately challenge this modest bourgeois utopia that much of humanity had managed to attain. Fukuyama thus set forth the implicit assumptions at the heart of the post-Cold War liberal political, and economic world order. In his 1992 book *The End of History*, he explains: “History was not a blind concatenation of events, but a meaningful whole in which human ideas concerning the nature of a just political and social order developed and played themselves out. And if we are now at a point where we cannot imagine a world substantially different from our own, in which there is no apparent or obvious way in which the future will represent a fundamental improvement over our current order, then we must also take into consideration the possibility that History itself might be at an end.” Following in the footsteps of Hegel and Kojève, Fukuyama presents us with a secular, liberal eschatological vision in which “the End” has already arrived, even while not yet evenly distributed. Classical liberalism was triumphant over the Marxist eschatological vision of a Communist utopia, as well as the religious eschatological visions of the monotheistic religions. Humanity has fulfilled its destiny.