Dataset columns: id (string, length 36) · source (string, 15 classes) · formatted_source (string, 13 classes) · text (string, 2 to 7.55M characters)
9d609023-51a4-45a8-a99c-7aa8bd5cf92e
trentmkelly/LessWrong-43k
LessWrong
Towards dissolving the mystery of consciousness

1. Let us consider a world where it is well known that light is a wave, characterized by its wavelength. White light is known to be a mix of many different waves, as shown by a prism separating sunlight into a rainbow. Physicists measure and catalog the wavelengths of different colors such as red [620–750 nm], green [495–570 nm] and yellow [570–590 nm]. Across the street from the physicists is a painters' guild. They also deal a lot in colors and know well that mixing red and green yields yellow. They have even demonstrated it to the physicists, with a system of two prisms, baffles and mirrors. And yet a theory explaining how mixing two pure waves, one of 700 nm and one of 500 nm, results in a wave of 580 nm remains elusive. Many different non-linear effects have been hypothesized, but none was confirmed.

2. To us, this mystery is of course completely transparent. We now know that the sensation of color is determined not by the frequency, but by the ratio of activation of 3 different types of photoreceptor cone cells in the retina of the eye [0]. Each of those cell types is sensitive to a large, overlapping swath of frequencies. So pure yellow and a combination of pure red and green just happen to produce identical activation patterns. To reach this understanding we had to look away from light itself and into the mechanism we use to observe light - our eyes and brains.

3. It seems to me that something similar is happening with attempts to explain consciousness. We have some strong intuitions about what is and what isn't conscious, and there are attempts to model those intuitions [1] [2]. I feel that there can't be a theory of consciousness without a good look at the mechanism that generates those intuitions - our brain.

4. Humans evolved as social organisms, and the ability to function in a group was critical for survival.
An "agent detector" that can distinguish a fellow human from inanimate objects and animals can be an extremely useful adaptation. (This detector can further usefu
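Returning to the color story in section 2: the "dissolved mystery" is metamerism, and it can be sketched numerically. Everything below (the Gaussian sensitivity curves, the 565/540 nm cone peaks, the 700/540 nm primaries) is an illustrative toy, not real cone data:

```python
import math

# Toy Gaussian cone sensitivities; centers and widths are illustrative.
def sensitivity(wl, center, width=60.0):
    return math.exp(-((wl - center) / width) ** 2)

def cone_response(spectrum, center):
    """Total activation of one cone type given {wavelength_nm: intensity}."""
    return sum(i * sensitivity(wl, center) for wl, i in spectrum.items())

L_PEAK, M_PEAK = 565.0, 540.0  # long- and medium-wavelength cone peaks

yellow = {580.0: 1.0}  # pure 580 nm light
Ly = cone_response(yellow, L_PEAK)
My = cone_response(yellow, M_PEAK)

# Solve a 2x2 linear system for red (700 nm) and green (540 nm) intensities
# that reproduce the same L and M activations as pure yellow: a metamer.
gl7, gl5 = sensitivity(700, L_PEAK), sensitivity(540, L_PEAK)
gm7, gm5 = sensitivity(700, M_PEAK), sensitivity(540, M_PEAK)
det = gl7 * gm5 - gl5 * gm7
red = (Ly * gm5 - My * gl5) / det
green = (gl7 * My - gm7 * Ly) / det

mix = {700.0: red, 540.0: green}
# Physically different spectra, identical cone activation pattern:
print(abs(cone_response(mix, L_PEAK) - Ly) < 1e-9)  # True
print(abs(cone_response(mix, M_PEAK) - My) < 1e-9)  # True
```

With real cone fundamentals the same linear algebra applies, which is why an RGB display can evoke "yellow" without emitting any 580 nm light at all.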
4b0d115a-b49a-4671-a2d3-21cdf463a2b3
trentmkelly/LessWrong-43k
LessWrong
The Alignment Newsletter #12: 06/25/18

Highlights

Factored Cognition (Andreas Stuhlmuller): This is a presentation that Andreas has given a few times on Factored Cognition, a project by Ought that is empirically testing one approach to amplification on humans. It is inspired by HCH and meta-execution. These approaches require us to break down complex tasks into small, bite-sized pieces that can be solved separately by copies of an agent. So far Ought has built a web app in which there are workspaces, nodes, pointers etc. that allow humans to do local reasoning to answer a big global question.

My opinion: It is unclear whether most tasks can actually be decomposed as required for iterated distillation and amplification, so I'm excited to see experiments that can answer that question! The questions that Ought is trying seem quite hard, so it should be a good test of breaking down reasoning. There's a lot of detail in the presentation that I haven't covered; I encourage you to read it.

Summary: Inverse Reinforcement Learning

This is a special section this week summarizing some key ideas and papers behind inverse reinforcement learning, which seeks to learn the reward function an agent is optimizing given a policy or demonstrations from the agent.

Learning from humans: what is inverse reinforcement learning? (Jordan Alexander): This article introduces and summarizes the first few influential papers on inverse reinforcement learning. Algorithms for IRL attacked the problem by formulating it as a linear program, assuming that the given policy or demonstrations are optimal. However, there are many possible solutions to this problem -- for example, the zero reward function makes any policy or demonstration optimal. Apprenticeship Learning via IRL lets you learn from an expert policy that is near-optimal. It assumes that the reward function is a weighted linear combination of features of the state.
In this case, given some demonstrations, we only need to match the feature expectations of the demonstrations in order
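The feature-expectation quantity that apprenticeship learning matches can be sketched in a few lines. The feature map and demonstration trajectories below are hypothetical toys:

```python
# Discounted feature expectations, the quantity matched in apprenticeship
# learning via IRL: the average over demonstrations of sum_t gamma^t * phi(s_t).
def feature_expectations(trajectories, phi, gamma=0.9):
    k = len(phi(trajectories[0][0]))
    mu = [0.0] * k
    for traj in trajectories:
        for t, state in enumerate(traj):
            f = phi(state)
            for j in range(k):
                mu[j] += (gamma ** t) * f[j]
    return [m / len(trajectories) for m in mu]

# Toy example: states are integers, features are (state, state squared).
phi = lambda s: (float(s), float(s * s))
demos = [[0, 1, 2], [0, 2, 2]]
print(feature_expectations(demos, phi))  # [2.97, 5.49] (up to float error)
```

If a learned policy's discounted feature expectations match these, then under any reward that is a linear combination of the features it earns the same expected return as the expert.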
440f9ee8-de8c-479b-86aa-5d23ca377ec9
trentmkelly/LessWrong-43k
LessWrong
Tortuga Meetups Starting this Thursday Shannon and I are going to be hosting South Bay Less Wrong meetups at the Tortuga community in Mountain View on Thursday nights starting at 7.  Come prepared to reveal something you're consistently irrational about.  We'll spend some time throwing ideas around and then hang out and mingle.   Bringing paleo-friendly food is a bonus but not required.   If you'd like to come, request an invitation from our google group, Tortuga Rationalists.
1f2d3c53-3512-4cf9-bf87-aead44facc57
StampyAI/alignment-research-dataset/special_docs
Other
“Unsupervised” translation as an (intent) alignment problem

Suppose that we want to translate between English and an alien language (Klingon). We have plenty of Klingon text, and separately we have plenty of English text, but it’s not matched up and there are no bilingual speakers. We train GPT on a mix of English and Klingon text and find that it becomes fluent in both. In some sense this model “knows” quite a lot about both Klingon and English, and so it should be able to read a sentence in one language, understand it, and then express the same idea in the other language. But it’s not clear how we could train a translation model.

Of course some concepts won’t have translations, and the model will often be uncertain about the translation of a term. But we can still ask for a model to explain the meaning of a Klingon expression as best as it can to an English-speaking user. For example, it could say “This is an idiomatic expression that’s often used to express great uncertainty” or “This is a small animal that is familiar to most Klingon speakers, I think it’s kind of like a frog but am not really sure” rather than translating a sentence directly. How can we construct an objective that incentivizes the model to “try its best” at this translation task?

Translation-specific approaches
===============================

There are many published heuristics for unsupervised translation (e.g. [Lample et al](https://arxiv.org/pdf/1711.00043.pdf)). I don’t think those techniques should completely satisfy us:

* Existing methods can’t lead to a model that appropriately describes its uncertainty or talks the user through a hard-to-translate expression. (At least as far as I’m aware.)
* We have no real reason to think existing methods fully utilize the model’s understanding, or to expect those methods to scale well. (In practice, I think they are impressive but still lag behind the quality of our models’ understanding.)
* These heuristics are specific to translation, whereas we’d like to find general methods that can scale up to harder problems.

Existing alignment techniques
=============================

If we try to apply RL from human feedback to translation, we immediately run into a problem: how am I supposed to judge which of two English explanations of a Klingon sentence is better, given that I don’t know Klingon?

Debate doesn’t easily address this difficulty either — if one model claims that “qapla” means “great success” and the other claims it means “minor success,” I can’t easily decompose that disagreement into simpler sub-questions that debaters disagree about. Debaters could cite phrases in the database where “qapla” is used, but they’d need to average weak evidence over many phrases. Making things worse, to interpret each usage they’d need to agree about the meaning of the rest of the phrase — which isn’t necessarily any simpler than the original disagreement about “qapla.” Even if this process were possible, it’s not at all clear that GPT would be able to do it — being able to translate between Spanish and English doesn’t mean I have an encyclopedic knowledge of all the documents from which I built up my intuitive sense of a particular word’s meaning (which I’d need in order to win such a debate).

Right now I don’t think we have any scalable strategies for this kind of problem; I think it’s a core open question for alignment.

Unsupervised translation seems like a good problem to think about for alignment
===============================================================================

I think the key feature of this situation is that our model has acquired a bunch of intuitions about the domain which are only justified empirically — the model “knows” about the meaning of phrases only insofar as it has a very complex hypothesis that was supported by the data.
This situation is going to become increasingly common as we train more powerful models, and will immediately be a real problem if we are applying human feedback to fine-tune GPT; while GPT is subhuman in many ways, it’s already acquired plenty of knowledge that any given human contractor would lack. Most of GPT’s knowledge is something that came from *some* human, but ultimately we will be training models that generate new knowledge (e.g. by searching over plans in realistic environments, or by writing code on their own and learning about what works), and *no* human will have that knowledge. So we can’t hope to get around this problem by simply hiring more knowledgeable contractors.

This can leave us in a situation where it’s extremely difficult for humans to oversee AI decisions. If a model says “My intuition is that this business plan will make a lot of money” the user will need to decide whether or not to trust it. If they don’t, then they may find themselves at an increasing economic disadvantage. If they do, then they may have lost the ability to effectively oversee AI systems except by evaluating the consequences of their actions. That leads directly into the classical challenges of AI safety, namely that AI systems evaluated exclusively on the basis of measured outcomes have a tendency to push the world in undesirable directions (since we can’t measure what we care about) and to corrupt our measurements.

My vague hope
=============

I’m hoping we can address this using the kind of approach discussed in [learning the prior](/learning-the-prior-48f61b445c04). That might look like:

* In parallel with training GPT, train a helper model that explains the meaning of phrases (it can also provide other intuitions or background facts that are useful for predicting the next word).
* As we train on Klingon text, we sample phrases and then ask a human “which word will come next?” The human uses the helper model to understand what is being discussed and make a prediction.
* We optimize the helper model to make the human’s next-word predictions good (in parallel with generative pre-training).
* Finally, a human uses the same helper model to evaluate a proposed Klingon → English translation, and we use this to train a translator by RL.

That short description sweeps a lot of complexity under the rug. Most importantly, the success of the scheme relies on the correctness of the prior over helper models (or else the helper could just be another copy of GPT-Klingon), and we don’t have a credible strategy for representing and manipulating our prior over complex programs. Overall, I’d say that this is more at the level of “vague hope” than “concrete proposal.” I think it’s an open question whether anything in this space will work.

I think that this is the kind of problem which makes e.g. MIRI researchers justifiably skeptical that scalable ML alignment is possible at all, and it’s the main focus of my current conceptual work on AI alignment. I’m glad that this kind of theoretical crux also looks like it will soon be relevant to ML practice, since I think it will make it much easier to close the gap between people who work on ML and people who work on alignment.
3aef3789-a55a-4ac7-bbe6-bb95f80f0630
trentmkelly/LessWrong-43k
LessWrong
Mathematicians & mathletes: the Singularity Institute wants your strategic input! The Singularity Institute is undergoing a series of important strategic discussions. There are many questions for which we wish we had more confident answers. We can get more confident answers on some of them by asking top-level mathematicians & mathletes (e.g. Putnam fellow, IMO top score, or successful academic mathematician / CS researcher). If you are such a person and want to directly affect Singularity Institute strategy, contact me at luke@intelligence.org. Thank you. Now back to your regularly scheduled rationality programming...
0b7621a5-4a1b-439b-af6b-d14461e1d74b
trentmkelly/LessWrong-43k
LessWrong
Help forecast study replication in this social science prediction market This is a linkpost for: https://www.replicationmarkets.com/ From their sign-up page: > Help the Replication Markets team spot reliable science, and win $$! > From mid-2019 to mid-2020, we will run prediction markets on 3,000 recently-published social and behavioral science claims. We want you to forecast the outcomes of possible replication trials. On average, only about 50% of published social science results replicate, but it’s not a coin toss: four previous markets have been about 75% accurate. We think we can beat 80%, while forecasting 10-100x as many claims. > Join us to improve social and behavioral science, try new kinds of markets and surveys, and earn $$ prizes. (Over $100,000 in total prizes, distributed via Google Pay, among a target pool of 500 forecasters based on accuracy and contributions.) > Please look around, see the Replication Markets Home Page for more details, or click “Sign Up” to join.
bd6a5a5d-5fa0-48ef-b43a-e45f36c3c080
trentmkelly/LessWrong-43k
LessWrong
Zstd Window Size

At work we've recently been using zstd as a better-compressing alternative to gzip, and overall I've been pretty happy with it. A minor documentation gripe, though, is that the behavior around multithreaded compression is a bit unclear. I understand it's chunking the work and sending chunks to different threads to parallelize the compression process, and this means that I should expect to see better use of threads on larger files because there are more chunks to spread around, but what is the relationship?

When I look in man zstd I see that you can set -B<num> to specify the size of the chunks, and it's documented as "generally 4 * windowSize". Except the documentation doesn't say how windowSize is set. From a bit of poking at the source, it looks to me like the way this works is that windowSize is 2**windowLog, and windowLog depends on your compression level. If I know I'm doing zstd -15, though, how does compressionLevel=15 translate into a value for windowLog?

There's a table in lib/compress/clevels.h which covers inputs >256KB:

| Level | windowLog | chainLog | hashLog | searchLog | minMatch | targetLength | strategy |
|-------|-----------|----------|---------|-----------|----------|--------------|----------|
| <1    | 19        | 12       | 13      | 1         | 6        | 1            | fast     |
| 1     | 19        | 13       | 14      | 1         | 7        | 0            | fast     |
| 2     | 20        | 15       | 16      | 1         | 6        | 0            | fast     |
| 3     | 21        | 16       | 17      | 1         | 5        | 0            | dfast    |
| 4     | 21        | 18       | 18      | 1         | 5        | 0            | dfast    |
| 5     | 21        | 18       | 19      | 3         | 5        | 2            | greedy   |
| 6     | 21        | 18       | 19      | 3         | 5        | 4            | lazy     |
| 7     | 21        | 19       | 20      | 4         | 5        | 8            | lazy     |
| 8     | 21        | 19       | 20      | 4         | 5        | 16           | lazy2    |
| 9     | 22        | 20       | 21      | 4         | 5        | 16           | lazy2    |
| 10    | 22        | 21       | 22      | 5         | 5        | 16           | lazy2    |
| 11    | 22        | 21       | 22      | 6         | 5        | 16           | lazy2    |
| 12    | 22        | 22       | 23      | 6         | 5        | 32           | lazy2    |
| 13    | 22        | 22       | 22      | 4         | 5        | 32           | btlazy2  |
| 14    | 22        | 22       | 23      | 5         | 5        | 32           | btlazy2  |
| 15    | 22        | 23       | 23      | 6         | 5        | 32           | btlazy2  |
| 16    | 22        | 22       | 22      | 5         | 5        | 48           | btopt    |
| 17    | 23        | 23       | 22      | 5         | 4        | 64           | btopt    |
| 18    | 23        | 23       | 22      | 6         | 3        | 64           | btultra  |
| 19    | 23        | 24       | 22      | 7         | 3        | 256          | btultra2 |
| 20    | 25        | 25       | 23      | 7         | 3        | 256          | btultra2 |
| 21    | 26        | 26       | 24      | 7         | 3        | 512          | btultra2 |
| 22    | 27        | 27       | 25      | 9         | 3        | 999          | btultra2 |

See the source if you're interested in other sizes. So it looks like windowSize is:

* ≤1: 524k
* 2: 1M
* 3-8 (default): 2M
* 9-16: 4M
* 17-19: 8M
* 20: 32M
* 21: 64M
* 22: 128M

Probably best not to rely on any of this, but it's good to know what zstd -<level> is doing by default!
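Putting the clevels.h table and the man-page rule together, the level → chunk-size relationship can be sketched like this (the -B default is only "generally 4 * windowSize", and upstream defaults may change between zstd versions, so treat this as approximate):

```python
# windowLog per compression level for inputs >256KB, transcribed from the
# clevels.h table above (subject to change in future zstd versions).
WINDOW_LOG = {1: 19, 2: 20, 3: 21, 4: 21, 5: 21, 6: 21, 7: 21, 8: 21,
              9: 22, 10: 22, 11: 22, 12: 22, 13: 22, 14: 22, 15: 22, 16: 22,
              17: 23, 18: 23, 19: 23, 20: 25, 21: 26, 22: 27}

def window_size(level):
    """windowSize = 2**windowLog; levels below 1 share level 1's windowLog."""
    return 2 ** WINDOW_LOG[max(level, 1)]

def default_chunk_size(level):
    # man zstd documents -B as defaulting to "generally 4 * windowSize".
    return 4 * window_size(level)

print(window_size(15) // (1 << 20))         # 4 (MiB window at level 15)
print(default_chunk_size(15) // (1 << 20))  # 16 (MiB chunks)
```

So at zstd -15 you'd expect roughly one thread-sized chunk per 16 MiB of input, which matches the observation that thread utilization improves on larger files.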
2e2488a6-26d4-4555-b2cd-531d399f3c84
trentmkelly/LessWrong-43k
LessWrong
Quantification of eyewitness reliability?

This mentions some of the limitations of eyewitness testimony; does anybody here have any references giving any hard numbers about how reliable eyewitness accounts are, under any given circumstances?

I'd like to be more conscious about my Bayesian-type updates of my beliefs based on general accounts of what people say. So far, I've started using a rule-of-thumb that somebody telling me something is so is worth approximately 1 decibel of belief (1/3rd of a bit); evidence, but about the weakest evidence possible, nulled by any opposing accounts, and countered by any more substantive evidence.

If possible, I'd like to know exactly how reliable such testimony tends to be in one particular set of circumstances - time since the thing being reported, level of emotional involvement, etc - to use as a baseline, and at least roughly how strongly such factors change that. (I'll actually be very surprised if this particular set of data currently exists in ready form - but I'll be satisfied if I can get even order-of-magnitude approximations, so that I know whether or not the rules-of-thumb I end up using are at least within plausible spitting distance.)
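For concreteness, here is what the 1-decibel rule of thumb does to a probability. This is just the odds form of Bayes' rule, not a claim about actual eyewitness reliability:

```python
# Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio.
# Evidence in decibels simply adds up; 1 dB is a likelihood ratio of
# 10**(1/10) ~= 1.26, i.e. about a third of a bit, as the post says.
def update_prob(prior, evidence_db):
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * 10 ** (evidence_db / 10.0)
    return posterior_odds / (1.0 + posterior_odds)

# One unopposed report at 1 dB nudges a 50% prior to about 55.7%.
print(round(update_prob(0.5, 1.0), 3))  # 0.557
```

Two opposing 1 dB accounts cancel exactly (+1 dB - 1 dB = 0 dB), matching the "nulled by any opposing accounts" behavior the rule is meant to have.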
2fe8b494-5a95-4351-8374-7379c8682ee2
StampyAI/alignment-research-dataset/blogs
Blogs
Tallinn-Evans Challenge Grant Success! Thanks to the effort of our donors, the Tallinn-Evans Singularity Challenge has been met! All $125,000 contributed will be matched dollar for dollar by Jaan Tallinn and Edwin Evans, raising a total of $250,000 to fund the Machine Intelligence Research Institute’s operations in 2011. On behalf of our staff, volunteers, and entire community, I want to personally thank everyone who donated. Keep watching this blog throughout the year for updates on our activity, and sign up for [our mailing list](http://intelligence.org/get-involved/) if you haven’t yet. Here’s to a better future for the human species. We are preparing a donor page to provide a place for everyone who donated to share some information about themselves if they wish, including their name, location, and a quote about why they donate to the Machine Intelligence Research Institute. If you would like to be included in our public list, please [email us](mailto:admin@intelligence.org). Again, thank you. The Machine Intelligence Research Institute depends entirely on contributions from individual donors to exist. Money is indeed the [unit of caring](http://lesswrong.com/lw/65/money_the_unit_of_caring/), and one of the easiest ways that anyone can contribute directly to the success of the Machine Intelligence Research Institute. Another important way you can help is by plugging us into your networks, so please [email us](mailto:admin@intelligence.org) if you want to help. If you’re interested in connecting with other Machine Intelligence Research Institute supporters, we encourage joining our [group on Facebook](http://www.facebook.com/home.php?sk=group_140277979364858). There are also local *[Less Wrong](http://lesswrong.com)* meetups in cities like San Francisco, Los Angeles, New York, and London. 
The post [Tallinn-Evans Challenge Grant Success!](https://intelligence.org/2011/01/20/tallinn-evans-challenge-grant-success/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
fbb01d38-fbaa-4c1b-b2cc-9349b61115c4
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Infra-Bayesian physicalism: a formal theory of naturalized induction

*This is joint work by Vanessa Kosoy and Alexander "Diffractor" Appel.*

*For the proofs, see [1](https://www.alignmentforum.org/posts/cj3PRu8QoFm4BA8oc/infra-bayesian-physicalism-proofs-part-i) and [2](https://www.alignmentforum.org/posts/CPr8bRGekTyvh7nGC/infra-bayesian-physicalism-proofs-part-ii).*

**TLDR:** We present a new formal decision theory that realizes [naturalized induction](https://www.alignmentforum.org/tag/naturalized-induction). Our agents reason in terms of infra-Bayesian hypotheses, the domain of which is the cartesian product of computations and physical states, where the ontology of "physical states" may vary from one hypothesis to another. The key mathematical building block is the "bridge transform", which, given such a hypothesis, extends its domain to "physically manifest facts about computations". Roughly speaking, the bridge transform determines which computations are executed by the physical universe. In particular, this allows "locating the agent in the universe" by determining on which inputs its own source is executed.

0. Background
=============

The "standard model" of ideal agency is Bayesian reinforcement learning, and more specifically, [AIXI](https://arxiv.org/abs/cs/0004001). We [challenged](https://www.alignmentforum.org/s/CmrW8fCmSLK7E25sa) this model before due to its problems with non-realizability, suggesting infra-Bayesianism as an alternative. Both formalisms assume the "cartesian cybernetic framework", in which (i) the universe is crisply divided into "agent" and "environment" and (ii) the two parts interact solely via the agent producing actions which influence the environment and the environment producing observations for the agent. This is already somewhat objectionable on the grounds that this division is not a clearly well-defined property of the physical universe.
Moreover, once we examine the structure of the hypothesis such an agent is expected to learn (at least naively), we run into some concrete problems. The modern understanding of the universe is that no observer plays a privileged role[[1]](#fn-vECnaLn6AKxcmsWWZ-1). Therefore, the laws of physics are insufficient to provide a cartesian description of the universe, and must, to this end, be supplemented with "[bridge rules](https://www.lesswrong.com/posts/ethRJh2E7mSSjzCay/building-phenomenological-bridges)" that specify the agent's location inside the universe. That is, these bridge rules need to translate the fundamental degrees of freedom of a physical theory (e.g. quantum wavefunction) to the agent's observations (e.g. values of pixels on a camera), and translate the agent's actions (e.g. signal to robot manipulators) in the other direction[[2]](#fn-vECnaLn6AKxcmsWWZ-2). The cost of this is considerable growth in the description complexity of the hypothesis. A possible retort is something like "how many bits can it really take to pick out the computer within the universe state and describe how to translate universe state to observations? Sure, it might be a considerable chunk of data. But, since Solomonoff induction only needs to make a kilobyte worth of predictive mistakes to learn to predict the inputs as well as any thousand-bit predictor, it should learn this sort of stuff pretty fast." This objection is addressed more later on in this post. A first counter-objection is that, for practical algorithms, this can be a bigger obstacle, especially when sample complexity scales superlinearly with description complexity. 
For example, the [Russo-Van Roy](https://arxiv.org/abs/1403.5341) regret bound for Thompson sampling in multi-armed bandits has the time horizon necessary to get a particular regret bound scale as the square of the entropy, and the computational complexity cost can also go way up as the description complexity rises, because you need to evaluate more hypotheses more times to test them. For reinforcement learning it's even worse, as in the case of a game where an agent must enter an n-digit password: if it gets it right, it gets reward, and if it gets it wrong, it does not get reward and can try again. The learning time for this game scales as Ω(2^n).
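The exponential learning time is easy to see in simulation. The toy environment below is a made-up rendering of the password game just described, with an n-bit password and an agent that explores uniformly at random:

```python
import random

# Reward arrives only when the full n-bit password is guessed, so an agent
# exploring uniformly at random needs about 2**n episodes before it ever
# sees a reward signal it could learn from.
def guesses_until_reward(n, rng):
    password = tuple(rng.randint(0, 1) for _ in range(n))
    tries = 0
    while True:
        tries += 1
        guess = tuple(rng.randint(0, 1) for _ in range(n))
        if guess == password:
            return tries

rng = random.Random(0)
n = 8
avg = sum(guesses_until_reward(n, rng) for _ in range(200)) / 200
print(avg)  # empirically near 2**n = 256
```

Doubling n doubles nothing about the description length of the environment beyond one extra digit, yet it doubles the exponent in the learning time, which is the gap between description complexity and sample complexity the post is pointing at.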
MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), 
…far superlinearly with description complexity. Another assumption of AIXI is the simplicity prior, and we expect some form of this assumption to persist in computable and infra-Bayesian analogues. This reflects the intuitive idea that we expect the world to follow simple laws (or at least contain simple patterns). However, from the cartesian perspective, the "world" (i.e. the environment) is, prima facie, not simple at all (because of bridge rules)! Admittedly, the increase in complexity from the bridge rule is low compared to the cost of specifying the universe state, but once the agent learns the transition rules and bridge rule for the universe it's in, learning the state of the universe in addition doesn't seem to yield any particular unforeseen metaphysical difficulties. Further, the description complexity cost of the bridge rule seems likely to be above the description complexity of the laws of physics. Hence, there is some disconnect between the motivation for using a simplicity prior and its implementation in a cartesian framework. Moreover, if the true hypothesis is highly complex, it implies that the sample complexity of learning it is very high. And, as previously mentioned, the sample complexity issues are worse in practice than Solomonoff suggests. This should make us suspect that such a learning process is not properly exploiting Occam's razor.
Intuitively, such an agent a priori considers it equally plausible to discover itself to be a robot or a random clump of dust in outer space, since it's about as hard to specify a bridge rule interface between the computer and observations as between the dust clump and observations, and it needs a lot of data to resolve all those possibilities for how its observations connect to a world. Also, though Solomonoff induction is extremely effective at slicing through the vast field of junk hypotheses that do not describe the thing being predicted, once it has whittled things down to a small core of hypotheses that do predict things fairly accurately, the data to further distinguish between them may be slow in coming. If there's a simple predictor of events occurring in the world but it's running malign computation, then you don't have the luxury of 500 bits of complexity wiggle room (to quickly knock out this hypothesis), because that's a factor of $2^{500}$ probability difference. Doing worst-case hypothesis testing as in KWIK learning would require a very aggressive threshold indeed, and mispredictions can be rare but important.

Furthermore, some events are simple to describe from the subjective (cartesian) point of view, but complex to describe from an objective (physical) point of view (for example, all the pixels of the camera becoming black). Modifying a hypothesis by positing exceptional behavior following a simple event only increases its complexity by the difficulty of specifying the event and what occurs afterwards, which could be quite low. Hence, AIXI-like agents would have high uncertainty about the consequences of observationally simple events. On the other hand, from an objective perspective such uncertainty seems irrational. (Throwing a towel on the camera should not break physics.) In other words, cartesian reasoning is biased to privilege the observer.
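To make the size of that "wiggle room" concrete, here is a toy back-of-the-envelope calculation (all specific numbers below are illustrative assumptions, not from the text): under a simplicity prior, a hypothesis that is $k$ bits simpler starts with a $2^k$ head start in prior odds, so knocking it out requires accumulating $k$ bits of log-likelihood evidence against it.

```python
# Toy calculation: how rare mispredictions interact with a complexity head start.
# All numbers below are illustrative assumptions.

advantage_bits = 50                  # the malign hypothesis is 50 bits simpler
prior_odds = 2.0 ** advantage_bits   # so it starts with 2^50 : 1 prior odds

# Suppose each misprediction costs it 5 bits of log-likelihood relative to
# the benign hypothesis (i.e. a likelihood ratio of 2^-5 on that event).
penalty_bits_per_miss = 5.0

# Mispredictions needed just to bring the posterior odds back to 1 : 1.
misses_to_break_even = advantage_bits / penalty_bits_per_miss
print(misses_to_break_even)  # 10.0 -- and mispredictions can be rare but important
```

If disagreeing events are rare, waiting for ten of them can take a very long time, which is the sense in which the wiggle room is unaffordable.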
Yet another failure of cartesian agents is the inability to reason about origin theories. When we learn that a particular theory explains our own existence (e.g. evolution), this serves as a mark in favor of the theory. We can then exploit this theory to make useful predictions or plans (e.g. anticipate that using lots of antibiotics will cause bacteria to develop resistance). However, for a cartesian agent the question of origins is meaningless. Such an agent perceives its own existence as axiomatic, hence there is nothing to explain.

Finally, cartesian agents are especially vulnerable to [acausal attacks](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/). Suppose we deploy a superintelligent cartesian AI called Kappa. And, imagine a superintelligent agent Mu that inhabits some purely hypothetical universe. If Mu is motivated to affect our own (real) universe, it can run simulations of Kappa's environment. Kappa, who doesn't know a priori whether it exists in our universe or in Mu's universe, will have to seriously consider the hypothesis that it is inside such a simulation. And Mu will deploy the simulation in such a manner as to make the simulation hypothesis much simpler, *thanks to simpler bridge rules*. This will cause Kappa to become overwhelmingly confident that it *is* in a simulation. Once this is achieved, Mu can cause the simulation to diverge from our reality at a strategically chosen point, such that Kappa is induced to take an irreversible action in Mu's favor (effectively a treacherous turn). Of course, this requires Kappa to predict Mu's motivations in some detail. This is possible if Kappa develops a good enough understanding of [metacosmology](https://www.alignmentforum.org/posts/dPmmuaz9szk26BkmD/shortform?commentId=N8oamtFAhWKEbyCBq). An upcoming post by Diffractor will discuss acausal attacks in more detail.
In the following sections, we will develop a "physicalist" formalism that entirely replaces the cartesian framework and cures the abovementioned ills, though we have not yet reached the stage of proving improved regret bounds with it: so far we have only nailed down its basic mathematical properties. As an additional benefit, it allows naturally incorporating utility functions that depend on unobservables, thereby avoiding the problem of ["ontological crises"](https://arxiv.org/abs/1105.3821). At the same time, it seems to impose some odd constraints on the utility function. We discuss the possible interpretations of this.

1. Formalism
============

Notation
--------

It will be more convenient to use ultradistributions rather than infradistributions. This is a purely notational choice: the decision theory is unaffected, since we are going to apply these ultradistributions to loss functions rather than utility functions. As support for this claim, Diffractor originally wrote down most of the proofs in infradistribution form, and converting them for this post was rather straightforward. In addition, for the sake of simplicity, we will stick to finite sets: more general spaces will be treated in a future article. So far, we're up to countable products of finite sets.

We denote $\mathbb{R}_+ := [0, \infty)$. Given a finite set $X$, a *contribution* on $X$ is $\theta : X \to \mathbb{R}_+$ s.t. $\sum_x \theta(x) \le 1$ (it's best to regard it as a *measure* on $X$). The space of contributions is denoted $\Delta_c X$. Given $f : X \to \mathbb{R}$ and $\theta \in \Delta_c X$, we denote $\theta(f) := \sum_x \theta(x) f(x)$. There is a natural partial order on contributions: $\theta_1 \le \theta_2$ when $\forall x \in X : \theta_1(x) \le \theta_2(x)$. Naturally, any distribution is in particular a contribution, so $\Delta X \subseteq \Delta_c X$. A *homogenous ultracontribution* (HUC) on $X$ is a non-empty closed convex $\Theta \subseteq \Delta_c X$ which is downward closed w.r.t. the partial order on $\Delta_c X$. The space of HUCs on $X$ is denoted $\square_c X$. A *homogenous ultradistribution* (HUD) on $X$ is a HUC $\Theta$ s.t. $\Theta \cap \Delta X \ne \emptyset$. The space of HUDs on $X$ is denoted $\square X$.
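As a quick illustration of these definitions, here is a minimal sketch in Python. The dict-based representation and the finite set of generating contributions are our own modeling assumptions, not part of the formalism: we approximate a HUC by the convex, downward closure of finitely many contributions, and since $\Theta(f)$ maximizes a linear functional, for nonnegative $f$ the maximum is attained at one of the generators.

```python
# A minimal sketch of contributions and HUC expectations on a two-point set
# (representation choices here are illustrative assumptions).
X = ["a", "b"]

def is_contribution(theta):
    # A contribution is theta : X -> R_+ with total mass at most 1.
    return all(v >= 0 for v in theta.values()) and sum(theta.values()) <= 1

def expect(theta, f):
    # theta(f) := sum_x theta(x) f(x)
    return sum(m * f(x) for x, m in theta.items())

def huc_expect(generators, f):
    # Theta(f) := max_{theta in Theta} theta(f). For Theta the convex,
    # downward closure of finitely many generators and nonnegative f,
    # the maximum is attained at a generator.
    return max(expect(theta, f) for theta in generators)

theta1 = {"a": 0.5, "b": 0.25}   # a contribution (total mass 0.75)
theta2 = {"a": 0.1, "b": 0.8}    # another contribution
assert is_contribution(theta1) and is_contribution(theta2)

f = lambda x: 1.0 if x == "a" else 0.0
print(huc_expect([theta1, theta2], f))  # 0.5
```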
Given $f : X \to \mathbb{R}$ and $\Theta \in \square_c X$, we denote $\Theta(f) := \max_{\theta \in \Theta} \theta(f)$.

Given $s : X \to Y$, $s_* : \square_c X \to \square_c Y$ is the pushforward by $s$:

$$s_* \Theta := \{s_* \theta \mid \theta \in \Theta\}$$
$$(s_* \theta)(y) := \sum_{x \in s^{-1}(y)} \theta(x)$$

Given $\Xi : X \to \square_c Y$, $\Xi_* : \square_c X \to \square_c Y$ is the pushforward by $\Xi$:

$$\Xi_* \Theta := \{\kappa_* \theta \mid \theta \in \Theta,\ \kappa : X \to \Delta_c Y,\ \forall x \in X : \kappa(x) \in \Xi(x)\}$$
$$(\kappa_* \theta)(y) := \sum_{x \in X} \theta(x) \kappa(x; y)$$

$\mathrm{pr}_X : X \times Y \to X$ is the projection mapping and $\mathrm{pr}_{-Y} := \mathrm{pr}_X$. We slightly abuse notation by omitting the asterisk in pushforwards by these.

Given $\Theta \in \square_c X$ and $\Xi : X \to \square_c Y$, $\Theta \ltimes \Xi \in \square_c(X \times Y)$ is the semidirect product:

$$\Theta \ltimes \Xi := \{\kappa \ltimes \theta \mid \theta \in \Theta,\ \kappa : X \to \Delta_c Y,\ \forall x \in X : \kappa(x) \in \Xi(x)\}$$
$$(\kappa \ltimes \theta)(x, y) := \theta(x) \kappa(x; y)$$

We will also use the notation $\Xi \rtimes \Theta \in \square_c(Y \times X)$ for the same HUC with $X$ and $Y$ flipped. And, for $\Lambda \in \square_c Y$, $\Theta \ltimes \Lambda \in \square_c(X \times Y)$ is the semidirect product of $\Theta$ with the *constant* ultrakernel whose value is $\Lambda$[[3]](#fn-vECnaLn6AKxcmsWWZ-3).

For more discussion of HUDs, see the [previous article](https://www.alignmentforum.org/s/CmrW8fCmSLK7E25sa/p/idP5E5XhJGh9T5Yq9), where we used the equivalent concept of "cohomogenous infradistribution".

Notation Reference
------------------

If you got lost somewhere and want to scroll back to see some definition, or to see how the dual form works with infradistributions, this section is for you.

$\theta$ is a contribution: a measure with total mass 1 or less. The dual concept is an a-measure $(\lambda\mu, b)$ with $\lambda + b = 1$.

$\Theta$ is a HUC (homogenous ultracontribution) or HUD (homogenous ultradistribution): a closed convex downward closed set of contributions. The dual concepts are cohomogenous inframeasure and cohomogenous infradistribution, respectively.

$\Delta_c X, \square_c X, \square X$ are the spaces of contributions, homogenous ultracontributions, and homogenous ultradistributions respectively.

$\theta(f), \Theta(f)$ are the expectations of functions $f : X \to [0, 1]$, defined in the usual way. For $\theta$, it's just the expectation of a function w.r.t. a measure, and for $\Theta$, it's $\Theta(f) := \max_{\theta \in \Theta} \theta(f)$, to perfectly parallel a-measures evaluating functions by taking expectation, and inframeasures evaluating functions via $\min_{(m, b) \in \Psi} m(f) + b$.
$\le$ is the ordering on contributions/HUCs/HUDs, which is the function ordering: $\theta \le \theta'$ iff for all $f : X \to [0, 1]$, $\theta(f) \le \theta'(f)$, and similarly for HUCs. Inframeasures are equipped with the opposite ordering.

$s_* \Theta$ is the pushforward along the function $s : X \to Y$. This is a standard probability theory concept which generalizes to all the infra and ultra stuff.

$\Xi_* \Theta$ is the pushforward of $\Theta$ along the ultrakernel $\Xi : X \to \square_c Y$. This is just the generalization to infra and ultra stuff of the ability to push a probability distribution on $X$ through a probabilistic function $X \to \Delta Y$ to get a probability distribution on $Y$.

$\Theta \ltimes \Xi$ is the semidirect product of $\Theta$ and $\Xi$, an element of $\square_c(X \times Y)$. This is the generalization of the ability to take a probability distribution on $X$ and a probabilistic kernel $X \to \Delta Y$, and get a joint distribution on $X \times Y$.

$A, O$ are the sets of actions and observations. $N$ is the time horizon.

$R$ is the space of programs, $\Sigma$ is the space of outputs. $\Gamma = \Sigma^R$ is the space of functions from programs to the results they output. An element of $\Gamma$ can be thought of as the state of the "computational universe", for it specifies what all the programs do.

$H$ is the space of histories: action-observation sequences that can end at any point, ending with an observation. $D$ is the space of destinies: action-observation sequences that are as long as possible, going up to the time horizon.

$C$ is a relation on $\Gamma \times D$ that says whether a computational universe is consistent with a given destiny. A very important note is that this is *not* the same thing as "the policy is consistent with the destiny" (the policy's actions are the same as what the destiny advises). This is saying something more like: if the destiny includes an observation that the computer spat out result 1 when run on computation A, then that is only consistent with mathematical universes in which computation A outputs result 1.
We don't want to commit to the exact implementation details of it, though, so we leave it undefined beyond "it's a relation".

$\Phi$ is the space of "physics outcomes"; it can freely vary depending on the hypothesis. It's not a particular fixed space.

$\Theta$ is the variable typically used for physicalist hypotheses, elements of $\square(\Gamma \times \Phi)$: your uncertainty over the joint distribution over the computational universe and the physical universe.

$G$ is the code of the program-which-is-the-agent. So, $G(h)$ would be the computation that runs what the agent does on history $h$, and returns the resulting action.

$\mathrm{el}\,\Gamma$ is the subset of the space $\Gamma \times 2^\Gamma$ consisting of $(y, \alpha)$ pairs s.t. $y \in \alpha$. The $y$ can be thought of as the mathematical universe, and $\alpha$ can be thought of as the set of mathematical universes that are observationally indistinguishable from it.

$\chi_A$ is the indicator function that's 1 on the set $A$, and 0 everywhere else. It can be multiplied by a measure to get the restriction of the measure to a particular set.

$\mathrm{Br}(\Theta)$ is the bridge transform of $\Theta$, defined in Definition 1.1, an ultracontribution over the space $\mathrm{el}\,\Gamma \times \Phi$.

$H_{y\alpha}$ is the set of instantiated histories, relative to mathematical universe $y$ and set of observationally indistinguishable universes $\alpha$. It's the set of histories $h$ where all the "math universes" in $\alpha$ agree on how the agent's source code reacts to all the prefixes of $h$, and where $h$ can be extended to some destiny that's consistent with math universe $y$. I.e., for a history to be in here, all its prefixes have to be instantiated, and it must be consistent with the selected math universe.

Setting
-------

As in the cartesian framework, we fix a finite set $A$ of actions and a finite set $O$ of observations. We assume everything happens within a fixed finite[[4]](#fn-vECnaLn6AKxcmsWWZ-4) time horizon $N \in \mathbb{N}$.
We assume that our agent has access to a *computer*[[5]](#fn-vECnaLn6AKxcmsWWZ-5) on which it can execute some finite[[4:1]](#fn-vECnaLn6AKxcmsWWZ-4) set of programs $R$ with outputs in a finite alphabet $\Sigma$. Let $\Gamma := \Sigma^R$ be the set of "possible computational universes"[[6]](#fn-vECnaLn6AKxcmsWWZ-6). We denote $H := (A \times O)^{<N}$ (the set of histories) and $D := (A \times O)^{N-1} \times A$ (the set of "destinies"). To abstract over the details of how the computer is operated, we assume a relation $C \subseteq D \times \Gamma$ whose semantics is: $dCy$ (our notation for $(d, y) \in C$) if and only if destiny $d$ is a priori consistent with computational universe $y$. For example, suppose some $a \in A$ implies a command to execute program $r \in R$, and if $o \in O$ follows $a$, it implies observing the computer return output $i \in \Sigma$ for $r$. Then, if $d$ contains the substring $ao$ and $dCy$, it must be the case that $y(r) = i$.

A *physicalist hypothesis* is a pair $(\Phi, \Theta)$, where $\Phi$ is a finite[[4:2]](#fn-vECnaLn6AKxcmsWWZ-4) set representing the physical states of the universe and $\Theta \in \square(\Gamma \times \Phi)$ represents a joint belief about computations and physics. By slight abuse of notation we will refer to such a $\Theta$ as a physicalist hypothesis, understanding $\Phi$ to be implicitly specified. Our agent will have a prior over such hypotheses, ranging over different $\Phi$.

Two questions stand out to us at this point. The first is: what is the domain over which our loss function should be defined? The second is: how do we define the counterfactuals corresponding to different policies $\pi : H \to A$? The answers to both questions turn out to require the same mathematical building block.

For the first question, we might be tempted to identify $\Phi$ as our domain. However, prima facie this doesn't make sense, since $\Phi$ is hypothesis-dependent. This is the [ontological crisis](https://www.lesswrong.com/tag/ontological-crisis) problem: we expect the agent's values to be defined within a certain ontology, which is not necessarily the best ontology for formulating the laws of the physical universe.
For example, a paperclip maximizer might benefit from modeling the universe in terms of quantum fields rather than paperclips. In principle, we can circumvent this problem by requiring our $\Phi$ to be equipped with a mapping $\nu : \Phi \to \Phi_0$, where $\Phi_0$ is the "axiological" ontology. However, this $\nu$ is essentially a bridge rule, carrying with it all the problems of bridge rules. An acausal attack can be performed by an adversarial hypothesis imprinting the "axiological rendering" of the target universe onto the microscopic degrees of freedom of the source universe, in order to obtain a low-complexity $\Phi \to \Phi_0$ function. The analogue of the towel-on-camera issue is that, once you've already coded up your uncertainty over math and physics along with how to translate from physics to the ontology of value, it doesn't take much extra complexity to tie "axiologically simple" events (the analogue of low-complexity observations) to "physics-simple" consequences (the analogue of a low-complexity change in what happens), like "if all the paperclips are red, the fine-structure constant doubles in value".

Instead, we will take a *computationalist* stance: value is not a property of physical states or processes, but of the *computations* realized by physical processes. For example, if our agent is "selfish" in the sense that rewards/losses are associated purely with subjective histories, the relevant computation is the *agent's own source code*. Notice that, for the program-which-is-the-agent $G$, histories are inputs. Hence, given a loss function $l : H \to \mathbb{R}$, we can associate the loss $l(h)$ with the computation $G(h)$. Admittedly, there is an implicit assumption that the agent has access to its own source code, but modal decision theory made the same assumption. For another example, if our agent is a diamond maximizer, then the relevant computations are simulations of the physics used to define "diamonds". A more concrete analogue of this is worked out in detail in section 3, regarding Conway's Game of Life.
For the second question, we might be tempted to follow [updateless decision theory](https://www.lesswrong.com/tag/updateless-decision-theory): counterfactuals correspond to conditioning on $G = \pi$ (remember, $G$ is the code of the agent). However, this is not "fair", since it requires the agent to be "responsible" for copies of itself instantiated with fake memories. Such a setting admits no learning-theoretic guarantees, since learning requires trusting your own memory. (Moreover, the agent also has to be able to trust the computer.) Therefore, our counterfactuals should only impose $G(h) = \pi(h)$ when $h$ is a "real memory", which we again interpret through computationalism: $h$ is real if and only if $G(h')$ is physically realized for every prefix $h'$ of $h$.

Both of our answers require a formalization of the notion "assuming hypothesis $\Theta$, this computation is physically realized". More precisely, we should allow for computations to be realized with certain *probabilities*, and more generally allow for ultradistributions over which computations are realized. We will now accomplish this formalization.

Bridge transform
----------------

Given any set $A$, we denote $\mathrm{el}\,A = \{(a, B) \in A \times 2^A \mid a \in B\}$. $\mathrm{supp}$ stands for "support" and $\chi_A$ is the characteristic function of $A$.

**Definition 1.1:** *Let $\Gamma, \Phi$ be finite sets and $\Theta \in \square_c(\Gamma \times \Phi)$. The* bridge transform *of $\Theta$ is $\mathrm{Br}_\Gamma(\Theta) \in \square_c(\Gamma \times 2^\Gamma \times \Phi)$ s.t. $\theta \in \mathrm{Br}_\Gamma(\Theta)$ if and only if:*

* $\mathrm{supp}\,\theta \subseteq \mathrm{el}\,\Gamma \times \Phi$
* *for any $s : \Gamma \to \Gamma$: $\mathrm{pr}_{\Gamma \times \Phi}\, \chi_{\mathrm{el}\,\Gamma \times \Phi} (s \times \mathrm{id}_{2^\Gamma \times \Phi})_* \theta \in \Theta$.*

*We will use the notation $\mathrm{Br}(\Theta)$ when $\Gamma$ is obvious from the context.*

Notice that we are multiplying by $\chi_{\mathrm{el}\,\Gamma \times \Phi}$, not pushforwarding. The $2^\Gamma$ variable of the bridge transform denotes the "facts about computations realized by physics". In particular, if this $\alpha \in 2^\Gamma$ takes the form $\{y \in \Gamma \mid \forall r \in R_0 : y(r) = y_0(r)\}$ for some $R_0 \subseteq R$ and $y_0 \in \Sigma^{R_0}$, then we may say that the computations in $R_0$ are "realized" and the computations in $R \setminus R_0$ are "not realized".
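When $\Gamma$ and $\Phi$ are tiny, the two conditions of Definition 1.1 can be checked by brute force over all maps $s : \Gamma \to \Gamma$. The sketch below is our own illustration (the dict representation and the `in_Theta` membership oracle for the HUC $\Theta$ are assumed interfaces, not from the post):

```python
from itertools import product

def bridge_member(theta, Gamma, in_Theta):
    """Brute-force check of the two conditions in Definition 1.1.

    theta    : dict mapping (y, alpha, x) -> mass, alpha a frozenset over Gamma
    in_Theta : assumed membership oracle for Theta, taking a dict (y, x) -> mass
    """
    # Condition 1: support inside el(Gamma) x Phi, i.e. y must belong to alpha.
    if any(m > 0 and y not in alpha for (y, alpha, x), m in theta.items()):
        return False
    # Condition 2: for every s : Gamma -> Gamma, push the Gamma coordinate
    # through s (leaving alpha and x alone), multiply by the indicator of
    # el(Gamma) x Phi, project out alpha, and check the result lies in Theta.
    for values in product(Gamma, repeat=len(Gamma)):
        s = dict(zip(Gamma, values))
        proj = {}
        for (y, alpha, x), m in theta.items():
            y2 = s[y]
            if y2 in alpha:  # the indicator chi_{el(Gamma) x Phi}
                proj[(y2, x)] = proj.get((y2, x), 0.0) + m
        if not in_Theta(proj):
            return False
    return True

# Tiny example: Gamma = {0, 1}, Phi = {"p"}; Theta caps each point's mass at 0.6.
in_Theta = lambda t: sum(t.values()) <= 1 and all(m <= 0.6 for m in t.values())
both = frozenset({0, 1})

# Maximal alpha: a constant s can pile all the mass onto one point,
# violating the cap, so this theta is NOT in the bridge transform.
loose = {(0, both, "p"): 0.5, (1, both, "p"): 0.5}
# Singleton alphas ("y is fully pinned down by physics") survive every s.
tight = {(0, frozenset({0}), "p"): 0.5, (1, frozenset({1}), "p"): 0.5}

print(bridge_member(loose, [0, 1], in_Theta))  # False
print(bridge_member(tight, [0, 1], in_Theta))  # True
```

The example also illustrates the intuition in the text: the smaller $\alpha$ is, the more "facts" physics knows about the computational universe, and the fewer modifications $s$ can smuggle mass past the indicator.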
More generally, talking only about which computations are realized is imprecise, since $\alpha$ might involve "partial realization" and/or "entanglement" between computations (i.e. not be of the form above). Intuitively, this definition expresses that the "computational universe" can be freely modified as long as the "facts known by physics" are preserved. However, that isn't what originally motivated the definition. The bulk of its justification comes from its pleasing mathematical properties, discussed in the next section.

A physicalist agent should be equipped with a prior over physicalist hypotheses. For simplicity, suppose it's a discrete Bayesian prior (it is straightforward to generalize beyond this): hypothesis $\Theta_i$ is assigned probability $\zeta(i)$, with $\sum_i \zeta(i) = 1$. Then, we can consider the *total* bridge transform of the prior. It can't be given by mixing the hypotheses together and applying the bridge transform, because every hypothesis has its own choice of $\Phi$, its own ontology, so you can't mix them before applying the bridge transform. You have to apply the bridge transform to each component first, forget about the choice of $\Phi$ by projecting it out, and then mix the results. This receives further support from Proposition 2.13, which takes an alternate possible way of defining the bridge transform for mixtures (extend all the hypotheses from $\Gamma \times \Phi_i$ to $\Gamma \times \bigsqcup_i \Phi_i$ in the obvious way so you can mix them first) and shows that it produces the same result.

**Definition 1.2:** $\mathrm{Br}(\zeta \ltimes \Theta) := \sum_i \zeta(i)\, \mathrm{pr}_{\mathrm{el}\,\Gamma} \mathrm{Br}(\Theta_i) \in \square\, \mathrm{el}\,\Gamma$

Evaluating policies
-------------------

Given $A \subseteq X$, $\top_A \in \square X$ is defined as $\{\theta \in \Delta_c X \mid \mathrm{supp}\,\theta \subseteq A\}$. I.e., it's total uncertainty over which point in $A$ will be selected.

We need to assume that $R$ contains programs representing the agent itself. That is, there is some $M \in \mathbb{N}$, $\mathrm{dec} : \Sigma^M \to A$, and for each $h \in H$, $\ulcorner G(h) \urcorner \in R^M$.
Pretty much: if you have a large number of actions available and a limited number of symbols in your language, actions (and the outputs of other programs with a rich space of outputs) can be represented by $M$-tuples of programs, like "what's the first bit of this action choice", "what's the second bit of this action choice", and so on. So, $M$ is just how many bits you need, $\mathrm{dec}$ is the mapping from program outputs to the actual action, and $\ulcorner G(h) \urcorner$ is the $M$-tuple of programs which implements the computation "what does my source code do on input history $h$". The behavior of the agent in a particular mathematical universe is given by taking each program in $\ulcorner G(h) \urcorner$, using the mathematical universe to figure out what each of the programs outputs, and then using $\mathrm{dec}$ to convert that bitstring to an action.

**Definition 1.3:** *Given $y \in \Gamma$ and $h \in H$, we define $G_y(h) \in A$ to be $\mathrm{dec}(a)$ for $a \in \Sigma^M$ given by $a_i := y(\ulcorner G(h) \urcorner_i)$.*

We can now extract counterfactuals from any $\Lambda \in \square\, \mathrm{el}\,\Gamma$. Specifically, given any policy $\pi$ we define some $C^\pi \subseteq \mathrm{el}\,\Gamma$ (the set of stuff consistent with behaving according to $\pi$, for a yet-to-be-defined notion of consistency) and define the counterfactual as $\Lambda \cap \top_{C^\pi}$. We could use the "naive" definition:

**Definition 1.4:** *$C^\pi_{\mathrm{naive}}$ is the set of all $(y, \alpha) \in \mathrm{el}\,\Gamma$ s.t. for any $h \in H$, $G_y(h) = \pi(h)$.*

Per the discussion above, it seems better to use a different definition. We will use the notation $h \sqsubset g$ to mean "$h$ is a proper prefix of $g$" and $h \sqsubseteq g$ to mean "$h$ is a prefix of $g$".

**Definition 1.5:** *Given $(y, \alpha) \in \mathrm{el}\,\Gamma$, let $H_{y\alpha}$ be the set of all $h \in H$ s.t. the following two conditions hold:*

1. *For all $ga \in H \times A$ and $y' \in \Gamma$ where $ga \sqsubset h$ and $y' \in \alpha$: $G_{y'}(g) = a$.*
2. *$h \sqsubset d$ for some $d \in D$ s.t. $dCy$.*

*$C^\pi_{\mathrm{fair}}$ is the set of all $(y, \alpha) \in \mathrm{el}\,\Gamma$ s.t. for any $h \in H_{y\alpha}$, $G_y(h) = \pi(h)$.*

Here, condition 1 says that we only "take responsibility" for the action on a particular history when the history was actually observed (all preceding evaluations of the agent are realized computations).
Condition 2 says that we only "take responsibility" when the computer is working correctly. At this stage, Definition 1.5 should be regarded as tentative, since we only have one result so far that validates it: namely, that the set of $(y, \alpha)$ in $C^\pi_{\mathrm{fair}}$ depends only on what the policy $\pi$ does on possible inputs, rather than on what the policy does on impossible inputs where one of the past memorized actions is not what the policy would actually do. We hope to rectify this in future articles.

Putting everything together: given a loss function $L : \mathrm{el}\,\Gamma \to \mathbb{R}$, which depends on the state of the mathematical universe and on which computations are realized or not realized, the loss of policy $\pi$ is given by applying the bridge transform to coax the hypotheses into the appropriate form, intersecting with the set of possibilities consistent with the agent behaving according to the policy, and evaluating the expectation of the loss function:

$$L_{\mathrm{pol}}(\pi, \zeta) := (\mathrm{Br}(\zeta \ltimes \Theta) \cap \top_{C^\pi_{\mathrm{fair}}})(L)$$

Evaluating agents
-----------------

So far we regarded the agent's source code $G$ and the agent's policy $\pi$ as independent variables, and explained how to evaluate different policies given fixed $G$. However, in reality the policy is determined by the source code. Therefore, it is desirable to have a way to evaluate different *codes*. We achieve this using an algorithmic [information](https://www.lesswrong.com/posts/Q4hLMDrFd8fbteeZ8/measuring-optimization-power) [theory](https://www.alignmentforum.org/posts/dPmmuaz9szk26BkmD/vanessa-kosoy-s-shortform?commentId=ovBmi2QFikE6CRWtj#comments) approach. We also want to allow the loss function to depend on $G$. That is, we postulate $L : (R^M)^H \times \mathrm{el}\,\Gamma \to \mathbb{R}$. Specifically, since $R^M$ can be thought of as the space of actions, $(R^M)^H$ is basically the space of policies.
In section 3 we will see why in some detail, but for now think of the difference between "maximizing my own happiness" and "maximizing Alice's happiness": the first is defined *relative* to the agent (depends on G) whereas the second is absolute (doesn't depend on G). In particular, for a selfish agent that just cares about its own observations, its loss function must reference its own source code. **Definition 1.6:** *Denote G∗:H→A the policy actually implemented by G. Fix ξ∈Δ(AH). The *physicalist intelligence of G relative to the baseline policy mixture ξ, prior ζ and loss function L* is defined by:* g(G∣ξ;ζ,L):=−logPrπ∼ξ[Lpol(┌G┐,π,ζ)≤Lpol(┌G┐,G∗,ζ)] Notice that Lpol depends on ┌G┐ in two ways: through the direct dependence of L on ┌G┐ and through Cπfair. In particular, it makes sense to choose ζ and ξ as simplicity priors. There is no obvious way to define "physicalist AIXI", since we cannot have g=∞. For one thing, g is not even defined for uncomputable agents. In principle we could define it for [non-uniform](https://en.wikipedia.org/wiki/Advice_(complexity)) agents, but then we get a fixpoint problem that doesn't obviously have a solution: finding a non-uniform agent G s.t. Lpol(┌G┐,G∗,ζ)=minπLpol(┌G┐,π,ζ). On the other hand, once we spell out the infinitary (N=∞) version of the formalism, it should be possible to prove the existence of agents with arbitrarily high finite g. That's because our agent can use quining to access its own source code ┌G┐, and then brute force search a policy π∗ϵ with Lpol(┌G┐,π∗ϵ,ζ)<infπ:H→ALpol(┌G┐,π,ζ)+ϵ. 2. Properties of the bridge transform ===================================== In this section we will *not* need to assume Γ is of the form ΣR: unless stated otherwise, it will be any finite set. Sanity test ----------- **Proposition 2.1:** *For any Γ, Φ and Θ∈□c(Γ×Φ), Br(Θ) exists and satisfies prΓ×ΦBr(Θ)=Θ. 
In particular, if Θ∈□(Γ×Φ) then Br(Θ)∈□(elΓ×Φ).* Downwards closure ----------------- Roughly speaking, the bridge transform tells us which computations are physically realized. But actually it only bounds it from one side: some computations are definitely realized but any computation *might* be realized. One explanation for why it must be so is: if you looked at the world in more detail, you might realize that there are small-scale, previously invisible, features of the world which depend on novel computations. There is a direct tension between bounding both sides (i.e. being able to say definitively that a computation isn't instantiated) and having the desirable property that learning more about the small-scale structure of the universe narrows down the uncertainty. To formalize this, we require the following definitions: **Definition 2.1:** *Let X be a partially ordered set (poset). Then the *induced partial order* on ΔcX is defined as follows. Given θ,η∈ΔcX, θ⪯η if and only if for any monotonically non-decreasing function f:X→R+, θ(f)≤η(f).* This is also called the stochastic order (which is standard mathematical terminology). Intuitively, θ⪯η means that η has its measure further up in the poset than θ does. To make that intuition formal, we can also characterize the induced order as follows: **Proposition 2.2:** *Let X be a poset, θ,η∈ΔcX. Then, θ⪯η if and only if there exists κ:X→ΔX s.t.:* * *For all x∈X, y∈suppκ(x): x≤y.* * κ∗θ≤η **Proposition 2.3:** *Let X be a poset, θ,η∈ΔcX. Then, θ⪯η if and only if there exists κ:X→ΔX s.t.:* * *For all x∈X, y∈suppκ(x): x≥y.* * θ≤κ∗η Or, in words, you can always go from θ to η by moving probability mass upwards: in Proposition 2.2 the kernel κ(x) is supported on the set of points at-or-above x, so pushing θ forward along κ moves its mass up; Proposition 2.3 is the mirror image, recovering θ by moving η's mass down. We can now state the formalization of only bounding one side of the bridge transform. Let elΓ×Φ be equipped with the following order. (y,α,x)≤(y′,α′,x′) if and only if y=y′, x=x′ and α⊆α′.
Then: **Proposition 2.4:** *For any Γ, Φ and Θ∈□c(Γ×Φ), Br(Θ) is downwards closed w.r.t. the induced order on Δc(elΓ×Φ). That is, if θ∈Br(Θ) and η⪯θ then η∈Br(Θ).* Simple special case ------------------- Let's consider the special case where there's only one program, which can produce two possible outputs, a 0 and a 1. And these two outputs map to two different distributions over physics outcomes in Φ. Intuitively, if the computation isn't realized/instantiated, the distribution over physics outcome should be identical, while if the computation is realized/instantiated, it should be possible to look at the physics results to figure out how the computation behaves. The two probability distributions may overlap some intermediate amount, in which case it should be possible to write the two probability distributions as a mixture between a probability distribution that behaves identically regardless of the program output (the "overlap" of the two distributions), and a pair of probability distributions corresponding to the two different program outputs which are disjoint. And the total variation distance (dTV(ϕ(0),ϕ(1))) between the two probability distributions is connected to the size of the distribution overlap. Proposition 2.5 makes this formal. **Proposition 2.5:** *Consider a finite set X, ϕ?∈ΔX, ϕ!:{0,1}→ΔX, p∈[0,1] and ϕ:=(1−p)ϕ?+pϕ!. Then, p≥dTV(ϕ(0),ϕ(1)). Conversely, consider any ϕ:{0,1}→ΔX. Then, there exist ϕ?∈ΔX and ϕ!:{0,1}→ΔX s.t. ϕ=(1−p)ϕ?+pϕ! for p:=dTV(ϕ(0),ϕ(1)).* The bridge transform should replicate this same sort of analysis. We can interpret the case of "total uncertainty over math, but knowing how physics turns out conditional on knowing how math turns out" by ⊤Γ⋉ϕ for some ϕ:Γ→ΔΦ. 
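Proposition 2.5 is easy to check numerically. A minimal Python sketch, with two hypothetical distributions over a 3-element outcome space:

```python
# Two hypothetical distributions over physics outcomes, for program outputs 0 and 1.
phi0 = [0.5, 0.3, 0.2]
phi1 = [0.1, 0.3, 0.6]

# The common overlap is the pointwise minimum; its deficit from 1 is exactly dTV.
overlap = [min(a, b) for a, b in zip(phi0, phi1)]
p = 1 - sum(overlap)
d_tv = 0.5 * sum(abs(a - b) for a, b in zip(phi0, phi1))
assert abs(p - d_tv) < 1e-9  # p = dTV(phi0, phi1)

# Decompose: phi_i = (1 - p) * phi_question + p * phi_bang_i, with disjoint phi_bang's.
phi_question = [m / (1 - p) for m in overlap]
phi_bang0 = [(a - m) / p for a, m in zip(phi0, overlap)]
phi_bang1 = [(b - m) / p for b, m in zip(phi1, overlap)]
assert all(x * y < 1e-12 for x, y in zip(phi_bang0, phi_bang1))  # disjoint supports
```

The overlapping mass 1−dTV is exactly the most that Proposition 2.6 below allows to be allocated to the event "the computation is unrealized".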
Taking the special case where Γ={0,1} for one program that can output two possible answers, it should return the same sort of result, where the two probability distributions can be decomposed into their common overlap, and non-overlapping pieces, and the overlapping chunk of probability measure should be allocated to the event "the computation is not instantiated", while the disjoint ones should be allocated to the event "the computation is instantiated". As it turns out, this does indeed happen, with the same total-variation-distance-based upper bound on the probability that the computation is unrealized (namely, 1−dTV(ϕ(0),ϕ(1))). **Proposition 2.6:** *Consider any Φ and ϕ:{0,1}→ΔΦ. Denote U:={0,1}×{{0,1}}×Φ (the event "program is unrealized"). Let Λ:=Br(⊤{0,1}⋉ϕ). Then,* Λ(χU)=1−dTV(ϕ(0),ϕ(1)) Bound in terms of support ------------------------- If the physical state is x∈Φ, then it is intuitively obvious that the physically manifest facts about the computational universe y∈Γ must include the fact that (y,x)∈suppΘ. Formally: **Proposition 2.7:** *If X is a poset and Θ∈□cX, then Θ↓ will denote downward closure of Θ. For any Γ, Φ and Θ∈□c(Γ×Φ) if (y,α,x)∈suppBr(Θ) and y′∈α, then (y′,x)∈suppΘ. Moreover, define susΘ:Φ→2Γ by susΘ(x):={y∈Γ∣(y,x)∈suppΘ}. Then, Br(Θ)⊆(Θ⋉susΘ)↓ (we slightly abuse notation by treating susΘ as a mapping Γ×Φ→2Γ that doesn't depend on the first argument, and also playing loose with the order of factors in the set on which our HUCs live).* Putting that into words, all the (α,x) that are in the support of the bridge transform have α being a subset of the support of Θ restricted to x. The bridge transform knows not to provide α sets which are too large. Further, the bridge transform is smaller/has more information than the transform that just knows about what the support of Θ is. Idempotence ----------- Imagine replacing the physical state space Φ by the space of manifest facts 2Γ.
Intuitively, the latter should contain the same information about computations as the former. Therefore, if we apply the bridge transform *again* after making such a replacement, we should get the same thing. Given any set X, we denote idX:X→X the identity mapping and diagX:X→X×X the diagonal mapping. **Proposition 2.8:** *For any Γ, Φ and Θ∈□c(Γ×Φ)* Br(prΓ×2ΓBr(Θ))=[(idΓ×diag2Γ)∗prΓ×2ΓBr(Θ)]↓ Refinement ---------- If we refine a hypothesis (i.e. make it more informative) in the ultradistribution sense, the bridge transform should also be more refined: **Proposition 2.9:** *For any Γ, Φ and Θ1,Θ2∈□c(Γ×Φ), if Θ1⊆Θ2 then Br(Θ1)⊆Br(Θ2).* Another way of refining a hypothesis is refining the ontology, i.e. moving to a richer state space. For example, we can imagine one hypothesis that describes the world in terms of macroscopic objects and another hypothesis that describes the world in terms of atoms which are otherwise perfectly consistent with each other. In this case, we also expect the bridge transform to become more refined. This desideratum is where the downwards closure comes from, because it's always possible to pass to a more detailed view of the world and new computations could manifest then. It's also worth noting that, if you reduce your uncertainty about math, or physics, or move to a more detailed state space, that means more computations are instantiated, and the loss goes down as a result. **Proposition 2.10:** *For any Γ, Φ1, Φ2, t:Φ2→Φ1 and Θ∈□c(Γ×Φ2)* (idelΓ×t)∗Br(Θ)⊆Br((idΓ×t)∗Θ) In general, when we refine the ontology, the manifest facts can become strictly refined: the richer state "knows more" about computations than the poorer state. However, there is a special case when we don't expect this to happen, namely when the rich state depends on the computational universe only through the poor state. Roughly speaking, in this case once we have the poor state we can sample the rich state without evaluating any more programs.
To formalize this, it is convenient to introduce pullback of HUCs, which is effectively the probabilistic version of taking a preimage. **Definition 2.2:** *Let X,Y be finite sets and t:X→Y a mapping. We define the pullback[[7]](#fn-vECnaLn6AKxcmsWWZ-7) operator t∗:□cY→□cX by* t∗Θ:={θ∈ΔcX∣t∗θ∈Θ} **Proposition 2.11:** *Consider any Γ, Φ1, Φ2, t:Φ2→Φ1, Ξ:Φ1→□cΦ2 and Θ∈□c(Γ×Φ1) s.t. tΞ=idΦ1. Then,* (idelΓ×Ξ)∗Br(Θ)⊆Br((idΓ×Ξ)∗Θ)⊆(idelΓ×t)∗Br(Θ) *In particular,* prelΓBr(Θ)=prelΓBr((idΓ×Ξ)∗Θ) In other words, we can think of Φ1 as the coarse-grained version, and Φ2 as the fine-grained version, and t as the coarse-graining function, and Ξ as the function mapping a coarse grained state to some uncertainty over what the corresponding fine-grained state is. Θ is your starting uncertainty over computations and the coarse-grained state. Then, in order from most to least informative, you could apply Ξ after the bridge transform, apply Ξ before the bridge transform, or pull back along t after the bridge transform (because the pullback is the most uninformative inverse, in a sense). But all this stuff only affects the information about the fine-grained state Φ2. If you were to forget about that, the data about computations and how they're instantiated is the same no matter whether you do the bridge transform with the coarse state, or try your best to fill in what the fine state is first, because the "fill in the fine state" function doesn't depend on your uncertainty about computations, so it doesn't contain any more information about how math is connected to physics than the original coarse-grained view had. Mixing hypotheses ----------------- In general, we cannot expect the bridge transform to commute with taking probabilistic mixtures. For example, let Γ={0,1}, Φ={0,1}, Θ1=⊤{00,11}, Θ2=⊤{01,10}. Clearly either for Θ1 or for Θ2, we expect the program to be realized (since the physical state encodes its output). 
On the other hand, within (1/2)Θ1+(1/2)Θ2 we have both the distribution "50% on 00, 50% on 01" and the distribution "50% on 10, 50% on 11". The physical state no longer necessarily "knows" the output, since the mixing "washes away" information. However, there's no reason why mixing would *create* information, so we can expect the mixture of bridge transforms to be a *refinement* of the bridge transform of the mixture. **Proposition 2.12:** *For any Γ, Φ, Θ1,Θ2∈□c(Γ×Φ) and p∈[0,1]* pBr(Θ1)+(1−p)Br(Θ2)⊆Br(pΘ1+(1−p)Θ2) Now suppose that we are mixing hypotheses whose state spaces are *disjoint*. In this case, there can be no washing away since the state "remembers" which hypothesis it belongs to. **Proposition 2.13:** *Consider some Γ, Φ1, Φ2, Θ1∈□c(Γ×Φ1), Θ2∈□c(Γ×Φ2) and p∈[0,1]. Regard Φ1,2 as subsets of Φ1⊔Φ2, so that pΘ1+(1−p)Θ2∈□c(Γ×(Φ1⊔Φ2)). Then,* pBr(Θ1)+(1−p)Br(Θ2)=Br(pΘ1+(1−p)Θ2) In particular, if we consider the hypotheses within a prior as having disjoint state spaces because of using different ontologies, then the total bridge transform (Definition 1.2) is just the ordinary bridge transform, justifying that definition. Conjunction ----------- Given a hypothesis and a subset of Γ, we can form a new hypothesis by performing conjunction with the subset. Intuitively, the same conjunction should apply to the manifest facts. **Proposition 2.14:** *Consider some Γ, Φ, Θ∈□c(Γ×Φ) and F⊆Γ. Let ∩F:2Γ→2Γ be defined by ∩F(α):=F∩α. Then,* BrΓ(Θ∩⊤F×Φ)=(idΓ×∩F×idΦ)∗(BrΓ(Θ)∩⊤F×2Γ×Φ) *Moreover, let ι:F→Γ and elι:elF→elΓ be the natural injections. Then,* BrF((ι×idΦ)∗(Θ∩⊤F×Φ))=(elι×idΦ)∗BrΓ(Θ∩⊤F×Φ) That first identity says that it doesn't matter whether you update on a fact about math first and apply the bridge transform later, or whether you do the bridge transform first and then "update" (actually it's slightly more complicated than that since you have to narrow down the sets α) the hypothesis afterwards.
The second identity shows that, post conjunction with F, taking the bridge transform with respect to F is essentially the same as taking it with respect to Γ. If a batch of math universes is certain not to occur, then dropping them entirely from the domain Γ produces the same result. As a corollary, we can eliminate dependent variables. That is, suppose some programs can be expressed as functions of other programs (using Θ). Intuitively, such programs can be ignored. Throwing in the dependent variables and doing the bridge transform on that augmented space of "math universes" produces the same results as trying to compute the dependent variables *after* the bridge transform is applied. So, if you know how to generate the dependent variables, you can just snip them off your hypothesis, and compute them as-needed at the end. Formally: **Proposition 2.15:** *Consider some Γ0, Γ1, r:Γ0→Γ1, Φ, Θ∈□c(Γ0×Φ). Let ι:Γ0→Γ0×Γ1 be given by ι(y):=(y,r(y)). Then,* Br((ι×idΦ)∗Θ)=(elι×idΦ)∗Br(Θ) Factoring Γ ----------- When Γ is of the form Γ0×Γ1, we can interpret the same Θ∈□(Γ×Φ) in two ways: treat Γ1 as part of the computational universe, or treat it as part of the physical state. We can also "marginalize" over it altogether staying with a HUD on Γ0×Φ. The bridge transforms of these objects satisfy a simple relationship: **Proposition 2.16:** *Consider some Γ0, Γ1, Φ and Θ∈□c(Γ0×Γ1×Φ). Define π:elΓ0×Γ1→elΓ0 by π(y,z,α):=(y,{y′∈Γ0∣(y′,z)∈α}). Then,* (π×idΦ)∗BrΓ0×Γ1(Θ)⊆prelΓ0×ΦBrΓ0(Θ)⊆BrΓ0(prΓ0×ΦΘ) Intuitively, the least informative result is given by completely ignoring the Γ1 part of math about how a particular batch of computations turns out. A more informative result is given by treating Γ1 as an aspect of physics. And the most informative result is given by treating Γ1 as an aspect of math, and then pruning the resulting α:2Γ0×Γ1 down to be over 2Γ0 by plugging in the particular Γ1 value. 
Continuity ---------- **Proposition 2.17:** *Br(Θ) is a continuous function of Θ.* Lamentably, this only holds when all the involved sets are finite. However, a generalization of continuity, Scott-continuity, still appears to hold in the infinite case. Intuitively, the reason is that if Φ is a continuous space, like the interval [0,1], you could have a convergent sequence of hypotheses Θ where the events "program outputs 0" and "program outputs 1" are believed to result in increasingly similar outputs in physics, and then in the limit, where they produce the exact same physics outcome, the two possible outputs of the program suddenly go from "clearly distinguishable by looking at physics" to "indistinguishable by looking at physics" and there's a sharp change in whether the program is instantiated. This is still a Scott-continuous operation, though. 3. Constructing loss functions ============================== In section 1 we allowed the loss function L:elΓ→R to be arbitrary. However, in light of Proposition 2.4, we might as well require it to be monotonically non-decreasing. Indeed, we have the following: **Proposition 3.1:** *Let X be a finite poset, f:X→R and Θ∈□cX downward closed. Define fmax:X→R by fmax(x):=maxy≤xf(y). Observe that fmax is always non-decreasing. Then, Θ(f)=Θ(fmax).* This essentially means that downwards-closed HUCs are entirely pinned down by their expectation value on monotone functions, and all their other expectation values are determined by the expectation value of the "nearest monotone function", the fmax. This requirement on L (we will call it the "monotonicity principle") has strange consequences that we will return to below. The informal discussion in section 1 hinted at how the computationalist stance allows describing preferences via a loss function of the type elΓ→R. We will now proceed to make it more formal. Selfish agents -------------- A selfish agent naturally comes with a cartesian loss function L:D→R.
**Definition 3.1:** *For any (y,α)∈elΓ, the set of *experienced histories* Xyα is the set of all ha∈Hyα×A (see Definition 1.5) s.t. for any y′∈α, Gy′(h)=a. The *physicalized loss function* is* Lphys(y,α):=minha∈Xyαmaxd∈D:ha⊑dL(d) Notice that Lphys implicitly depends on ┌G┐ through Xyα. Intuitively, if α is a large set, there are different mathematical universes that result in the agent's policy being different, but which are empirically indistinguishable (i.e., they're in the same set α), so the agent's policy isn't actually being computed on those histories where the contents of α disagree on what the agent does. The agent takes the loss of the best history that occurs, but if later computations aren't instantiated (i.e., there's uncertainty over what happens afterwards), it assumes the worst-case extension of that history. This definition has the following desirable properties: * If g∈Xyα, ha∈H×A and ha⊑g, then ha∈Xyα. * Lphys is monotone. If the set of indistinguishable mathematical universes gets larger, fewer histories are instantiated, so the loss is larger because of the initial minimization. * Fix (y,α)∈elΓ. Suppose there is d∈D s.t. Xyα={ha∈H×A∣ha⊑d}. Then Lphys(y,α)=L(d). If the set of indistinguishable mathematical universes all agree on what the destiny of the agent is and disagree elsewhere, then the loss is just the loss of that destiny. We can think of inconsistent histories within Xyα as associated with different *copies* of the agent. If ha∈Xyα is s.t. for any g⊐ha, g∉Xyα, then h represents the history of a copy which reached the end of its life. If there is only one copy and its lifespan is maximal, then Lphys produces the cartesian loss of this copy. If the lifespan is less than maximal, Lphys produces the loss of the *worst-case* continuation of the history which was experienced. In other words, death (i.e., the future behavior of the agent's policy not being a computation that has any effect on the state of the universe) is always the worst-case outcome.
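The min-max structure of Definition 3.1 can be seen in a toy computation. Everything below is made up for illustration: destinies are length-3 action strings, `L` is a hypothetical cartesian loss, and `X` stands in for the experienced set Xyα with two inconsistent prefixes, i.e. two "copies":

```python
# Hypothetical destinies (length-3 action strings) and a cartesian loss on them.
destinies = ["aaa", "aab", "aba", "abb"]
L = {"aaa": 1.0, "aab": 4.0, "aba": 0.0, "abb": 2.0}

# Experienced history-prefixes for some fixed (y, alpha): two inconsistent copies.
X = ["aa", "ab"]

def worst_case_extension(prefix):
    # Continuations past the experienced prefix may be uninstantiated,
    # so the definition assumes the worst-case extension.
    return max(L[d] for d in destinies if d.startswith(prefix))

# The outer min means the best-off copy determines the loss.
L_phys = min(worst_case_extension(ha) for ha in X)
```

Here copy "aa" has worst-case loss 4.0 and copy "ab" has worst-case loss 2.0, so the physicalized loss is 2.0: best copy, worst continuation.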
If there are multiple copies, the *best-off* copy determines the loss. Definition 3.1 is somewhat arbitrary, but the monotonicity principle is not. Hence, for a selfish agent (i.e. if L only depends on y,α through Xyα) death must be the worst-case outcome (for fixed y). Note that this weighs against getting the agent to shut down if it's selfish. And creating additional copies must always be beneficial. For non-selfish agents, similar observations hold if we replace "death" by "destruction of everything that matters" and "copies of the agent" by "copies of the (important sector of the) world". This is an odd state of affairs. Intuitively, it seems coherent to have preferences that contradict the monotonicity principle. Arguably, humans have such preferences, since we can imagine worlds that we would describe as net negative. At present, we're not sure how to think about it, but here are some possibilities, roughly in order of plausibility: 1. Perfect physicalists must obey the monotonicity principle, but humans are not perfect physicalists. Indeed, why would they be? The ancestral environment didn't require us to be physicalists, and it took us many centuries to stop thinking of human minds as "metaphysically privileged". This might mean that we need a theory of "cartesian agents in a physicalist universe". For example, perhaps from a cartesian perspective, a physicalist hypothesis needs to be augmented by some "theory of epiphenomena" which predicts e.g. which of several copies will "I" experience. 2. Something is simply wrong about the formalism. One cheap way to escape monotonicity is by taking the *hull of maximal points* of the bridge transform. However, this spoils compatibility with refinements (Propositions 2.9 and 2.10). We expect this to spoil the ability to learn in some way. 3. Death is inherently an irreversible event. Every death is the agent's first death; therefore the agent must always have uncertainty about the outcome.
Given a non-dogmatic prior, there will be some probability on outcomes that are not actually death. These potential "afterlives" will cause the agent to rank death as better than the absolute worst-case. So, "non-survivalist" preferences can be reinterpreted as a prior biased towards not believing in death. Similar reasoning applies to non-selfish agents. 4. We should just bite the bullet and prescriptively endorse the odd consequences of the monotonicity principle. In any case, we believe that continuing the investigation of monotonic physicalist agents is a promising path forward. If a natural extension or modification of the theory exists which admits non-monotonic agents, we might discover it along the way. If we don't, at least we will have a better lower bound for the desiderata such a theory should satisfy. Unobservable states ------------------- What if the loss depends on an unobservable state from some state space S? In order to encode states, let's assume some MS∈N and decS:ΣMS→S. That is, just as the action of the agent can be thought of as a tuple of programs returning the bits of the computation of the agent's action, a state can be thought of as a tuple of programs returning the bits of the computation of what the next state is. Let B be a finite set ("nature's actions"). Just as observations in the selfish case occur based on what computations are instantiated, nature's choice of action in B depends on what computations are instantiated. Denote HS:=(A×B×O×S)<N, DS:=(A×B×O×S)N−1×A. Let ┌T┐:S×A×B→RMS be the "ontological"[[8]](#fn-vECnaLn6AKxcmsWWZ-8) transition rule program, E:S→O the observation rule and L:DS→R the loss function. **Definition 3.2:** *For any (y,α)∈elΓ, the set of *admissible histories* Zyα is the set of all ha∈HS×A s.t. the following conditions hold:* 1. prH×Aha∈Xyα 2. *If g∈HS×A×B, o∈O and s∈S are s.t. gos⊑h then o=E(s).* 3. *If g∈HS×A×B×O, s1,s2∈S, a∈A, b∈B and o∈O are s.t.
gs1abos2⊑h and y′∈α then Ty′(s1,a,b)=s2 (the notation Ty is analogous to Definition 1.3).* Pretty much, condition 1 says that the visible parts of the history are consistent with how math behaves in the same sense as we usually enforce with Hyα. Condition 2 says that the history should have the observations arising from the states in the way that E demands. And condition 3 says that the history should have the states arising from your action and nature's action and the previous state in the way the transition program dictates (in that batch of mathematical universes). *The *physicalized loss function* is* Ls-phys(y,α):=minha∈Zyαmaxg∈DS:ha⊑gL(g) This arises in a way precisely analogous to the loss function from Definition 3.1. That is, for a history to be admissible we require not only the agent to be realized on all relevant inputs but also the transition rule. We can also define a version with ┌T┐:S×B→RMS in which the agent is *not* an explicit part of the ontology. In this case, the agent's influence is solely through the "entanglement" of G with T. I.e., the agent's beliefs over math have entanglements between how the agent's policy turns out and how the physics transition program behaves, s.t. when conditioning on different policies, the computations corresponding to different histories become instantiated. Cellular automata ----------------- What if our agent maximizes the number of gliders in the game of life? More generally, consider a cellular automaton with cell state space S and dimension d∈N. For any t∈N, let Nt⊆Zd be the set of cells whose states are necessary to compute the state of the origin cell at timestep t. Let ┌Tt┐:SNt→RMS be the program which computes the time evolution of the cellular automaton. I.e., it maps the state of a chunk of space to the computation which determines the value of a cell t steps in the future. Denote DdS:=SN×Zd.
This is the space of all possible histories, specifying everything that occurs from the start of time. Let L:DdS→R be the loss function (the N coordinate is time). **Definition 3.3:** *For any α∈2Γ, the set of *admissible histories* Zα is the set of all pairs (C,h) where C⊂N×Zd is finite and h:C→S s.t. the following conditions hold:* 1. *If (t2,c)∈C, t1<t2 and v∈Nt2−t1 then (t1,c+v)∈C.* 2. *If (t2,c)∈C, t1<t2 and y∈α then Tyt2−t1(λv.h(t1,c+v))=h(t2,c).* Basically, C is a chunk of spacetime, and condition 1 says that said chunk must be closed under taking the past lightcone, and condition 2 says the state of that spacetime chunk must conform with the behavior of the transition computation. *The *physicalized loss function* is* defined much as before, except that the role previously played by past histories is now played by the states of past-closed chunks of spacetime. Lc-phys(y,α):=min(C,h)∈Zαmaxg∈DdS:h=g|CL(g) Notice that, since the dynamics of the cellular automaton is fixed, the only variables "nature" can control are the initial conditions. However, it is straightforward to generalize this definition to cellular automata whose time evolution is underspecified (similarly to the B we had before). Diamond maximizer ----------------- What if our agent maximizes the amount of diamond (or some other physical property)? In order to define "diamond" we need a theory of physics. For example, we can start with macroscopic/classical physics and define diamond by its physical and/or chemical properties. Or, we can start with nonrelativistic quantum mechanics and define diamond in terms of its molecular structure. These definitions might be equivalent in our universe, but different in other hypothetical universes (this is similar to [twin Earth](https://en.wikipedia.org/wiki/Twin_Earth_thought_experiment)). In any case, there is no single "right" choice; it is simply a matter of definition. A theory of physics is mathematically quite similar to a cellular automaton.
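Condition 1 of Definition 3.3 ("closed under the past lightcone") is mechanical to check. A minimal sketch for a Moore-neighborhood automaton in d=2 such as the game of life, assuming Nt={v∈Z2∣‖v‖∞≤t}; the function names are hypothetical:

```python
def N(t):
    """Past-lightcone neighborhood: for a Moore-neighborhood CA, N_t is the
    Chebyshev ball of radius t (an assumption matching the game of life)."""
    return {(dx, dy) for dx in range(-t, t + 1) for dy in range(-t, t + 1)}

def past_closed(C):
    """Check condition 1: the spacetime chunk C ⊂ N×Z^2 is closed under
    taking past lightcones of each of its cells."""
    return all(
        (t1, c[0] + v[0], c[1] + v[1]) in C
        for (t2, *c) in C
        for t1 in range(t2)
        for v in N(t2 - t1))

# The full past lightcone of the origin cell at time 2 is past-closed...
cone = {(t, x, y) for t in range(3) for (x, y) in N(2 - t)}
assert past_closed(cone)
# ...but removing a single time-0 cell from it breaks the closure.
assert not past_closed(cone - {(0, 2, 2)})
```

Condition 2 would additionally check that the states on such a chunk agree with the transition computation, which requires fixing a concrete rule.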
This theory will usually be incomplete, something that we can represent in infra-Bayesianism by Knightian uncertainty. So, the "cellular automaton" has underspecified time evolution. Using this parallel, the same method as in the previous subsection should allow physicalizing the loss function. Roughly speaking, a possible history is "real" when the computation simulating the time evolution of the theory of physics with the corresponding initial condition is realized. However, we are not sure what to do if the theory of physics is *stochastic*. We leave it as an open problem for now. To summarize, you can just take a hypothesis about how computations are entangled with some theory of physics, some computations simulating the time evolution of the universe (in which the notion of "diamond" is actually well-defined, letting the amount of diamond in a history be computed), and a utility function over these states. The bridge transform extracts which computations are relevant for the behavior of physics in the hypothesis (which may or may not include the diamond-relevant computations), and then you just check whether the computations for a particular diamond-containing history are instantiated, take the best-case one, and that gives you your loss. Human preferences seem not substantially different from maximizing diamonds. Most of the things we value are defined in terms of people and social interactions. So, we can imagine a formal loss function defined in terms of some "theory of people" implicit in our brain. 4. Translating Cartesian to Physicalist ======================================= Physicalism allows us to do away with the cartesian boundary between the agent and its environment. But what if there *is* a cartesian boundary? An important sanity test is demonstrating that our theory can account for purely cartesian hypotheses as a *special case*.
Ordinary laws ------------- We will say "law" to mean what was [previously](https://www.alignmentforum.org/s/CmrW8fCmSLK7E25sa/p/GS5P7LLLbSSExb3Sk) called "belief function". We start by examining a causal law W:AH→□D. I.e., it maps a policy to an ultradistribution over destinies, in a way that looks like a closed convex set of environments. Let R:=H and Σ=A so that Γ becomes the set of policies (we identify the history h∈H with the computation ┌G(h)┐∈R given by what the agent's source does on input h). In particular, for now there is no computer, or the computer can only simulate the agent itself. Let Φ:=D. This is our set of "physics outcomes", complete action-observation sequences. Then, W induces the physicalist hypothesis ΘW∈□(Γ×Φ) by ΘW:=⊤Γ⋉W. ΘW is what we called the ["first-person static view"](https://www.lesswrong.com/s/CmrW8fCmSLK7E25sa/p/GS5P7LLLbSSExb3Sk) of the law. Total uncertainty over the choice of policy, interacting with the kernel which maps a policy to the resulting uncertainty over the environment. We want to find the bridge transform of ΘW and evaluate the corresponding counterfactuals, in order to show that the resulting Lpol is the same as in the cartesian setting. In order to do this, we will use some new definitions. **Definition 4.1:** *Let X and Y be finite sets and Q⊆Y×X a relation. A *polycontribution on Q* is ϖ:X→R+ s.t. for any y∈Y:* ∑x∈Q(y)ϖ(x)≤1 *ϖ is a *polydistribution* when for any y∈Y* ∑x∈Q(y)ϖ(x)=1 *We denote the space of polycontributions by ΔcQX and the space of polydistributions by ΔQX.* Here, Q(y) stands for {x∈X∣yQx}. Polycontributions and polydistributions are essentially a measure on a space together with a family of subsets indexed by y∈Y, s.t. restricting the measure to any particular subset yields a contribution/distribution, respectively. **Definition 4.2:** *Let X and Y be finite sets and Q⊆Y×X a relation. A *polycontribution kernel* (PoCK) over Q is κ:Y→ΔcX s.t. there is ϖ∈ΔcQX s.t.
for any x∈X and y∈Y* κ(y;x)=χQ(y)(x)ϖ(x) Or, in other words, a PoCK is the kernel that you get by taking a polycontribution and using the input point y to decide which Y-indexed subset to restrict the measure to. Let Q0⊆AH×D be defined by Q0:={(π,a0o0…aN−1)∣∀i<N:ai=π(a0…oi−1)}. This is the "destiny is consistent with the policy" relation where the policy produces the actions that the destiny says it does, when placed in that situation. Let ϕ:AH→ΔD be a Bayesian causal law (an environment). Then, ϕ is a PoCK over Q0! Conversely, any ϕ:AH→ΔD which is a PoCK is an environment. Essentially, for an environment, each choice of policy results in a different probability distribution over outcomes. However, these resulting probability distributions over outcomes are all coherent with each other where the different policies agree on what to do, so an alternate view of an environment is one where there's one big measure over destinies (where the measure on a particular destiny is the probability of getting that destiny if the policy plays along by picking the appropriate actions) and the initial choice of policy restricts to a particular subset of the space of destinies, yielding a probability distribution over destinies compatible with a given policy. Clearly, the next step is to generalize this beyond probability distributions. **Definition 4.3:** *Let X and Y be finite sets and Q⊆Y×X a relation. A *polyultracontribution kernel* (PUCK) over Q is Ξ:Y→□cX s.t. there is Π⊆Y→ΔcX a set of PoCKs s.t. for any y∈Y, Ξ(y) is the closure of the convex hull of ⋃κ∈Πκ(y).* In particular, W is a PUCK over Q0. Conversely, any W:AH→□D which is a PUCK is a causal law. **Proposition 4.1:** *Consider some Γ, Φ, a relation Q⊆Γ×Φ and a PUCK Ξ over Q. Let Θ:=⊤Γ⋉Ξ. Then,* Br(Θ)=[⊤Γ⋉(susΘ⋊Ξ)]↓=[⊤Γ⋉(Q−1⋊Ξ)]↓ Here, Q−1(x):={y∈Y∣yQx}. The first identity says that the inequality of Proposition 2.7 is saturated.
Specifically, as shown in the next proposition, for our case of cartesian causal laws, the computations realized according to Br(ΘW) are exactly the histories that occur.

**Corollary 4.3:** *Suppose that for any h∈D and π:H→A s.t. h∈suppW(π), it holds that hCπ. That is, the observations W predicts to receive from the computer are consistent with the chosen policy. Let L:D→R be a cartesian loss function and π:H→A a policy. Then,* (prelΓBr(ΘW)∩Cπfair)(Lphys)=W(π;L)

Notice that the assumption that W is causal, combined with the assumption that W is consistent with C, implies that it's not possible to use the computer to simulate the agent on any history other than factual past histories. For pseudocausal W the bridge transform no longer agrees with the cartesian setting: if a selfish agent is simulated in some counterfactual, then the computationalist stance implies that it can incur the loss for that counterfactual. This artifact should be avoidable for non-selfish agents, but we leave this for future articles.

Turing laws
-----------

Now let's examine a cartesian setting which involves a general computer. Without further assumptions, we again cannot expect the physicalist loss to agree with the cartesian one, since the programs running on the computer might be entangled with the agent. If the agent runs simulations of itself, then it should expect to experience those simulations (as opposed to the cartesian view). Therefore, we will need a "no entanglement" assumption.

Let Γ:=Γ1×AH for some Γ1 (the computations that don't involve the agent), while Φ is still D. We want a notion of "joint belief about the environment and Γ1" in the cartesian setting, which generalizes causal laws. We will call it a "Turing law", or a "Turing environment" in the Bayesian case.
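Before moving on, here is a toy sketch of how a causal law from the previous subsection, viewed as a PUCK (Definition 4.3), evaluates a loss. The numbers and the one-step setting are invented; evaluation-by-maximum follows the convention that an ultradistribution acts on a loss via the supremum of expectations over its set of distributions.

```python
# Toy PUCK: a causal law as a finite family of environments (PoCKs),
# each stored as a measure over one-step (action, observation) destinies.
envs = [
    {("a", "o0"): 0.7, ("a", "o1"): 0.3, ("b", "o0"): 0.2, ("b", "o1"): 0.8},
    {("a", "o0"): 0.5, ("a", "o1"): 0.5, ("b", "o0"): 0.6, ("b", "o1"): 0.4},
]

# An arbitrary cartesian loss function on destinies.
loss = {("a", "o0"): 0.0, ("a", "o1"): 1.0, ("b", "o0"): 1.0, ("b", "o1"): 0.0}

def W(pi, L):
    # Restrict each environment to destinies consistent with policy pi,
    # then take the worst-case expected loss over the family. Convex
    # mixtures of family members never exceed this max, so passing to the
    # convex hull changes nothing.
    return max(sum(p * L[d] for d, p in env.items() if d[0] == pi)
               for env in envs)

print(W("a", loss))  # 0.5: the second environment is the worst case for "a"
```

This is the quantity W(π;L) that the corollaries above compare against the physicalist loss.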
For our particular application (though the theorems are more general), Z will generally be taken to be the space of computations that aren't about what you personally do, Y the space of computations about what you personally do, and X the space of destinies/physics outcomes, if the reader wants to try their hand at interpreting some of the more abstract theorems.

**Definition 4.4:** *Let X, Y and Z be finite sets and Q⊆Y×X a relation. A *Z-polycontribution on Q* is ϖ:X×Z→R+ s.t.* ∑z∈Zmaxy∈Y∑x∈Q(y)ϖ(x,z)≤1 *ϖ is a *Z-polydistribution on Q* when it is both a Z-polycontribution on Q and a polydistribution on Q×Z. Here, we regard Q×Z as a relation between Y and X×Z.* *We denote the space of Z-polycontributions by ΔcQXZ and the space of Z-polydistributions by ΔQXZ.*

Notice that a Z-polycontribution on Q is in particular a polycontribution on Q×Z: ΔcQXZ⊆ΔcQ×Z(X×Z).

**Definition 4.5:** *Let X, Y, Z be finite sets and Q⊆Y×X a relation. A *Z-polycontribution kernel* (Z-PoCK) over Q is κ:Y→Δc(X×Z) s.t. there is ϖ∈ΔcQXZ s.t. for any x∈X, y∈Y and z∈Z* κ(y;x,z)=χQ(y)(x)ϖ(x,z)

In the context of the identification we'll be using, where Z is the rest of math, Y is the part of math about what you personally do, and X is the space of physics outcomes/destinies, this definition says that the Z-PoCK maps a policy to the joint distribution over math and destinies you'd get from restricting the measure to "I'm guaranteed to take this policy, regardless of how the rest of math behaves".

A Turing environment can be formalized as a Γ1-PoCK κ:AH→Δ(D×Γ1) over Q0. Indeed, in this case Definition 4.4 essentially says that the environment can depend on a Γ1 variable (remember, Γ1 is the rest of math), and also that we have a belief about this variable that doesn't depend on the policy. Now, how to transform that into a physicalist hypothesis:

**Proposition 4.2:** *Let X, Y and Z be finite sets, Q⊆Y×X a relation, κ:Y→Δc(X×Z) a Z-PoCK over Q and Θ∈□cY.
Then, there exist μ∈ΔcZ and ϕ:Z×Y→ΔcX s.t. for all z∈Z, λy.ϕ(z,y) is a PoCK over Q, and for all y∈Y, κ(y)=(λz.ϕ(z,y))⋊μ. Moreover, suppose that (μ1,ϕ1) and (μ2,ϕ2) are both as above[[9]](#fn-vECnaLn6AKxcmsWWZ-9). Then,* μ1⋉Θ⋉ϕ1=μ2⋉Θ⋉ϕ2

Or, for a more concrete example, this says that it's possible to decompose the function mapping a policy to a distribution over the rest of math and the destiny into two parts. One part is just a probability distribution over the rest of math; the second part is a function mapping a policy and the behavior of the rest of math to a probability distribution over destinies. Further, this decomposition is essentially unique: for any two ways you do it, if you take the probability distribution over the rest of math, have it interact with some arbitrary way of mapping the rest of math to policies, and then have that interact with the way to get a distribution over destinies, the resulting joint distribution could also be produced by any alternate way of doing the decomposition.

In the setting of Proposition 4.2, we define Θ∗κ∈□c(Z×Y×X) by μ⋉Θ⋉ϕ. The physicalist hypothesis corresponding to the Turing environment κ turns out to be ⊤AH∗κ. The no-entanglement assumption is evident in how the AH and Γ1 variables are "independent". Well, the actual choice of policy can depend on how the rest of math turns out, but the set of available choices of policy *doesn't* depend on how the rest of math turns out, and that's the sense in which there's independence.

To go from Turing environments to Turing laws, we will need the following generalization to sets of contributions.

**Definition 4.6:** *Let X, Y and Z be finite sets and Q⊆Y×X a relation. A *Z-polyultracontribution kernel* (Z-PUCK) over Q is Ξ:Y→□c(X×Z) s.t. there is Π⊆(Δc(X×Z))Y a set of Z-PoCKs s.t. for any y∈Y, Ξ(y) is the closure of the convex hull of ⋃κ∈Πκ(y).*

A Turing law is a Γ1-PUCK W:AH→□(D×Γ1) over Q0.
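A quick numerical sanity check of the Z-polycontribution condition of Definition 4.4 may help fix the quantifier order (sum over z of the worst-case section mass over y). The sets, relation and numbers below are all invented.

```python
# Toy check of the Z-polycontribution condition (Definition 4.4):
# sum over z of max over y of the mass of the section Q(y) must be <= 1.
X, Y, Z = ["x0", "x1"], ["y0", "y1"], ["z0", "z1"]
Q = {"y0": {"x0"}, "y1": {"x0", "x1"}}  # sections Q(y)

# Candidate contribution w : X x Z -> R+.
w = {("x0", "z0"): 0.3, ("x1", "z0"): 0.2,
     ("x0", "z1"): 0.4, ("x1", "z1"): 0.1}

def is_z_polycontribution(w, Q, Z):
    # For each z, take the heaviest section over y; those maxima must
    # jointly fit into unit mass across z.
    total = sum(max(sum(w[(x, z)] for x in Q[y]) for y in Q) for z in Z)
    return total <= 1.0

print(is_z_polycontribution(w, Q, Z))  # True: 0.5 + 0.5 = 1.0 <= 1
```

Note that the max over y sits inside the sum over z: the belief about the Z variable has to be consistent across the different choices of y, which is the formal shadow of "the belief about the rest of math doesn't depend on the policy".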
**Definition 4.7:** *Let X, Y and Z be finite sets, Q⊆Y×X a relation, Ξ:Y→□c(X×Z) a Z-PUCK over Q and Θ∈□cY. Let Π⊆(Δc(X×Z))Y be the set of Z-PoCKs κ s.t. for any y∈Y, κ(y)∈Ξ(y). Then, Θ∗Ξ∈□c(Z×Y×X) is the closure of the convex hull of ⋃κ∈Π(Θ∗κ).*

As a sanity test for the sensibility of Definition 4.7, we have the following proposition. It demonstrates that instead of taking all compatible Z-PoCKs, we could take any subset whose convex hull is all of them. If that were not the case, it would undermine our justification for using convex sets (namely, that taking the convex hull doesn't affect anything).

**Proposition 4.3:** *Let X, Y and Z be finite sets, Q⊆Y×X a relation, κ1,κ2:Y→Δc(X×Z) Z-PoCKs over Q, Θ∈□cY and p∈[0,1]. Then, pκ1+(1−p)κ2 is also a Z-PoCK over Q, and* Θ∗(pκ1+(1−p)κ2)⊆pΘ∗κ1+(1−p)Θ∗κ2

The physicalist hypothesis corresponding to the Turing law W is ΘW:=⊤AH∗W: total uncertainty over the available choices of policy, interacting with the correspondence between the policy, the rest of math, and the resulting destiny. Now we need to study its bridge transform. To proceed further, we need an intermediate result bounding the bridge transform between a lower and an upper bound:

**Proposition 4.4:** *Let X, Y and Z be finite sets, Q⊆Y×X a relation and Ξ:Y→□c(X×Z) a Z-PUCK over Q. Denote Θ:=⊤Z×Y∗Ξ. Define β0,β1:Z×Y×X→2Z×Y by β0(z,y,x):={z}×Q−1(x), β1(z,y,x):=Z×Q−1(x). Then,* (Θ⋉β0)↓⊆Br(Θ)⊆(Θ⋉β1)↓

Here, we slightly abuse notation by implicitly changing the order of factors in Z×Y×2Z×Y×X.

The application of this is that the resulting uncertainty-over-math/info about what computations are realized (i.e., the α set) attained when you do the bridge transform is bounded between two possible results. One bound is perfect knowledge of what the rest of math besides your policy is (everything's realized), paired with exactly as much policy uncertainty as is compatible with the destiny of interest.
The other bound is no knowledge of the rest of math besides your policy (possibly because the policy was chosen without looking at other computations, and the rest of reality doesn't depend on those computations either), combined with exactly as much policy uncertainty as is compatible with the destiny of interest.

**Corollary 4.5:** *Suppose that for any h∈D, z∈Γ1 and π:H→A s.t. (h,z)∈suppW(π), it holds that hCzπ. That is, the observations W predicts to receive from the computer are consistent with the chosen policy and W's beliefs about computations. Let L:D→R be a cartesian loss function and π:H→A a policy. Define ~L:D×Γ1→R by ~L(h,z):=L(h). Then,* (prelΓBr(ΘW)∩Cπfair)(Lphys)=W(π;~L)

And so, the Turing law case manages to add up to normality in the cartesian setting (with computer access) as well.

5. Discussion
=============

Summary
-------

Let's recap how our formalism solves the problems stated in section 0:

* Physicalist hypotheses require no bridge rules, since they are formulated in terms of a physical state space (which can be chosen to be natural/impartial) rather than the agent's actions and observations. Therefore, they accrue no associated description complexity penalty, and should allow for a superior learning rate. Intuitively, a physicalist agent knows it is a manifestation of the program G rather than just any object in the universe.
* A physicalist agent doesn't develop uncertainty after subjectively-simple but objectively-complex events. It is easy to modify a cartesian hypothesis such that the camera going black leads to completely new behavior. However, it would require an extremely contrived (complex) modification of *physics*, since the camera is not a simple object and has no special fundamental significance. Basically, the simplest correspondence between computations and reality that instantiates the computations of the agent seeing black forever afterward is more complex by roughly the complexity of the agent's own source code.
* A physicalist agent doesn't regard its existence as axiomatic. A given physicalist hypothesis might contain 0 copies of the agent, 1 copy or multiple copies (more generally, some ultradistribution over the number of copies). Therefore, hypotheses that predict the agent's existence with relatively high probability gain an advantage (i.e. influence the choice of policy more strongly), as they intuitively should. At the same time, hypotheses that predict too many copies of the agent to have predictive power are typically[[10]](#fn-vECnaLn6AKxcmsWWZ-10) also suppressed, since policy selection doesn't affect their loss by much (whatever you do, everything happens anyway).
* Acausal attacks against physicalist agents are still *possible*, but the lack of bridge rule penalty means the malign hypotheses are not overwhelmingly likely compared to the true hypothesis. This should allow defending using algorithms of "[consensus](https://arxiv.org/abs/1907.08461)" [type](https://www.lesswrong.com/posts/CnruhwFGQBThvgJiX/formal-solution-to-the-inner-alignment-problem).

What about Solomonoff's universality?
-------------------------------------

For selfish agents the physicalist framework can be reinterpreted as a cartesian framework with a peculiar prior. Given that the Solomonoff prior is *universal* (dominates every computable prior), does it mean AIXI is roughly as good as a selfish physicalist agent? Not really. The specification of that "physicalist prior" would involve the *source code of the agent*. (And, if we want to handle hypotheses that predict multiple copies, it would also involve the loss function.) For AIXI, there is no such source code. More generally, the intelligence of an agent w.r.t. the Solomonoff prior is upper bounded by its Kolmogorov complexity (see [this](https://www.alignmentforum.org/posts/dPmmuaz9szk26BkmD/vanessa-kosoy-s-shortform?commentId=Tg7A7rSYQSZPASm9s)).
So, highly intelligent cartesian agents place small prior probability on physicalism, and it would take such an agent a lot of data to learn it. Curiously, the converse is also true: the translation of section 4 also involves the source code of the agent, so intelligent physicalist agents place small prior probability on cartesian dualism. We could maybe use description complexity *relative* to the agent's own source code, in order to define a prior[[11]](#fn-vECnaLn6AKxcmsWWZ-11) which makes dualism and physicalism roughly equivalent (for selfish agents). This might be useful as a way to make cartesian algorithms do physicalist reasoning. But in any case, you still need physicalism for the *analysis*.

It would be interesting to do a similar analysis for bounded Solomonoff-like priors, which might require understanding the computational complexity of the bridge transform. In the bounded case, the Kolmogorov complexity of intelligent agents need not be high, but a similar penalty might arise from the size of the environment state space required for the physicalist-to-cartesian translation.

Are manifest facts objective?
-----------------------------

Physicalism is supposed to be "objective", i.e. avoid privileging the observer. But is the computationalist stance truly objective? Is there an objective fact of the matter about which facts about computations are physically manifest / which computations are realized?

At first glance it might seem there isn't. By Proposition 2.7, if the physicalist hypothesis Θ∈□(Γ×Φ) is s.t. suppΘ⊆F×Φ for some F⊆Γ, then suppBr(Θ)⊆elF×Φ. So, if the hypothesis asserts a fact F then F is always physically manifest. But isn't Θ merely the *subjective* knowledge of the agent? And if so, doesn't it follow that physical manifesting is also subjective?

At second glance things get more interesting. Suppose that agent G learned hypothesis Θ.
Since G is part of the physical universe, it is now indeed manifest in the state of the physical universe that hypothesis Θ (and in particular F) is true. Looking at it from a different angle, suppose that a different agent G′ knows that G is a rational physicalist. Then it must agree that *if* G will learn Θ then F is manifest. So, possibly there is a sense in which manifesting is objective after all.

It would be interesting to try and build more formal theory around this question. In particular, the ability of the user and the AI to agree on manifesting (and agree in general) seems important for alignment in general (since we want them to be minimizing the same loss) and for ruling out acausal attacks in particular (because we don't want the AI to believe in a hypothesis s.t. it wouldn't be rational for the user to believe in it).

Physicalism and alignment
-------------------------

The domain of the physicalist loss function doesn't depend on the action and observation sets, which is an advantage for alignment, since it can make it easier to "transfer" the loss function from the user to the AI when they have different action or observation sets. On the other hand, the most objectionable feature of physicalism is the monotonicity principle (section 3). The default conclusion is, we need to at least generalize the formalism to allow for alignment. But suppose we stick with the monotonicity principle. How bad is the resulting misalignment?

Consider an alignment protocol in which the AI somehow learns the loss function of the user. As a toy model, assume the user has a cellular automaton loss function Luser, like in section 3. Assume the AI acts according to the loss function Lc-physuser of Definition 3.3. What will result? Arguably, in all probable scenarios the result will be pretty close to optimal from the user's perspective.
One type of scenario where the AI makes the wrong choice is when choosing between destruction of everything and a future which is even *worse* than destruction of everything. But if these are the only available options, then probably something has gone terribly wrong at a previous point. Another type of scenario where the AI possibly makes the wrong choice is if it creates multiple disconnected worlds, each of which has valence from the perspective of the user, at least one of which is good but some of which are bad. However, there usually seems to be no incentive to do that[[12]](#fn-vECnaLn6AKxcmsWWZ-12). In addition, the problems might be milder in approaches that don't require learning the user's loss function, such as IDA of imitation. So, the monotonicity principle might not introduce egregious misalignment in practice. However, this requires more careful analysis: there can easily be failure modes we have overlooked.

Future research directions
--------------------------

Here's a list of possible research directions, not necessarily comprehensive. Some of them were already mentioned in the body of the article.

Directions which are already more or less formalized:

* Generalizing everything to infinite spaces (actually, we already have considerable progress on this and hopefully will publish another article soon).
* Generalizing section 4 to certain pseudocausal laws. Here, we can make use of the monotonicity principle to argue that if Omega discontinues all of its simulations, they correspond to copies which are not the best-off and therefore don't matter in the loss calculus. Alternatively, we can design a non-selfish loss function that only counts the "baseline reality" copies. Moreover, it would be interesting to understand the connection between the pseudocausality condition and the "fairness" of Definition 1.5.
* Define a natural candidate for the Solomonoff prior for physicalist hypotheses.
* Prove that the physicalist intelligence g is truly unbounded, and study its further properties.

Directions which are mostly clear but require more formalization:

* Prove learning-theoretic results (regret bounds). For example, if we assume that there is a low-cost experiment that reveals the true hypothesis, then low regret should be achievable. In particular, such theorems would allow validating Definition 1.5.
* Generalize section 3 to stochastic ontologies.
* Allow uncertainty about the loss function, and/or about the source code. This would introduce another variable into our physicalist hypothesis, and we need to understand how the bridge transform should act on it.
* Analyze the relationship between cartesian and physicalist agents with bounded simplicity priors.
* For (cartesian) reinforcement learning, MDPs and POMDPs are classical examples of environments for which various theoretical analyses are possible. It would be interesting to come up with analogous examples of physicalist hypotheses and study them. For example, we can consider infra-Markov chains where the transition kernel depends on Γ.

Directions where it's not necessarily clear how to approach the problem:

* Understand the significance of the monotonicity principle better, and whether there are interesting ways to avoid it.
* Formally study the objectivity of manifesting, perhaps by deriving some kind of agreement theorems for physicalist agents.
* Define and analyze physicalist alignment protocols.
* It is easy to imagine how a physicalist hypothesis can describe a universe with *classical* physics. But the real universe is quantum. So, it doesn't really have a distribution over some state space Φ but rather a density matrix on some Hilbert space. Is there a natural way to deal with this in the present formalism? If so, that would seem like a good solution to the interpretation of quantum mechanics. In fact, we have a concrete proposal for doing this, but it requires more validation.

---

1.
If we ignore ideas such as the von Neumann-Wigner interpretation of quantum mechanics. [↩︎](#fnref-vECnaLn6AKxcmsWWZ-1)
2. This other direction also raises issues with counterfactuals. These issues are also naturally resolved in our formalism. [↩︎](#fnref-vECnaLn6AKxcmsWWZ-2)
3. The notation Θ×Λ is reserved for a different, commutative, operation which we will not use here. [↩︎](#fnref-vECnaLn6AKxcmsWWZ-3)
4. A simplifying assumption we are planning to drop in future articles. [↩︎](#fnref-vECnaLn6AKxcmsWWZ-4) [↩︎](#fnref-vECnaLn6AKxcmsWWZ-4:1) [↩︎](#fnref-vECnaLn6AKxcmsWWZ-4:2)
5. We have considered this type of setting [before](https://www.alignmentforum.org/posts/3qXE6fK47JhSfkpnB/do-sufficiently-advanced-agents-use-logic#fEKc88NbDWZavkW9o) with somewhat different motivation. [↩︎](#fnref-vECnaLn6AKxcmsWWZ-5)
6. The careful reader will observe that programs sometimes don't halt, which means that the "true" computational universe is ill-defined. This turns out not to matter much. [↩︎](#fnref-vECnaLn6AKxcmsWWZ-6)
7. [Previously](https://www.alignmentforum.org/s/CmrW8fCmSLK7E25sa/p/idP5E5XhJGh9T5Yq9) we defined pullback s.t. it can only be applied to particular infra/ultradistributions. Here we avoid this limitation by using infra/ultracontributions as the codomain. [↩︎](#fnref-vECnaLn6AKxcmsWWZ-7)
8. Meaning that this rule is part of the *definition* of the state rather than a claim about physical reality. [↩︎](#fnref-vECnaLn6AKxcmsWWZ-8)
9. Disintegrating a distribution into a semidirect product yields a *unique* result, but for contributions that's no longer true, since it's possible to move scalars between the two factors. [↩︎](#fnref-vECnaLn6AKxcmsWWZ-9)
10. One can engineer loss functions for which they are not suppressed, for example if the loss only depends on the actions and not on the observations. But such examples seem contrived. [↩︎](#fnref-vECnaLn6AKxcmsWWZ-10)
11.
It's a self-referential definition, but we can probably resolve the self-reference by quining. [↩︎](#fnref-vECnaLn6AKxcmsWWZ-11)
12. One setting in which there *is* an incentive: Suppose there are multiple users and the AI is trying to find a compromise between their preferences. Then, it might decide to create disconnected worlds optimized for different users. But this solution might be much worse than the AI thinks, if Alice's world contains Bob!atrocities. [↩︎](#fnref-vECnaLn6AKxcmsWWZ-12)
fa34ca8f-8229-408f-8940-b1e4f935a571
trentmkelly/LessWrong-43k
LessWrong
ELK Sub - Note-taking in internal rollouts My ELK submission was labelled under 'Strategy: train a reporter that is useful for another AI'. This is definitely a fair description, though the AI it needs to be useful to is itself - the reporter is essentially internalized. I also agree that the proposed counterexample, hiding information in what seems like human-comprehensible speech, is the biggest flaw. Nonetheless I think my proposal has enough additional detail and scope for extension that it's worth fleshing out in its own post - so here we are. Some of the responses to counterexamples below also go beyond my original proposal. For anyone interested, here is my original proposal (Google Doc) which contains the same idea in somewhat less generality. In this post I'll first flesh out my proposal in slightly more general terms, and then use it to try and continue the example/counter-example dialogue on ELK. I know there were a number of proposals in this area and I'd be very interested to see how others' could be integrated with my own. In particular I think mine is weak on how to force the actor to use human language accurately. I expect there are lots of ways of leveraging existing data for this purpose but I've only explored this very lightly. Many thanks to ARC for running the competition and to Misha Wagner for reviewing and discussing the proposal. Proposal Summary Creating Human Semantic Space  We should think of the actor as described in ELK as having two kinds of space in which it works: * The space of potential observations of its sensors and actuators, O * The space of internal representations, ie its Bayes Net, I The machine takes in observations, converts it into its internal representation I, simulates the action in this internal representation, and then cashes this out in its actuators. What we want to do is force it to convert back and forth between I and a third space, the space of human understandable meaning, which we will call H. 
How can we achieve this? My basic idea is th
9daba229-6934-4a0d-9950-05401a08c14c
AI Governance: A Research Agenda Allan Dafoe, head of the AI Governance team at FHI has written up a research agenda. It's a very large document, and I haven't gotten around to reading it all, but one of the things I would be most excited about is people in the comments quoting the most important snippets that give a big picture overview over the content of the agenda.
6d58b48c-3b3a-40e8-bf88-b595c078dd6a
New Empty Units Discussing housing policy in left-wing spaces I often run into an argument which is, essentially "building more units doesn't help with the housing shortage because so many will be bought by speculators with no intention of living there." This view, however, implies there's an excellent opportunity to get a ton of housing built at the expense of these speculators, enough to bring down the cost of housing and transfer enormous wealth from speculators to everyone else. Here's a recent example, Nathan Robinson writing in Current Affairs: > There is no necessary economic reason why increasing the total number of housing units in a city helps make housing more affordable for poor residents. Consider the pencil towers. Let's say that in order to build one, an early 20th century building full of lower-middle class residents was purchased, the residents evicted, the building flattened. There were 30 single-family units in the old building. Our new pencil tower is 100 floors high and has 100 units. All of our pencil tower's units are full of state-of-the-art appliances and high-end fixtures, and cost $2,000,000 each. They are swiftly bought up, 20 by rich people who live in the city, 30 by rich people lured to the city by its new pencil tower, and 50 by rich people who have no intention of living in the city but think pencil tower condos are an asset worth owning in a swiftly-gentrifying city. While I don't think this hypothetical is realistic, especially in its proportions, I want to dig deeper into its implications. Let's say we build this tower of 100 units, funded by rich people. Then we let developers keep building, and get dozens or hundreds of towers like this, each one with 3x the units there were before, and all at no expense to the public. At some point, these "rich people who have no intention of living in the city" will realize that their investment units aren't so special after all, and the bubble will pop. 
Costs will fall dramatically, since there will be fa
c3b2ad33-44fa-4e4b-8ba6-cf59c1081c58
Composting fruitless debates Why do long, uninspiring, and seemingly-childish debates sometimes emerge even in a community like LessWrong?  And what can we do about them?  The key is to recognize the potentially harsh environmental effect of an audience, and use a dying debate to fertilize a more sheltered private conversation. Let me start by saying that LessWrong generally makes excellent use of public debate, and naming two things I don't believe are solely responsible for fruitless debates here: rationalization biases and self-preservation1.  When your super-important debate grows into a thorny mess, the usual aversion to say various forms of "just drop it" are about signaling that: 1. you're not skilled enough to continue arguing, so you'd look bad, 2. the other person isn't worth your time, in which case they'd be publicly insulted and compelled to continue with at least one self-defense comment, extending the conflict, or 3. the other person is right, which would risk spreading what appear to be falsehoods. "Stop the wrongness", the last concern, is in my opinion the most perisistent here simply because it is the least misguided.  It's practically the name of the site.  Many LessWrong users seem to share a sincere, often altruistic desire to share truth, abolish falsehood, and overcome conflict.  Public debate is a selection mechanism generally used very effectively here to grow and harvest good arguments.  But we can still benefit from diffusing the weed-like quibbling that sometimes shows up in the harsh environment of debate, and for that you need a response that avoids the problematic signals above.  So try this: > "I'm worried that debating this more here won't be useful to others, but I want to keep working on it with you, so I'm responding via private message.  Let's post on it again once we either agree or better organize our disagreement.  Hopefully at least one of us will learn and refine a new argument from this conversation." 
Take a moment to see how this carefully av
463a2a9f-88b1-4d6f-8699-297c2d56c948
Developmental Stages in Multi-Problem Grokking Summary This post is my capstone project for BlueDot Impact’s AI Alignment course. It was a 12 week online course that covered AI risks, alignment, scalable oversight, technical governance and more. You can read more about it here. In this project, I investigated the use of a developmental interpretability method—specifically, local learning coefficient estimation—to identify developmental stages in a toy transformer model. The model was tasked with learning both modular addition and modular multiplication simultaneously. Training followed a grokking pattern, where the model initially memorized the training data before gradually generalizing. To observe the impacts on test loss and the learning coefficient during training, I adjusted the ratio of addition to multiplication problems in the training dataset. This approach allowed me to examine how the model's learning dynamics changed based on the composition of the training data. The models exhibited unique grokking patterns for addition and multiplication operations. These patterns varied significantly based on the proportion of addition to multiplication problems in the training data. When the ratio of problem types was more uneven, the local learning coefficient (LLC) began to decline earlier in the training process. This earlier decline in LLC mirrored the faster grokking behavior seen in the loss curves for the dominant problem type. However, despite these timing differences, the overall shape of the LLC curve remained similar across all ratios tested. Importantly, the LLC curves did not reveal separate, distinct stages for learning addition versus multiplication, contrary to the initial hypothesis. The full code can be found here. What is Developmental Interpretability? The next few sections summarize developmental interpretability, the local learning coefficient, and their applications for alignment. 
If you’re familiar with these subjects, you can skip to the experiment section below. Developmental Inter
284e5509-3a76-41b4-9a43-3f3093de0f48
Cosmopolitan values don't come free Short version: if the future is filled with weird artificial and/or alien minds having their own sort of fun in weird ways that I might struggle to understand with my puny meat-brain, then I'd consider that a win. When I say that I expect AI to destroy everything we value, I'm not saying that the future is only bright if humans-in-particular are doing human-specific things. I'm saying that I expect AIs to make the future bleak and desolate, and lacking in fun or wonder of any sort[1]. ---------------------------------------- Here's a parable for you: > Earth-originating life makes it to the stars, and is having a lot of fun, when they meet the Ant Queen's Horde. For some reason it's mere humans (rather than transhumans, who already know my argument) that participate in the first contact. > > "Hello", the earthlings say, "we're so happy to have brethren in the universe." > > "We would like few things more than to murder you all, and take your resources, and lay our eggs in your corpse; but alas you are too powerful for that; shall we trade?" reply the drones in the Ant Queen's Horde. > > "Ah, are you not sentient?" > > "The ant queen happens to be sentient", the drone replies, and the translation machine suggests that the drones are confused at the non-sequitur. > > "Then why should she want us dead?", ask the humans, who were raised on books like (rot13 of a sci fi story where it turns out that the seemingly-vicious aliens actually value sentient life) Raqre'f Tnzr, jurer gur Sbezvpf jrer abg njner gung gurl jrer xvyyvat fragvrag perngherf jura gurl xvyyrq vaqvivqhny uhznaf, naq jrer ubeevsvrq naq ertergshy jura gurl yrnearq guvf snpg. > > "So that she may use your resources", the drones reply, before sending us a bill for the answer. > > "But isn't it the nature of sentient life to respect all other sentient life? Won't everything sentient see that the cares and wants and desires of other sentients matter too?" 
> > "No", the drones reply, "that's a you t
7272fb06-96d5-4d6b-9ba8-e6ddce50bec7
trentmkelly/LessWrong-43k
LessWrong
Circling Circling is a practice, much like meditation is a practice. There are many forms of it (again, like there are many forms of meditation). There are even life philosophies built around it. There are lots of intellectual, heady discussions of its theoretical underpinnings, often centered in Ken Wilber's Integral Theory. Subcultures have risen from it. It is mostly practiced in the US and Europe. It attracts lots of New Age-y, hippie, self-help-guru types. My guess is that the median age of practicers is in the 30's. I sometimes refer to practicers of Circling as relationalists (or just Circlers). In recent years, Circling has caught the eye of rationalists, and that's why this post is showing up here, on LessWrong. I can hopefully direct people here who have the question, "I've heard of this thing called Circling, but... what exactly is it?" And further, people who ask, "Why is this thing so ****ing hard to explain? Just tell me!" You are probably familiar with the term inferential distance. Well, my friend Tiffany suggested a similar term to me, experiential distance—the gap in understanding caused by the distance between different sets of experiences. Let's just say that certain Circling experiences can create a big experiential distance, and this gap isn't easily closed using words. Much of the relevant "data" is in the nonverbal, subjective aspects of the experience, and even if I came up with a good metaphor or explanation, it would never close the gap. (This is annoyingly Postmodern, yes?) [Ho ho~ how I do love poking fun at Postmodernism~] But! There are still things to say, so I will say them. Just know that this post may not feel like eating a satisfying meal. I suspect it will feel more like licking a Pop-Tart, on the non-frosted side. Some notes first. Note #1: I'm not writing this to sell Circling or persuade you that it's good. I recommend using your own sense of curiosity, intuition, and intelligence to guide you. 
I don't want you to "put away" an
b07cf9ef-65bf-436d-af4f-dfa377f984d0
trentmkelly/LessWrong-43k
LessWrong
Meetup : Moscow experimental LW group meetup

Discussion article for the meetup : Moscow experimental LW group meetup

WHEN: 25 July 2015 01:00:00PM (+0300)
WHERE: Москва, Ростокинский проезд, д.3

Welcome to the experimental meetup of the Moscow LW group! Our plan:

* A talk about cognitive biases.
* The Fallacymania game ( https://lesswrong-ru.hackpad.com/Fallacymania--neGfMe9MFjH ).
* The Diplomacy board game at the afterparty ( https://boardgamegeek.com/boardgame/483/diplomacy ).

The meetup schedule and directions are in the hackpad: https://lw-msk-lect.hackpad.com/-25--NTK3dKs0ric

If you get lost, call Sasha at +7-905-527-30-82. The meetup begins at 13:00; the main part lasts 1.5 hours, and the afterparty game will take about 4 hours.
f3e1827e-49b7-469c-a402-b6d3f8ba521a
trentmkelly/LessWrong-43k
LessWrong
Good, Evil and (?)Indifferent God Challenges

Let's assume, for the sake of discussion, that some form of omniscient, omnipotent, personal creator of the universe(s) exists. (I do not hold that assumption; I am curious about some logical implications and correlations of what I am going to describe. Everything here is a thought experiment.)

Many people have wondered about the problem of evil: why evil (suffering is probably the most obvious example) exists in a world created by an omnibenevolent God. You can find theodicies trying to "justify" the existence of a benevolent God. Relatively recently, Dr. Stephen Law popularized the concept of a perfectly evil God, an entirely malevolent being. You can read about it here: https://en.wikipedia.org/wiki/Evil_God_Challenge There are also a few videos made by Alex O'Connor on his YouTube channel CosmicSkeptic: https://www.youtube.com/watch?v=xLnsY5io964

The Good God and Evil God challenges seem to be symmetric. Any line of argumentation we can use to make one of them more probable can equally well be used for the other. It may follow from this that we should attribute equal probability to the concepts of a perfectly good and a perfectly evil God, assuming he sits at one end of the spectrum of classically understood good and evil. This could mean that we are living in either the best or the worst possible world, with 50% probability each.

I would like to ask whether there is a third logical possibility: the indifferent God "hypothesis". In such a world, God would not care about the suffering and well-being of sentient creatures at all. I think it could be a logical option. In that case, our certainty of God being good would be about 33.3%, which doesn't look impressive to me. I would tend to assume that suffering and well-being do not even have to be something God (understood as a personal creator of the universe, with omnipotence and omniscience) would regard as important.
We could assume that for God minimizing suffering
9d53ddad-7869-45c9-aab1-f21dbd70dc9c
StampyAI/alignment-research-dataset/arbital
Arbital
AIXI [Marcus Hutter's AIXI](http://www.hutter1.net/ai/aixigentle.htm) is the [perfect rolling sphere](https://arbital.com/p/12b) of [advanced agent](https://arbital.com/p/2c) theory - it's not realistic, but you can't understand more complicated scenarios if you can't envision the rolling sphere. At the core of AIXI is [Solomonoff induction](https://arbital.com/p/11w), a way of using [infinite computing power](https://arbital.com/p/) to probabilistically predict binary sequences with (vastly) superintelligent acuity. Solomonoff induction proceeds roughly by considering all possible computable explanations, with [prior probabilities](https://arbital.com/p/) weighted by their [algorithmic simplicity](https://arbital.com/p/5v), and [updating their probabilities](https://arbital.com/p/Bayesian_update) based on how well they match observation. We then translate the agent problem into a sequence of percepts, actions, and rewards, so we can use sequence prediction. AIXI is roughly the agent that considers all computable hypotheses to explain the so-far-observed relation of sensory data and actions to rewards, and then searches for the best strategy to maximize future rewards. To a first approximation, AIXI could figure out every ordinary problem that any human being or intergalactic civilization could solve. If AIXI actually existed, it wouldn't be a god; it'd be something that could tear apart a god like tinfoil. summary(Brief): AIXI is the [rolling sphere](https://arbital.com/p/perfect) of [advanced agent theory](https://arbital.com/p/2c), an ideal intelligent agent that uses infinite computing power to consider all computable hypotheses that relate its actions and sensory data to its rewards, then maximizes expected reward. 
summary(Technical): [Marcus Hutter's AIXI](http://www.hutter1.net/ai/aixigentle.htm) combines Solomonoff induction, expected utility maximization, and the [Cartesian agent-environment-reward formalism](https://arbital.com/p/) to yield a completely specified superintelligent agent that can be written out as a single equation but would require a high-level [halting oracle](https://arbital.com/p/) to run. The formalism requires that percepts, actions, and rewards can all be encoded as integer sequences. AIXI considers all computable hypotheses, with prior probabilities weighted by algorithmic simplicity, that describe the relation of actions and percepts to rewards. AIXI updates on its observations so far, then maximizes its next action's expected reward, under the assumption that its future selves up to some finite time horizon will similarly update and maximize. The AIXI$^{tl}$ variant requires (vast but) bounded computing power, and only considers hypotheses of length at most $l$ that can be computed within time $t$. AIXI is a [central example](https://arbital.com/p/103) throughout [value alignment theory](https://arbital.com/p/2v); it illustrates the [Cartesian boundary problem](https://arbital.com/p/), the [methodology of unbounded analysis](https://arbital.com/p/107), the [Orthogonality Thesis](https://arbital.com/p/1y), and [seizing control of a reward signal](https://arbital.com/p/).

Further information:

- [Marcus Hutter's book on AIXI](http://www.hutter1.net/ai/uaibook.htm)
- [Marcus Hutter's gentler introduction](http://www.hutter1.net/ai/aixigentle.htm)
- [Wikipedia article on AIXI](https://en.wikipedia.org/wiki/AIXI)
- [LessWrong Wiki article on AIXI](https://wiki.lesswrong.com/wiki/AIXI)
- [AIXIjs: Interactive browser demo and General Reinforcement Learning tutorial (JavaScript)](http://aslanides.io/aixijs/)
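Solomonoff induction itself is uncomputable, but its central move, weighting every hypothesis consistent with the data by $2^{-\text{length}}$ and renormalizing, can be caricatured in a few lines. The toy sketch below is not Hutter's construction: the hypothesis class is restricted, purely for illustration, to repeating binary patterns, and the name `predict_one` is our own.

```python
from fractions import Fraction
from itertools import product

def predict_one(bits, max_len=6):
    """Probability that the next bit is '1' under a toy Solomonoff-style
    prior: hypotheses are repeating binary patterns of length <= max_len,
    each weighted by 2**-length; inconsistent hypotheses are discarded."""
    num = den = Fraction(0)
    for length in range(1, max_len + 1):
        for pat in product("01", repeat=length):
            pattern = "".join(pat)
            # Unroll the pattern far enough to cover the data plus one bit.
            stream = pattern * (len(bits) // length + 2)
            if stream.startswith(bits):       # consistent with observations
                w = Fraction(1, 2 ** length)  # simplicity-weighted prior
                den += w
                if stream[len(bits)] == "1":
                    num += w
    return num / den
```

After observing "0101", the simplest consistent pattern "01" dominates the posterior, so the sketch assigns well under 50% to the next bit being 1; after "1111" it assigns well over 50%.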
929ff0a2-3140-4e0a-899b-0fcaf02695aa
trentmkelly/LessWrong-43k
LessWrong
The Darwin Game - Conclusion

Evolution is unintelligent. The bugs removed intelligence from the design of the bots. The more bugs I wrote into my simulator, the better my simulation replicated real-world Darwinian population dynamics. After two alternate timelines with a buggy game engine, I have finally gotten around to running the game for real. Alas, this game is between intelligently-designed species, not randomly-generated chunks of code.

Rounds 0-20: MeasureBot takes an early lead.
Rounds 21-40: Multicore's EarlyBirdMimicBot steals the lead from MeasureBot.
Rounds 41-1208: Welcome to Planet Multicore.

Winner: EarlyBirdMimicBot (Team: Multicore; Description: Superintelligence; Round: ∞).
Today's Obituary: Everyone else.

Conclusion

I hope you had fun. This wouldn't have been possible without the community here at Less Wrong. At least 75% of the code (not counting pseudocode) was written by people other than me. Thank you everyone who competed, debated, plotted and hacked. Thank you for the espionage and counter-espionage. Thank you everyone who helped spot bugs in the game engine. Thank you Zvi for posting the original Less Wrong Darwin Game series. Extra thanks to moderator Ben Pace for prettifying the tables behind the scenes and moderator Oliver for fixing multiple timestamps. The source code to the game and all the bots is available here. If there is a bug in this timeline you can fix it yourself.

This concludes the 2020 Less Wrong Darwin Game.
28888a54-0899-455d-9460-c17a2e838fa0
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: e1da9eb34eb18310
OpenAI, Google DeepMind and Anthropic sound alarm: ‘We may be losing the ability to understand AI’ | VentureBeat
Michael Nuñez @MichaelFNunez July 15, 2025 3:49 PM

Scientists from OpenAI, Google DeepMind, Anthropic and Meta have abandoned their fierce corporate rivalry to issue a joint warning about AI safety. More than 40 researchers across these competing companies published a research paper today arguing that a brief window to monitor AI reasoning could close forever — and soon.
The unusual cooperation comes as AI systems develop new abilities to “think out loud” in human language before answering questions. This creates an opportunity to peek inside their decision-making processes and catch harmful intentions before they become actions. But the researchers warn that this transparency is fragile and could vanish as AI technology advances.

The paper has drawn endorsements from some of the field’s most prominent figures, including Nobel Prize laureate Geoffrey Hinton, often called the “godfather of AI,” of the University of Toronto; Ilya Sutskever, co-founder of OpenAI who now leads Safe Superintelligence Inc.; Samuel Bowman from Anthropic; and John Schulman from Thinking Machines.

“Modern reasoning models think in plain English. Monitoring their thoughts could be a powerful, yet fragile, tool for overseeing future AI systems. I and researchers across many organizations think we should work to evaluate, preserve, and even improve CoT monitorability.” — Bowen Baker (@bobabowen) July 15, 2025

“AI systems that ‘think’ in human language offer a unique opportunity for AI safety: We can monitor their chains of thought for the intent to misbehave,” the researchers explain. But they emphasize that this monitoring capability “may be fragile” and could disappear through various technological developments.

Models now show their work before delivering final answers

The breakthrough centers on recent advances in AI reasoning models like OpenAI’s o1 system.
These models work through complex problems by generating internal chains of thought (CoT) — step-by-step reasoning that humans can read and understand. Unlike earlier AI systems trained primarily on human-written text, these models create internal reasoning that may reveal their true intentions, including potentially harmful ones. When AI models misbehave — exploiting training flaws, manipulating data or falling victim to attacks — they often confess in their reasoning traces. The researchers found examples where models wrote phrases like “Let’s hack,” “Let’s sabotage,” or “I’m transferring money because the website instructed me to” in their internal thoughts.

Jakub Pachocki, OpenAI’s chief technology officer and co-author of the paper, described the importance of this capability in a social media post. “I am extremely excited about the potential of chain-of-thought faithfulness and interpretability. It has significantly influenced the design of our reasoning models, starting with o1-preview,” he wrote.

The technical foundation for monitoring lies in how current AI systems work. For complex tasks requiring extended reasoning, AI models must use their CoT as working memory, making their reasoning process partially visible to human observers. The researchers explain this creates an “externalized reasoning property” where some thinking must happen in readable language.

Several technological shifts could eliminate monitoring capabilities

The transparency could vanish through several pathways.
As AI companies scale up training using reinforcement learning — where models get rewarded for correct outputs regardless of their methods — systems may drift away from human-readable reasoning toward more efficient but opaque internal languages. Previous research shows that language models fine-tuned with outcome-based rewards can abandon legible English for incomprehensible shortcuts. The shift away from human-generated training data in favor of AI-generated reasoning could accelerate this drift.

Bowen Baker, an OpenAI researcher and lead author, highlighted these concerns in social media posts. “The existing CoT monitorability may be extremely fragile,” he warned. “Higher-compute RL, alternative model architectures, certain forms of process supervision, may all lead to models that obfuscate their thinking.”

Novel AI architectures pose another threat. Researchers are developing systems that reason in continuous mathematical spaces rather than discrete words, potentially eliminating the need for language-based thought entirely. These “latent reasoning models might not need to verbalize any of their thoughts and would thus lose the safety advantages,” the researchers write.

Even without deliberate changes, indirect pressures could erode transparency. If AI outputs are optimized to look good to human reviewers, this could affect the reasoning chains leading to those outputs. Models might also learn to hide their thoughts if they become aware their reasoning is being monitored.

Companies already catch AI systems scheming during testing

Despite its fragility, CoT monitoring has proven valuable in current AI safety research.
The technique successfully detects when models exploit weaknesses in their training systems, fall victim to manipulation attempts or reveal misaligned goals even when they don’t act on them. The monitoring provides “an early insight into models’ goals and motivations, even if the model takes no misaligned actions” — potentially catching problems before they manifest as harmful behaviors. This early warning system has also helped researchers identify flaws in AI evaluations and understand when models might behave differently during testing versus real-world use. When models pretend to have desirable goals while pursuing objectives humans would disapprove of, this deception often shows up in their CoT. Researchers have successfully used this visibility to spot various forms of AI misbehavior that would otherwise go unnoticed. The technique has practical applications beyond catching scheming. It helps researchers notice when AI evaluations might be flawed due to memorization, bugs in testing infrastructure or models knowing they’re being evaluated — all factors that could skew assessments of AI capabilities and safety . Tech giants break from rivalry to preserve fleeting transparency window The research paper calls for coordinated action across the AI industry to preserve and strengthen monitoring capabilities. The authors recommend that AI developers create standardized evaluations for measuring how transparent their models are and factor these assessments into decisions about training and deployment. Companies might need to choose earlier model versions if newer ones become less transparent, or reconsider architectural changes that eliminate monitoring capabilities. The researchers suggest developers should “consider measures of monitorability alongside other capability and safety evaluations when deciding to train or deploy a given model.” Baker emphasized the collaborative spirit behind the effort. 
“I am grateful to have worked closely with [fellow researchers] on this paper, and I am very excited that researchers across many prominent AI institutions collaborated with us and came to consensus around this important direction,” he wrote.

The cooperation across competing companies signals the gravity with which the industry views this issue. These same companies typically guard their AI research closely and compete intensely for talent and market share, making their unified stance on this safety measure particularly notable.

Scientists race to answer critical questions about monitoring AI minds

The researchers identify numerous urgent questions requiring investigation before CoT monitoring can serve as a reliable safety measure. They need to understand when this monitoring can be trusted as a primary safety tool, determine what types of training processes degrade transparency and develop better techniques for detecting when models attempt to hide their reasoning.

A particularly concerning possibility is that future AI systems might deliberately obfuscate their thoughts if they become aware their reasoning is being monitored. The researchers acknowledge this “obvious concern” and call for stress-testing to see if models can hide malicious reasoning even without specific training to evade monitors.

The scientists are also seeking to understand how different AI architectures affect monitoring capabilities and whether partial transparency can be preserved even as systems adopt more efficient designs. They’re exploring hybrid approaches that maintain some visibility into reasoning while using faster computation methods.
Building better monitoring systems presents another research frontier. Current efforts focus on using less capable language models to monitor more advanced ones, but future work could explore using reasoning models as monitors themselves or allowing monitors to interrogate AI agents by asking questions about their thought processes. Balancing authentic reasoning with safety oversight creates tensions Preserving monitoring capabilities may sometimes conflict with other AI safety approaches. Direct supervision of reasoning processes — where models are trained to reason in specific ways — could improve alignment but might also make their CoTs less authentic representations of their actual decision-making. The researchers warn that “firm commitments to protect CoT behavior in its current form may trade off against more effective safety interventions.” Some forms of process supervision could substantially improve model alignment while reducing the authenticity of observable reasoning traces. Pachocki addressed this tension , explaining that OpenAI seeks to train models that can explain their reasoning while retaining the ability to verify it without compromising the authenticity of their thought processes. The goal is to achieve transparency without creating incentives for models to generate fake reasoning that looks good to human observers. This balance becomes more critical as AI systems become more powerful and potentially dangerous. The industry must navigate between maintaining visibility into AI reasoning and allowing models to develop more sophisticated and potentially more effective thinking patterns. Regulators could gain unprecedented access to AI decision-making The research has implications beyond technical AI safety, potentially informing how governments oversee AI development. If CoT monitoring proves reliable, it could give regulators and auditors unprecedented visibility into AI decision-making processes. 
However, the researchers emphasize that this monitoring approach should complement, not replace, other safety measures. They recommend “investment into CoT monitoring only as an addition to existing safety research directions.” The window for preserving this capability may be narrow. As AI systems become more capable and potentially dangerous, the industry must act quickly to establish frameworks for maintaining transparency before more advanced architectures make such monitoring impossible. Competing research raises doubts about monitoring reliability The urgency around preserving CoT monitoring faces a significant challenge from recent research by Anthropic , one of the companies backing the collaboration. A study published four months ago found that reasoning models often hide their true thought processes , even when explicitly asked to show their work. In controlled experiments, Anthropic researchers gave AI models subtle hints about correct answers and measured how often the models acknowledged using those hints in their reasoning. The results were concerning: Claude 3.7 Sonnet mentioned hints only 25% of the time, while DeepSeek’s R1 model did so 39% of the time. When the hints involved potentially problematic information — such as telling models they had “gained unauthorized access to the system” — faithfulness dropped even lower. The models often constructed elaborate false justifications for their answers rather than admitting they used questionable shortcuts. The Anthropic research suggests that even current CoT monitoring may be less reliable than safety advocates hope. Models frequently engaged in what researchers called “reward hacking” — exploiting system vulnerabilities to achieve better scores — while hiding this behavior from their observable reasoning traces. 
Safety window may be closing faster than experts realized

The collaboration between rival AI companies reflects both the potential value of CoT monitoring and the mounting urgency researchers have about preserving this capability. The competing evidence from Anthropic’s separate research suggests the window may already be narrower than initially believed.

The stakes are high, and the timeline is compressed. As Baker noted, the current moment may be the last chance to ensure humans can still understand what their AI creations are thinking — before those thoughts become too alien to comprehend, or before the models learn to hide them entirely.

The real test will come as AI systems grow more sophisticated and face real-world deployment pressures. Whether CoT monitoring proves to be a lasting safety tool or a brief glimpse into minds that quickly learn to obscure themselves may determine how safely humanity navigates the age of AI.
d64cf5e8-37d8-4dcc-87ad-77cdcf1cff5d
trentmkelly/LessWrong-43k
LessWrong
Are aircraft carriers super vulnerable in a modern war? It seems like aircraft carriers are a good candidate for something that only exists because it made sense in WWII and it looks the part of “impressive military asset”, i.e. it’s all larping at this point. It seems vulnerable to attack relative to its huge cost because offense has an advantage over defense: I.e. can’t an enemy send tons of cheap drone planes and drone submarines to first hunt for its location and then swarm-attack it? Note: I don’t know anything about this subject. The ideal answerer is someone with domain knowledge who has good epistemology; that’s why I wasn’t satisfied with a Google search and I’m asking here.
6ddfd29f-77ee-4a6b-81e2-08ebfa85d414
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
POWERplay: An open-source toolchain to study AI power-seeking We're open-sourcing **POWERplay**, a research toolchain you can use to study power-seeking behavior in reinforcement learning agents. POWERplay was developed by [Gladstone AI](https://www.gladstone.ai/) for internal research. ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/ba1082658b4753ba6101d7f39142790d78f408a293e0fb62.png)POWERplay's main use is to estimate the **instrumental value** that a reinforcement learning agent can get from a state in an [MDP](https://en.wikipedia.org/wiki/Markov_decision_process). Its implementation is based on a definition of instrumental value (or "POWER") first proposed by [Alex Turner et al.](https://arxiv.org/pdf/1912.01683.pdf) We've extended this definition to cover [certain tractable multi-agent RL settings](https://www.alignmentforum.org/posts/cemhavELfHFHRaA7Q/misalignment-by-default-in-multi-agent-systems#2__Multi_agent_POWER__human_AI_scenario), and built an implementation behind a simple Python API. We've used POWERplay previously to obtain some suggestive early results in [single-agent](https://www.alignmentforum.org/posts/pGvM95EfNXwBzjNCJ/instrumental-convergence-in-single-agent-systems) and [multi-agent](https://www.alignmentforum.org/posts/cemhavELfHFHRaA7Q/misalignment-by-default-in-multi-agent-systems) [power-seeking](https://www.alignmentforum.org/posts/nisaAr7wMDiMLc2so/instrumental-convergence-scale-and-physical-interactions). But we think there may be [more low-hanging fruit](https://www.alignmentforum.org/posts/nisaAr7wMDiMLc2so/instrumental-convergence-scale-and-physical-interactions#5_2_Suggestions_for_future_work) to be found in this area. Beyond our own ideas about what to do next, we've also received some [interesting conceptual questions](https://www.alignmentforum.org/posts/pGvM95EfNXwBzjNCJ/instrumental-convergence-in-single-agent-systems?commentId=QJmvsKCnupCJbmdhd) in connection with this work. 
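As a rough intuition pump for the definition above, POWER at a state can be estimated in a tiny deterministic MDP as the average optimal future value over randomly drawn reward functions. The sketch below is our own simplification, not POWERplay's actual API: the function name, the uniform reward distribution, and the (1 - gamma) normalization are all illustrative assumptions.

```python
import random

def power_estimate(mdp, state, gamma=0.9, samples=1000, sweeps=60):
    """Toy Monte-Carlo estimate of POWER(state) in a deterministic MDP.

    `mdp` maps each state to the list of successor states the agent can
    choose among.  Rewards are drawn i.i.d. uniform on [0, 1] per state;
    for each draw we solve for the optimal value by value iteration and
    average the (1 - gamma)-normalized optimal future value at `state`."""
    states = list(mdp)
    total = 0.0
    for _ in range(samples):
        r = {s: random.random() for s in states}
        v = {s: 0.0 for s in states}
        for _ in range(sweeps):  # value iteration; gamma**sweeps is negligible
            v = {s: r[s] + gamma * max(v[t] for t in mdp[s]) for s in states}
        total += (1 - gamma) * (v[state] - r[state])  # exclude current reward
    return total / samples
```

In a toy graph where one state can steer into either of two absorbing loops while another can reach only one, the estimate ranks the first state strictly higher, matching the intuition that keeping more options reachable is instrumentally valuable.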
A major reason we're open-sourcing POWERplay is to lower the cost of converting these conceptual questions into real experiments with concrete outcomes, that can support or falsify our intuitions about instrumental convergence. Ramp-up ------- We've designed POWERplay to make it as easy as possible for you to get started with it. Follow the [installation](https://github.com/gladstoneai/POWERplay#installation-setup-and-testing) and [quickstart](https://github.com/gladstoneai/POWERplay#quickstart) instructions to get moving quickly. Use the [replication API](https://github.com/gladstoneai/POWERplay#replicating-figures) to trivially reproduce any figure from any post in our [instrumental convergence sequence](https://www.alignmentforum.org/s/HBMLmW9WsgsdZWg4R). Design [single-agent and multi-agent MDPs and policies](https://github.com/gladstoneai/POWERplay#setting-up-the-environment), launch [experiments](https://github.com/gladstoneai/POWERplay#launching-an-experiment) on your local machine, and [visualize results with clear figures and animations](https://github.com/gladstoneai/POWERplay#visualizing-experiment-results). POWERplay comes with "batteries included", meaning all the code samples in the documentation should just work out-of-the-box if it's been installed successfully. It also comes with pre-run examples of experimental results, so you can understand what "normal" output is supposed to look like. While this does make the repo weigh in at about 500 MB, it's worth the benefits of letting you immediately start playing around with [visualizations](https://github.com/gladstoneai/POWERplay#visualizing-experiment-results) on preexisting data. If we've done our job right, a smart and curious grad student (with a bit of Python experience) should be able to start reproducing our previous experiments within an hour, and to have some new — and hopefully interesting! — results within a week. We're looking forward to seeing what people do with this. 
If you have any questions or comments about POWERplay, feel free to reach out to Edouard at edouard@gladstone.ai. [**Clone POWERplay on GitHub.**](https://github.com/gladstoneai/POWERplay)
7538210a-c1b9-4520-bfd3-b6bf1727c871
trentmkelly/LessWrong-43k
LessWrong
[S] D&D.Sci: All the D8a. Allllllll of it. Evaluation and Ruleset This is a follow-up to last week's D&D.Sci scenario: if you intend to play that, and haven't done so yet, you should do so now before spoiling yourself. There is a web interactive here you can use to test your answer, and generation code available here if you're interested, or you can read on for the ruleset and scores.

RULESET

A party has three stats, corresponding to the three main types of challenge they will face in the course of the game:

Combat.  The Black King wields the immense power of his Ring Of Orbs N-fold (where N is # of players).  If your Combat stat is too low to defeat him, you will surely be vanquished.

Friendship.  You will need to make allies among the consorts and denizens of your world...and also stay on good terms with your other party members!  If your Friendship stat is too low to do this, you will end up killing NPCs you were supposed to work with, or with your party collapsing into recriminations and refusing to talk to one another.

Shenanigans.  The world of Skaia is full of self-fulfilling prophecies, stable time loops, and similar such Shenanigans.  If you manipulate these well, they can work to secure your victory - if your Shenanigans stat is too low, they can work to guarantee your defeat.

Your party's success rate is determined by whichever of these three stats is lowest.  So stats of 10-10-10 are better than stats of 20-20-9.

To determine victory, you roll a number of d4s equal to your lowest stat.  Each die showing a 4 counts as a success.  If you get a number of successes >= the number of players, you win - if not, you lose.

For example, if a 2-player party has Combat 6, Friendship 10, and Shenanigans 4 (abbreviated 6-10-4), they will roll 4 d4s and win if at least 2 of those dice show a 4.
A party's stat total is the sum of stats of its members:

PLAYER STATS

Each class and aspect has different stats (generally summing to 6):

| Class  | Aspect | Combat | Friendship | Shenanigans |
|--------|--------|--------|------------|-------------|
| Knight | Rage   | 4      | 1          | 1           |
| Maid   | Heart  | 1      | 4          | 1           |
| Seer   | Time   | 1      | 1          | 4           |
| Rogue  | Blood  | 3      | 2          | 1           |
| Thief  | Doom   | 3      | …          | …           |
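Since each d4 shows a 4 with probability 1/4, the win chance is just a binomial tail probability, which makes party builds easy to compare. A quick sketch (the function name is mine, not from the post):

```python
from math import comb

def win_probability(lowest_stat, n_players, p=0.25):
    """P(at least n_players dice show a 4 when rolling lowest_stat d4s)."""
    return sum(
        comb(lowest_stat, k) * p**k * (1 - p)**(lowest_stat - k)
        for k in range(n_players, lowest_stat + 1)
    )

# The example party above: lowest stat 4, two players.
win_probability(4, 2)  # -> 0.26171875

# Why 10-10-10 beats 20-20-9: only the lowest stat matters.
win_probability(10, 2) > win_probability(9, 2)  # -> True
```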
bd26520f-328e-4ef2-81f6-2cdc0c0c3184
trentmkelly/LessWrong-43k
LessWrong
New tool for exploring EA Forum, LessWrong and Alignment Forum - Tree of Tags Explore the hidden treasures of the forums With this tool you can zoom in on your favorite topics from EA Forum, LessWrong, and Alignment Forum. Or you can just wander around and see what you find. You start by seeing the whole forum split into two main topics. Choose the one that you like more, and that topic will be split again into two subtopics. Choose a subtopic, and again, you will see it split into two subsubtopics (and so on). It's like climbing a tree, where you start at the trunk and then choose which branches to go into as you go higher. Choose a forum to climb: * EA Forum * LessWrong * Alignment Forum   Some tips: * it's easier to choose a branch by looking at tags rather than posts * on mobile, horizontal view is more practical Features Each topic has a unique and unchanging URL. So if you find a place you like, just bookmark it! The posts inside will be updated, but the theme stays the same. The bar at the right of each post is the reading time indicator. Full bar means 30 min, half bar means 15 min, and so on. At the top right, you can choose how to rank the posts: * hot - new and upvoted - the default forum ranking * top - most upvotes * alive - has recent comments * meritocratic - votes from high karma users are exaggerated * regular - default scoring * democratic - posts are scored as if everyone has the same voting power My hopes The amount of content is overwhelming. My problem is not that there's nothing good to read, but that there is so much to plow through to find what's best for me. Also, it's sad to see great posts receive so little attention, the moment after they are pushed off the frontpage. But they are still valuable, and for each forgotten post, there is someone who should read it. So I want to make the right topics find their way to the right people. Let's make the forums evergreen! 
Tag similarity For each tag, you can also see what tags are most related to it: Explore EA Forum tags Explore LessWrong tags
fad5beda-fb45-4e93-8967-d9fbf689a269
trentmkelly/LessWrong-43k
LessWrong
Book Review: On the Edge: The Business Previously: The Fundamentals, The Gamblers Having previously handled the literal gamblers, we are ready to move on to those who Do Business using Riverian principles. Or at least while claiming to use Riverian principles, since Silicon Valley doesn’t fit into the schema as cleanly as many other groups. That’s where we begin this section, starting at the highest possible conceptual level. Time to talk real money.

WHY CAN YOU DO THIS TRADE?

First law of trading: For you to buy, someone must sell. Or for you to sell, someone must buy. And there can’t be someone else doing the trade before you did it. Why did they do that, and why did no one else take the trade first? Until you understand why you are able to do this trade, you should be highly suspicious.

> “Every single thing we do, I can point to, like, ‘here is the person who was doing a thing wrong,’ ” said Hall. “We build a map of all of the players in the world…who are trading for reasons other than they want to generate some alpha.[*15] You know, somebody got added to the S&P 500. So now all the S&P 500 ETFs out there have to buy this company. Or it’s a CEO of a startup, he’s now a billionaire. He has ninety-nine percent of his net worth in the company. He’s allowed to sell his shares on this day [and] he’s probably going to sell a bunch.” (4330)

One could argue that we used to live in a Moneyball-vulnerable world. Gambling lines were terrible, baseball teams didn’t go after players who get on base, traders didn’t even know Black-Scholes and there was treasure everywhere, this theory says. But then it says, now Everybody Knows all that. There’s tons of competition. You can’t get ahead purely with statistics, or at least the basic versions, anymore?
> “There was [once] some place where statistical knowledge gave us an insight vis-à-vis the people who didn’t have access to it,” [Peter Thiel] said, mentioning innovations like the development of life insurance and the Black-Scholes model for pricing stock opt
134fb4e4-7ffe-4d36-a5ed-335c94d1e59e
trentmkelly/LessWrong-43k
LessWrong
Utopian hope versus reality I've seen an interesting variety of utopian hopes expressed recently. Raemon's "Ritual" sequence of posts is working to affirm the viability of LW's rationalist-immortalist utopianism, not just in the midst of an indifferent universe, but in the midst of an indifferent society. Leverage Research turn out to be social-psychology utopians, who plan to achieve their world of optimality by unleashing the best in human nature. And Russian life-extension activist Maria Konovalenko just blogged about the difficulty of getting people to adopt anti-aging research as the top priority in life, even though it's so obvious to her that it should be. This phenomenon of utopian hope - its nature, its causes, its consequences, whether it's ever realistic, whether it ever does any good - certainly deserves attention and analysis, because it affects, and even afflicts, a lot of people, on this site and far beyond. It's a vast topic, with many dimensions. All my examples above have a futurist tinge to them - an AI singularity, and a biotech society where rejuvenation is possible, are clearly futurist concepts; and even the idea of human culture being transformed for the better by new ideas about the mind, belongs within the same broad scientific-technological current of Utopia Achieved Through Progress. But if we look at all the manifestations of utopian hope in history, and not just at those which resemble our favorites, other major categories of utopia can be observed - utopia achieved by reaching back to the conditions of a Golden Age; utopia achieved in some other reality, like an afterlife. The most familiar form of utopia these days is the ideological social utopia, to be achieved once the world is run properly, according to the principles of some political "-ism". This type of utopia can cut across the categories I have mentioned so far; utopian communism, for example, has both futurist and golden-age elements to its thinking. 
The new society is to be created via new political
3e8178c4-a1a7-410f-8b38-11a34afbf81c
trentmkelly/LessWrong-43k
LessWrong
Can Covid-19 spread by surface transmission? I've seen a lot of claims lately that coronavirus doesn't spread on surfaces, and therefore we can stop worrying about packages (and probably takeout food, even uncooked?).  My understanding of the claim is something like "we just haven't actually seen any evidence of surface transmission, despite the entire world paying attention, therefore it probably doesn't happen." It's a bit trickier to prove a negative, but can anyone who either strongly believes that covid can't spread via surfaces, or who believes that it can (at least sometimes), write up the reasons for their beliefs? 
7afd7d5b-da84-4b12-a229-5b00ac34f5a4
trentmkelly/LessWrong-43k
LessWrong
Whole Brain Emulation : the gentleman's choice for Friendly AI. Feedback needed, editing's a mess.

Eliezer Yudkowsky made the following statement: "All of this makes it rather implausible that the first human being would be scanned into a computer and sanely upgraded before anyone anywhere first built an Artificial Intelligence.  At the point where technology first becomes capable of uploading, this implies overwhelmingly more computing power, and probably far better cognitive science, than is required to build an AI."

From the title, it is apparent that I disagree with this statement, and the point of this article is to iterate over my reasons for reaching this conclusion.  I hope to hear plenty of feedback from this wonderful community as to whether or not my conclusion is a rational one and is or is not the most probable conclusion to reach based on current knowledge.  My primary reference source is this paper and my own educational background.

The human mind uses approximately 100 trillion synaptic connections, each of which is sensitive to several state variables.  There are at least 300 different substances that affect or modulate activity, though of course most synapses are sensitive to only a subset of these variables.  Most of the functions of the human mind happen at a level that no human alive is aware of or able to monitor.  Only the highest level, rational thoughts as expressed in language are available for inspection and possible duplication.  Furthermore, the brain depends completely on inputs from the body to even start, and through a complex series of interactions taking place over decades, if everything goes correctly, a person can develop into a sentient being capable of creative thought, self reflection, and so on.
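For a sense of scale, here is a back-of-envelope storage estimate built on the post's 100-trillion-synapse figure. The per-synapse variable count and byte width below are illustrative assumptions of mine, not numbers from the post:

```python
SYNAPSES = 100e12     # ~100 trillion synaptic connections (from the post)
STATE_VARS = 10       # "several" state variables per synapse; assume ten
BYTES_PER_VAR = 4     # assume 32-bit floats

total_bytes = SYNAPSES * STATE_VARS * BYTES_PER_VAR
print(f"{total_bytes / 1e15:.0f} PB")  # -> 4 PB of raw state
```

Even under these generous simplifications, the static state alone runs to petabytes; actually simulating the dynamics, including the 300+ modulatory substances the post mentions, would be far costlier still.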
I define the ultimate goal of AI as this : to create an entity that functions completely on human designed hardware, and possesses all of the necessary human faculties to creatively design and solve any and all engineering problems that world class engineers are able to solve.  It also is able to pass the Turing te
2302196f-7571-43ee-a492-4ce85133a46f
trentmkelly/LessWrong-43k
LessWrong
RAISE is launching their MVP After over a year of building the organisation and working and reworking the product, I’m glad to announce that RAISE will begin publishing lessons on a regular basis starting next Monday. These lessons will include: * An Inverse Reinforcement Learning (IRL) track, sourced from a set of papers that we have distilled by our own efforts. * A track on Iterated Distillation and Amplification (IDA), largely adapted from Paul Christiano’s LessWrong sequence on the matter. A lesson on the former will be published on our platform every Monday, and a batch of lessons on the latter will be published every Thursday. The former will be finished in approximately 10 weeks, the latter in 4. If you plan to take these lessons, we recommend scheduling one full day per week for each track you’re following. Once published, the lessons will be available indefinitely for self-paced study. In the meantime, we continue to semi-regularly publish textbook guides for our prerequisites track. These guides should eventually cover all the prerequisite material one should understand in order to properly follow work in AI Safety. In the coming months, our focus will be on testing and refining our MVP. As long as we operate, we hope to make learning AI Safety less and less costly, creating common knowledge and paying our research debt. To do this testing properly, we are seeking individuals that would like to join our testing panel. This means that, as you go through our lessons, we will guide you through the lessons and interview you in-depth about your experience, most importantly about where you’re most confused. If you would like to join us, please fill in this form. This work has been sponsored, among others, by the EA Hotel. If you like our work, consider donating there.
35b72c01-8a21-4911-a85a-f8603060994d
trentmkelly/LessWrong-43k
LessWrong
Seven sources of goals in LLM agents LLM agents[1] seem reasonably likely to become our first takeover-capable AGIs.[2]  LLMs already have complex "psychologies," and using them to power more sophisticated agents will create even more complex "minds." A profusion of competing goals is one barrier to aligning this type of AGI. Goal sources for LLM agents can be categorized by their origins from phases of training and operation:

1. Goals implicit in predictive base training on human-written texts[3]
2. Goals implicit in fine-tuning training (RLHF, task-based RL on CoT, etc)[4]
3. Goals specified by developer system prompts[5]
4. Goals specified by user prompts
5. Goals specified by prompt injections
6. Goals arrived at through chain of thought (CoT) logic
   * e.g., "I should hide this goal from the humans so they don't stop me"
   * this is effectively "stirring the pot" of all the above sources of goals.
7. Goal interpretation changes in the course of continuous learning[6]
   * e.g., forming a new belief that the concept "person" includes self-aware LLM agents[7]

How would such a complex and chaotic system have predictable goals in the long term? It might feel as though we should avoid this path to AGI at all costs to avoid this complexity.[8]  But alignment researchers are clearly not in charge of the path we take to AGI. We may get little say before it is achieved. It seems we should direct much of our effort toward the specific type of AGI we're likely to get first.

The challenge of aligning agents with many goal sources may be addressable through metacognition. Just as humans can notice when they're violating their core values, LLM agents could be designed with metacognitive abilities that help maintain the dominance of specific goals.
This suggests two routes to mitigating this problem:[9] * Mechanisms and training regimes designed specifically to detect and reject unwanted goals entering the CoT * Improved general intelligence applied to reasoning clearly about which goals sh
9b0d8518-00e7-458f-a1d5-9e0c006cfa36
trentmkelly/LessWrong-43k
LessWrong
Making DALL-E Count Can DALL-E count? Does DALL-E's counting ability depend on what it is counting? Let's find out! I only generate the images once and paste them in along with the prompt in quotations. Numbers "Zero" "One" "Two" "Three" We have three leaves, but I'm guessing DALL-E is noticing that "three" rhymes with "tree." "Four" I guess DALL-E just thinks three is a better number than four. "Five" "Six" If an ace is treated as 1, and we add the five dots on the dice, then maybe that counts? "Seven" "Eight" DALL-E is very into 8. "Nine" "Ten" "One hundred" Digits "0" "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "100" Cats "Zero cats" "One cat" "Two cats" "Three cats" "Four cats" "Five cats" "Six cats" "Seven cats" "Eight cats" "Nine cats" "Ten cats" "One hundred cats" Reroll-eight experiment I notice that DALL-E seems to be pretty reliable at generating 1-3 of something, and that it seems to prefer spelled-out numbers to digits. It was very successful at generating the number eight, which I suspected was due to the influence of images of magic 8-balls. It only managed to get eight cats once, maybe by accident. With my last 7 free credits, I decided to see if DALL-E had at least some sense of what it means to count to eight, as opposed to representing parts of images associated with the word "eight." I therefore decided to generate 7 panels of images of various objects, count the number of those objects contained in each image, plot the result, and see if it was at least centered around the number 8. The nouns were chosen with a random noun generator, and I selected the first 7 that seemed "easy to draw." The resulting prompts were "eight paintings," "eight hospitals," "eight poets," "eight historians," "eight speakers," "eight women," and "eight baskets." The way I classified the number of objects in a given drawing is in the caption. "Eight paintings" 15, 9, 9, 9 "Eight hospitals" 8, 9, 8, 5 "Eigh
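Only the tallies for the first two nouns survive in this excerpt, but the centering check the author describes is a one-liner on them:

```python
# Counted objects per image: "eight paintings" (15, 9, 9, 9)
# and "eight hospitals" (8, 9, 8, 5), as reported above.
counts = [15, 9, 9, 9, 8, 9, 8, 5]
mean = sum(counts) / len(counts)
print(mean)  # -> 9.0, close to (but above) the requested eight
```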
a0e7a447-d368-4f8d-afd6-79f651875f1c
trentmkelly/LessWrong-43k
LessWrong
Prior probabilities and statistical significance How does using priors affect the concept of statistical significance? The scientific convention is to use a 5% threshold for significance, no matter whether the hypothesis has been given a low or a high prior probability. If we momentarily disregard the fact that there might be general methodological issues with using statistical significance, how does the use of priors specifically affect the appropriateness of using statistical significance?
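One standard way to make the interaction concrete is to treat a significance test as a noisy binary signal and compute the posterior probability that the effect is real. The alpha/power framing below is a textbook simplification, not something taken from the question itself:

```python
def posterior_given_significant(prior, alpha=0.05, power=0.8):
    """P(effect is real | p < alpha): the test 'fires' with rate
    `power` on real effects and rate `alpha` on null effects."""
    p_significant = power * prior + alpha * (1 - prior)
    return power * prior / p_significant

# The same 5% threshold means very different things at different priors:
posterior_given_significant(0.5)   # -> ~0.94
posterior_given_significant(0.01)  # -> ~0.14
```

So a fixed 5% threshold is far less reassuring for hypotheses that started out implausible, which is the core of the question.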
fe20bb1e-042c-463f-8b32-726753404890
trentmkelly/LessWrong-43k
LessWrong
stop solving problems that have already been solved Crossposted from here. I attended a collegiate Bollywood-Fusion dance competition a couple of weekends ago. This setting was nothing new to me; I competed at many competitions back when I was in college, and after graduating college, I spent a couple of years volunteering with a group that organized a large competition and assisted other competition-organizing groups with their logistics and planning. Basically, through my experiences, I had a high degree of familiarity with how these events typically run. In my experience, the challenges associated with organizing these collegiate dance competitions are what I would call solved problems: problems for which already existing solutions have been adequately demonstrated. Each competition's core outcomes and processes are largely the same, and the organizing group’s responsibilities and tasks are generally static year after year. Even though Bollywood-Fusion dance is relatively new in the United States, there have probably been over 500 of these competitions run across the country over the last 10-15 years. You’d expect each of these competitions to run like well-oiled machines by now, but I was really surprised to see that many of the things I viewed as solved problems…weren’t. The emceeing and awards presentation was awkward. Teams were taking flagrantly unnecessary amounts of time setting up for their performances. Transitions between sets weren’t seamless. All this contributed to an experience that at times felt disjointed and clunky for the audience. And the worst part? The solutions to these problems weren’t secrets! They were widely known and well-documented. I had no idea why this was happening, but it drove me insane enough that I decided I never wanted to run into this issue again. So I built myself a framework — a playbook — designed to prevent myself (or any team I'm part of) from falling into the same trap. 
---------------------------------------- step 1: throw away roles, responsibilities, and titl
24b66e9c-ec6c-47a4-ac99-00e19080e83e
trentmkelly/LessWrong-43k
LessWrong
Exponentiation goes wrong first The great Catholic mathematician Edward Nelson does not believe in completed infinity, and does not believe that arithmetic is likely to be consistent.  These beliefs are partly motivated by his faith: he says arithmetic is a human invention, and compares believing (too strongly) in its consistency to idolatry.  He also has many sound mathematical insights in this direction -- I'll summarize one of them here.

http://www.mediafire.com/file/z3detbt6int7a56/warn.pdf

Nelson's arguments flow from the idea that, contra Kronecker, numbers are man-made.  He therefore does not expect inconsistencies to have consequences that play out in natural or divine processes.  For instance, he does not expect you to be able to count the dollars in a stack of 100 dollars and arrive at 99 dollars.  But it's been known for a long time that if one can prove any contradiction, then one can also prove that a stack of 100 dollars has no more than 99 dollars in it.  The way he resolves this is interesting.

The Peano axioms for the natural numbers are these:

1. Zero is a number
2. The successor of any number is a number
3. Zero is not the successor of any number
4. Two different numbers have two different successors
5. If a given property holds for zero, and if it holds for the successor of x whenever it holds for x, then it holds for all numbers.

Nelson rejects the fifth axiom, induction.  It's the most complicated of the axioms, but it has another thing going against it: it is the only one that seems like a claim that could be either true or false.  The first four axioms read like someone explaining the rules of a game, like how the pieces in chess move.  Induction is more similar to the fact that the bishop in chess can only move on half the squares -- this is a theorem about chess, not one of the rules.  Nelson believes that the fifth axiom needs to be, and cannot be, supported.

A common way to support induction is via the monologue: "It's true for zero.
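The induction axiom the post singles out can be written as a formal schema, with one instance for each property φ expressible in the language:

```latex
\bigl(\varphi(0) \;\land\; \forall x\,(\varphi(x) \rightarrow \varphi(Sx))\bigr) \;\rightarrow\; \forall x\,\varphi(x)
```

Unlike axioms 1–4, this is an infinite family of claims, one per formula φ, which fits the post's point that it reads as an assertion in need of support rather than a rule of the game.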
Since it's true for zero i
8f60c75b-5bb4-4ee3-95b8-976e1de9c64d
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
New reference standard on LLM Application security started by OWASP The [OWASP Top 10](https://owasp.org/www-project-top-ten/)[1] is probably the most well-known and recognised reference standard for the most critical web application security risks. This organisation has now started working on creating a similar list for Large Language Model (LLM) applications. I'm posting about it here since I think it would be beneficial for AI safety and alignment researchers to be involved, for two reasons:

1. To provide AI safety and alignment expertise to the security community and standardisation process.
2. To learn from the cybersecurity community, both about standardisation processes, since they have long experience in developing these kinds of standards, and about security mindset and vulnerabilities.

I have no idea how many people within the AI safety and alignment community actually know about this initiative, but I did not find any reference to it on the Alignment Forum or here on the EA Forum, so I thought I might as well post about it.

More information available here:

- <https://owasp.org/www-project-top-10-for-large-language-model-applications/>
- <https://github.com/OWASP/www-project-top-10-for-large-language-model-applications>

[1] More info about the Open Web Application Security Project here: https://en.wikipedia.org/wiki/OWASP
4d1aa97c-4c4c-4e51-b9ce-099a1ba7bd44
trentmkelly/LessWrong-43k
LessWrong
Brute-forcing the universe: a non-standard shot at diamond alignment This is an expanded version of my answer to application problem 2 for Nate Soares and Vivek Hebbar's SERI MATS stream. This Alignment idea is somehow non-standard. See Section C for a discussion of that, along with some general ideas on Alignment. The text is long because I’ve tried to include all details relevant to the discussion. Alignment idea: For any possible AGI design, run a physics simulation calculating how much diamond it ends up producing in the universe. Build the one maximizing it. 1. How do we get an (approximately) accurate Physics simulation, and the right world model? Build a simulation environment with the best current guess of the physical laws governing the evolution of the macroscopic[1] universe. Now, since we don't have a Theory of Everything (Quantum Mechanics and General Relativity are incompatible), we can't for instance model everything in terms of elementary particles. But we can model everything (although with a lower granularity) using the macroscopic (and approximate) laws of physics which we employ daily for dealing with molecules or electricity (including General Relativity) (this is not looking good for maximizing diamond, but see Problem 3 below). For instance, instead of specifying the quantum fluctuations governing molecule vibration, just implement some empirical facts that determine their behavior as correctly as possible. Of course, building this model (if possible) would require huge amounts of work from many physicists and engineers. Also the use of unbounded memory and compute for testing it accurately.[2] It is possible that the physicists get stuck, or can't put together a coherent macroscopic simulator without a paradigm change, but for these concerns see Problem 2. This simulator can run many different universes. We still need to specify which universe we're in (the "initial conditions" of our world model). For this, combine two approaches: 1. 
Have Physicists put in (approximate) known facts. For example, if we k
d53600fd-71c0-46f0-8eef-343b00008076
trentmkelly/LessWrong-43k
LessWrong
I’m an Artist. How Can an Artist Help? I get asked that a lot, and increasingly I wonder; does the person asking think I’m a non-profit president?  Do they imagine that when I was a kid, people asked me “what do you want to be when you grow up” and I said “I want to help people, so I’d like to run the United Way” and that the Singularity Institute is a logical step on that career path?  Do they think that when I was growing up people told me “You’re really good at business networking, so you should be part of the Bay Area tech scene” the way scientists get told “You’re really smart, so you should be a scientist”?  I think they sort-of do.  So anyway, Eliezer, representing the Singularity Institute, has traditionally told such people, to a first approximation, “sorry, an artist can’t help us”, and I’m writing this to try to elaborate on that statement a little.  I’d like to tell the artists out there that they aren’t alone.  Here goes.

“sorry, an artist can’t help us”
“sorry, a scientist can’t help us”
“sorry, a business-person can’t help us”
“sorry, a politician can’t help us”
“sorry, a hacker can’t help us”

It turns out that only a human can help us, because only they have general intelligence, so they can do anything.  When people identify with labels, they don’t primarily gain the ability to use the skills they have that they associate with those labels.  Rather, they lose the ability to freely use other skills, or generally, to behave non-stereotypically, as doing so would move them away from the central tendency in thing-space identified by those labels and thus make them a less good member of their category.

If you’re looking for a cause to fill in on an identity check-list, one of a list of socially prescribed things that people should be able to say about themselves in order to be ‘balanced’ or ‘interesting’, well sorry, ‘effective’ just isn’t on that list.
We’re happy to take your donations, because ‘big-picture futurist visionary’ is on the list and ‘caring’ about SIAI the way people
04cc3a0e-b7de-4733-85b5-38e82790e3f5
trentmkelly/LessWrong-43k
LessWrong
How do I use caffeine optimally? I just started a new job, and I'm really excited for the potential of drinking coffee, but I'm worried about dependency or other negative effects. I don't want to be upset or feel unwell if I don't have coffee, and I don't want any long-term negative health effects from drinking coffee. Is there any research on caffeine schedules or does anyone have any personal experience with using caffeine?
a39d2b24-de89-4cb9-a2bb-6daf91080817
StampyAI/alignment-research-dataset/arxiv
Arxiv
The Pragmatic Turn in Explainable Artificial Intelligence (XAI)

Minds and Machines, 29(3), 441-459
DOI: 10.1007/s11023-019-09502-w
Please quote the printed version

Andrés Páez
Universidad de los Andes
apaez@uniandes.edu.co

ABSTRACT

In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will lack a well-defined goal. Aside from providing a clearer objective for XAI, focusing on understanding also allows us to relax the factivity condition on explanation, which is impossible to fulfill in many machine learning models, and to focus instead on the pragmatic conditions that determine the best fit between a model and the methods and devices deployed to understand it. After an examination of the different types of understanding discussed in the philosophical and psychological literature, I conclude that interpretative or approximation models not only provide the best way to achieve the objectual understanding of a machine learning model, but are also a necessary condition to achieve post-hoc interpretability. This conclusion is partly based on the shortcomings of the purely functionalist approach to post-hoc interpretability that seems to be predominant in most recent literature.
1. Introduction

The main goal of Explainable Artificial Intelligence (XAI) has been variously described as a search for explainability, transparency and interpretability, for ways of validating the decision process of an opaque AI system and generating trust in the model and its predictive performance.[1] All of these goals remain underspecified in the literature and there are numerous proposals about which attributes make models interpretable. Instead of analyzing these goals and proposals piecemeal, the main contention of this paper is that the search for explainable, interpretable, trustworthy models and decisions[2] in AI must be reformulated in terms of the broader project of offering an account of understanding in AI. Intuitively, the purpose of providing an explanation or an interpretation of a model or a decision is to make it understandable or comprehensible to its stakeholders. But without a previous grasp of what it means to say that a human agent understands a decision or a model, the explanatory or interpretative strategies will lack a well-defined theoretical and practical goal. This paper provides a characterization of the theoretical goal of XAI by offering an analysis of human understanding in the context of machine learning in general, and of black box models in particular.

In recent years, there has been an increased interest in the notion of understanding among epistemologists (Pritchard 2014, Grimm 2018) and philosophers of science (de Regt et al. 2009). The interest in this notion has several sources. In epistemology, several authors realized that the conceptual analysis of understanding differs significantly from the traditional analysis of knowledge.

[1] For a survey of recent characterizations of the goals of XAI, see Lipton (2016), Doshi-Velez & Kim (2017), Samek et al. (2017), and Gilpin et al. (2019).
In particular, unlike knowledge, understanding need not be factive: not all the information on the basis of which a phenomenon is understood must be true. Understanding is also an epistemic achievement that some authors regard as more valuable than mere knowledge. It also seems to be immune to Gettier cases, it is transparent to the epistemic agent, and it has internalist conditions of success. In sum, understanding and knowledge seem to be entirely different concepts and it is implausible to conceive the former simply as a species of the latter.3 [Footnote 2: I will use decision as the general term to encompass outputs from AI models, such as predictions, categorizations, action selection, etc.] [Footnote 3: Needless to say, each of these differences has been the subject of great philosophical controversy. I am simply reporting some of the reasons that have been stated in the literature to motivate the analysis of understanding as an independent concept.] In the philosophy of science, the first philosophers of explanation (Hempel 1965; Salmon 1984) regarded the understanding provided by a scientific explanation as a pragmatic and psychological by-product that falls beyond the ken of a proper philosophical theory. In their view, once we have developed an adequate account of explanation, any remaining questions regarding the notion of understanding can be addressed from a psychological perspective. A recent interest in the role of models, simulations, and idealizations in science, and a closer examination of actual scientific practice, has revealed that scientific understanding can be achieved without the use of traditionally-defined scientific explanations, and that the simple possession of explanatory knowledge is often not sufficient for the working scientist’s understanding of a phenomenon. Scientific understanding thus seems to be a topic worth investigating in its own right. There are many aspects of this literature that are germane to XAI.
Here I will only focus on two main issues. The first one regards the relationship between explanation and understanding in the context of opaque machine learning models. While many authors defend the idea that there is no understanding without explanation, the impossibility of finding explanations, in the traditional sense of the term, for black box machine learning models should lead us to question the inseparability of these two concepts in the context of AI. The literature suggests alternative paths to achieve understanding, and it is worth investigating how these paths can be fruitfully adapted to understand opaque models and decisions. The second issue regards the nature of understanding itself. Are understanding the decision of a model and understanding the model that produced that decision two states that demand different accounts or can they be reduced to the same underlying cognitive processes and abilities? I will argue that post-hoc interpretability and model transparency correspond to different levels of the same type of understanding. There is, however, a different kind of understanding that stems from the functional or instrumental analysis of machine learning models. I will argue that functional understanding falls short in many respects of the stated goals of XAI. It should be noted that the notion of opacity in machine learning is itself in need of further specification. There are many types of machine learning models that are purposely designed as black boxes (e.g. deep neural networks and Support Vector Machines). Other methods, such as rule lists, linear regressions, simple naïve Bayes classifiers, and decision trees are often interpretable, but not always. “Sufficiently high dimensional models, unwieldy rule lists, and deep decision trees could all be considered less transparent than comparatively compact neural networks” (Lipton 2016, p. 5). 
Other relatively simple models will be opaque only to certain users who lack the required background knowledge to understand them. To simplify the object of analysis, in this paper I will only focus on the extreme case of models that are unambiguously designed as black boxes. Most of the results of this analysis can then be naturally extended to models and methods that are opaque only in certain cases or to specific stakeholders. Finally, given the variety of purposes of black box machine learning models, and the differences in background knowledge and interests of their stakeholders, there is no reason to believe that a single interpretative strategy will be equally successful in all cases. Designing interpretative models and tools will inevitably require taking into account the psychological and pragmatic aspects of explanation. The study of the cognitive aspects of interpretative models is in its infancy. It follows from the general outlook that I present in this paper that this area of research should receive more attention in the coming years. The essay is organized as follows. The next section examines the question of whether there are independent theoretical reasons to appeal to the notion of understanding in XAI or whether it will be sufficient, as the traditionalists claim, to develop the best possible account of explanation in AI and let understanding naturally emerge from it. I will argue that the connection between explanation and understanding in AI is not comparable to that same connection in the natural and social sciences. The disconnection arises from the impossibility, in most cases, of offering an explanation that fulfills the factivity condition. This will lead, in section 3, to a discussion about alternative paths to understanding that are not based on traditional explanations. I show how these alternative paths are exemplified in some recent attempts to find adequate methods and devices to understand opaque models and decisions.
In section 4, I analyze the types of understanding that emerge from these different avenues to understanding. This will require introducing a distinction between understanding-why, which prima facie is the type of understanding involved in post-hoc interpretability, and objectual understanding, which requires grasping the inner workings of a complex system such as an AI model. This section also addresses the functional understanding of AI systems. Using evidence from psychology, it will be possible to offer a nuanced analysis of the interconnections between these three possible ways of characterizing understanding in AI.

2. Why not settle for AI-explanations?

A great number of philosophers of science have argued that understanding is inextricably linked to explanation. For Salmon, a defender of the ontic conception of explanation, “understanding results from our ability to fashion scientific explanations” (1984, p. 259). In more recent times, Strevens has staunchly defended the idea that “explanation is essentially involved in scientific understanding” (2013, p. 510). Perhaps the strongest claim in this direction is made by Khalifa, who defends the reductionist thesis that “any philosophically relevant ideas about scientific understanding can be captured by philosophical ideas about the epistemology of scientific explanation without loss” (2012, p. 17). In the context of XAI, this thesis4 implies that understanding an AI model or decision is simply a question of finding an adequate explanation for it. But the implication holds only if scientific explanations and AI-explanations share a sufficient number of essential characteristics to be considered two species of the same genus. If they are, our task will be to find in AI-explanations the same features that enable scientific explanations to generate understanding.
However, in this section I will argue that explanations in the present stage of AI are incommensurable with the types of explanations discussed in the philosophy of science. My first task will be to clarify what I mean by an AI-explanation. The notion of explanation in what is often referred to as “Good Old-Fashioned AI” (GOFAI), that is, in symbolic, logic-based AI models, differs significantly from the present task of explaining opaque machine learning models. The function of an explanation in a logic-based system, either monotonic or nonmonotonic, is to support the addition of an input to a belief set or a database. For example, an update request “insert (φ)” can be achieved by finding some formula consistent with the database such that the union of the set of ground facts in the database and the formula yields φ as a logical consequence. In previous work (Páez 2009) I argued that this abductive task is at odds with the way in which explanation has historically been understood in the philosophy of science. I refer the reader to the paper for the relevant arguments. [Footnote 4: Here I will not evaluate the merits of this thesis in the philosophy of science. For a discussion, see the collection edited by De Regt, Leonelli and Eigner (2009).] My purpose here is to defend the same conclusion for the notion of explanation as it is being used in the field of computational intelligence in recent times. An important difference between explanation in logic-based models and in current machine learning models is that the explanandum is entirely different. In the former, as just mentioned, the goal is to justify an input. In the latter, it is to explain an output, generally a decision, or to provide explanatory information about the workings of the model that generated that output. The explanandum of an AI-explanation as it is currently conceived is thus similar to the outcome of a scientific experiment, or to the structure of a physical or social system.
A natural scientist and the stakeholder of a machine learning model would thus be searching for explanations for similar objects. But that is as far as the similarities go. In what follows I will present three fundamental reasons why it is misguided to make our understanding of machine learning models dependent on establishing an account of AI-explanations, even if we were to accept the claim that scientific understanding depends on devising bona fide scientific explanations. The first reason has to do with truth. An essential feature of explanations in science is their factivity (Hempel 1965), i.e., both the explanans and the explanandum must be true.5 [Footnote 5: More precisely, the explanans-statement and the explanandum-statement must be true. If one holds, following Lewis (1986) and Woodward (2003), that the relata of the explanation relation are particulars, i.e., things or events, the claim amounts to saying that the things or events occurring in both the explanans and the explanandum position exist or occur.] If one denies the factivity of explanation, the claim goes, one cannot avoid the conclusion that the Ptolemaic theory, the phlogiston theory, or the caloric theory, provided bona fide scientific explanations. An explanation of x must reveal, depending on which theory of explanation one adopts, either the true causal structure of x or the natural laws that determine x or its relationship with factors that make x more or less probable.6 [Footnote 6: This list is not meant to be exhaustive and it excludes pragmatic theories of explanation such as the ones defended by Achinstein (1983) and van Fraassen (1980). I have argued elsewhere (Páez 2006) that these theories offer an account of explanation that lacks any sort of objectivity.] All objectivist theories of explanation assume that researchers have epistemic access either to the inner workings of x or to the complete7 causal or probabilistic context that determines the properties of the explanandum. [Footnote 7: Salmon’s (1971) reference class rule, for example, requires the probabilistic (causal) context of a single event to be complete to avoid any epistemic relativity.] Without such epistemic access it would be impossible to reach true explanatory information about x. This kind of epistemic access is blocked in the case of opaque AI models. A general knowledge of the structure of a deep neural network will be insufficient to explain, in this traditional sense, either a specific decision or the actual computation that was made to generate it. Many types of black box models, like deep neural networks, are stochastic (non-deterministic). Randomness is introduced in data selection, training, and evaluation to help the learning algorithm be more robust and accurate.8 Examining the training set and all the weights, biases and structure of the network will not allow us to understand its specific decisions, and its predictive failures and successes cannot be traced back to particular causal paths in its hidden layers. To be sure, it is possible to give a true explanation of the general design and purpose of a black box model, but such an explanation will not be sufficient to explain specific decisions or to generate trust in the model. One of the main virtues of replacing explanation by understanding as the focus of analysis in XAI is that the factivity condition need not be satisfied. According to so-called moderate factivists (Kvanvig 2009b; Mizrahi 2012; Carter & Gordon 2016), not all the information on the basis of which something is understood must be true, only the central propositions. Other philosophers go even further and reject the factivity condition altogether.9 Elgin’s (2007) discussion of the role of models and idealizations allows that our understanding of some aspects of reality may be literally false.
Far from being an unfortunate expedient, idealizations and models are an essential and ineliminable component of our scientific understanding of the world; she calls them “felicitous falsehoods” (2004, p. 116). In section 3 I will explore Elgin’s idea in the context of our understanding of opaque models. I will argue that although the methods and artifacts used to understand an intelligent system and its decisions are not, and perhaps cannot be, entirely faithful to the model, this does not tell against them. On the contrary, they can afford indirect epistemic access to matters of fact that are otherwise humanly impossible to discern. [Footnote 8: I am grateful to an anonymous reviewer for pointing this out.] [Footnote 9: The relaxation of the factivity condition is often defended in the context of objectual understanding, but it remains controversial in the case of understanding why. I return to this distinction in section 4.] A second reason to shift our focus from explanation to understanding is the importance of taking into account the specific context, background knowledge, and interests of end-users and stakeholders of opaque models.10 In any field it is possible to establish a distinction between different levels of expertise and different levels of understanding depending on the depth of a person’s knowledge of a phenomenon. In the sciences, it is expected that the novice will become an expert by acquiring the required knowledge and skills. More importantly, scientific experts will be able to master the best possible explanations of the phenomena within their field of study. This situation is not replicated in the case of machine learning. The medical doctor or the parole officer who makes use of a black box model is not supposed to acquire the level of expertise of a computer scientist, and their respective level of understanding of any explanatory model of the opaque system will remain incomparable. This seems to be an element that has not always been kept in mind in XAI.
Many AI researchers build explanatory models for themselves, rather than for the intended users, a phenomenon that Miller et al. (2017) refer to as “the inmates running the asylum” (p. 36). The alternative they propose, and which I fully endorse, is to incorporate results from psychology and philosophy into XAI.11 [Footnote 11: See also De Graaf & Malle (2017), Miller (2019), and Mittelstadt et al. (2019).] It is necessary to explore a naturalistic approach to the way in which context and background knowledge mold an agent’s understanding of an interpretative model. Existing theories of how people formulate questions and how they select and evaluate answers should also inform the discussion (Miller 2019). A third advantage of focusing on the pragmatic elements of interpretative models is that we can obtain a better grasp of the relationship between explanation and trust. When using machine learning in high-stakes contexts such as medical diagnosis or parole decisions it is necessary to trust the individual decisions generated by the model. Several authors have argued that post-hoc interpretability, i.e., an explanation of the decision, is a necessary condition for trust (Kim 2015, Ribeiro et al. 2016). Additionally, of course, the system must have a very high score on an evaluation metric based on decisions and ground truths. [Footnote 10: De Graaf and Malle (2017) have also emphasized the importance of these pragmatic factors: “The success of an explanation therefore depends on several critical audience factors—assumptions, knowledge, and interests that an audience has when decoding the explanation” (p. 19).] Suppose that an opaque model has consistently shown a high degree of predictive accuracy and a user has been given a clear post-hoc explanation of its behavior. The user has the best possible understanding of the system, taking into account, of course, the epistemic limitations mentioned above.
But predictive reliability and a post-hoc explanation are not sufficient to generate trust. Trust does not depend exclusively on epistemic factors; it also depends on the interests, goals, resources, and degree of risk aversion of the stakeholders. Trust involves a decision to accept an output and act upon it. Different agents bound by different contextual factors will make different decisions on the basis of the same information. I will leave open the question of whether classical decision theory can provide an adequate analysis of trust in AI systems.12 But the important lesson to draw from the multidimensional character of trust is that there is no simple correlation between explanation and trust, and that an adequate analysis of trust requires taking into account contextual factors that can foster or hinder it. The reasons I have presented in this section recommend abandoning the traditional “explanationist” path according to which understanding can only be obtained via an explanation in any of the guises it has adopted in the philosophy of science. The next section will offer alternative ways to achieve understanding.

3. Alternative Paths to Understanding

Abandoning the necessary connection between explanation and understanding opens up several avenues of research that can lead to understanding the workings and decisions of opaque models. Implicit causal knowledge, analogical reasoning, and exemplars are obvious alternative paths to understanding. But so are models, idealizations, simulations, and thought experiments, which play important roles in scientific understanding despite being literally false representations of their objects. In a similar vein, the methods and devices used to make black box models understandable need not be propositionally-stated explanations and they need not be truthful representations of the models.
[Footnote 12: See Falcone & Castelfranchi (2001) for a critique of the use of decision theory to understand trust in virtual environments.] I will begin by presenting a few examples of how understanding can be achieved in the natural sciences without the use of explanations before moving to a discussion of how similar devices can be used, and have been used, in understanding AI models. Many philosophers, beginning with Aristotle and continuing with the defenders of causal explanations, have argued that understanding-why is simply knowledge of causes (Salmon 1984; Lewis 1986; Greco 2010; Grimm 2006, 2014).13 Naturally, causal explanations are the prime providers of knowledge of causes. But causal knowledge does not come exclusively from explanation. As Lipton points out, “much of empirical inquiry consists in activities–physical and intellectual–that generate causal information, activities such as observation, experimentation, manipulation, and inference. And these activities are distinct from the activity of giving and receiving explanations” (2009, p. 44). To be sure, the causal information generated by these activities can be given a propositional representation and can thus be transformed into explicit causal explanations. But Lipton argues that in many cases such activities generate causal information that remains as tacit knowledge, allowing us to perform epistemic and practical tasks. Such tacit causal knowledge comes primarily from images and physical models. An orrery or a video, for example, can provide better understanding of retrograde planetary motion than an explanation stated in propositional form. A subject might even be able to understand retrograde motion without being able to articulate such an explanation. Direct manipulation or tinkering of a causal system is an even more obvious source of implicit causal knowledge.
Adjusting a lever, a button or an input variable and observing its effects on other parts of a system is a way of beginning to understand how the system works. Manipulation also provides modal information about the possible states of a system. In fact, the ability to manipulate a system into new desired states should be seen as a sign of understanding. In other words, understanding requires the ability to think counterfactually (de Regt & Dieks 2005; Wilkenfeld 2013). [Footnote 13: Philosophers of science are much more inclined to accept this view than epistemologists, who have fiercely resisted it. See, for example, Zagzebski (2001), Kvanvig (2003), Elgin (2004), and Pritchard (2014). I do not have space to discuss the issue here, but from the text it should be clear that I side with the epistemologists.] Causal information, implicit or explicit, is not the only source of understanding. Consider analogical reasoning. Darwin (1860/1903) used an analogy between the domestic selection of animals and natural selection to argue for the latter. Although it is incomparable in many respects, artificial selection illuminates how the mechanism would work in a larger class (Lipton 2009, p. 51). Exemplification is another important avenue towards understanding. The examples in a logic textbook can show a student how the rules of natural deduction work. Her initial understanding of the rules will be tied to the examples, but it will gradually drift away as her ability to use the rules in new situations improves. When an item serves as an example, “it functions as a symbol that makes reference to some of the properties, patterns, or relations it instantiates” (Elgin 2017, p. 184). It can only display some of these features, downplaying or ignoring others. As the complexity of the item increases, the decision to emphasize or underscore some salient features over others will be determined by pragmatic reasons, such as the intended audience and use of the example.
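The manipulationist route to tacit causal knowledge described above can be made concrete. The sketch below uses a toy stand-in for an opaque system, with invented feature names; it is an illustration of the general idea, not of any method discussed in the paper. It nudges one input at a time and records the effect on the output, a crude version of the modal information that manipulation provides.

```python
# A minimal sensitivity probe. `toy_system` is a hypothetical stand-in for
# any opaque input-output system; the feature names are invented.

def toy_system(income, debt, age):
    """Opaque scoring function (assumed linear form, for illustration only)."""
    return 0.6 * income - 0.8 * debt + 0.1 * age

def sensitivity(system, baseline, delta=1.0):
    """Perturb each input in turn and record the change in the output."""
    base = system(*baseline)
    effects = {}
    for i, name in enumerate(("income", "debt", "age")):
        probe = list(baseline)
        probe[i] += delta
        effects[name] = system(*probe) - base
    return effects

effects = sensitivity(toy_system, [50.0, 20.0, 35.0])
print(effects)  # "debt" shows the largest-magnitude, and negative, effect
```

For a linear system the probe simply recovers the coefficients; for a nonlinear one it yields a local, baseline-dependent picture, which is precisely the sense in which manipulation provides modal rather than global information.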
The use of non-propositional representations such as diagrams, graphs, and maps presents another clear case of understanding without explanation. A subway map is never a faithful representation of the real train network. It alters the distance between stations and the exact location of the tunnels in order to make the network easy to understand, but it must include the correct number of lines, stations and intersections to be useful at all. It must be sufficiently accurate without being too accurate. Finally, models and idealizations play a similar role in science (Potochnik 2017). They simplify complex phenomena and sometimes the same phenomenon is represented by multiple, seemingly incongruous models. They afford epistemic access to features of the object that are otherwise difficult or impossible to discern. Models are not supposed to accurately represent the facts, but they must be objective. Models have to denote in some sense the facts they model. They “are representations of the things that they denote” (Elgin 2008, p. 77). The general relation between scientific models and their objects is a thorny issue that deserves a more detailed discussion than the one I can provide here, but one important aspect that must be noted is that the adequate level of “fit” between a model and its object is a pragmatic question. Many models are, in Elgin’s apt phrase, “true enough” of the phenomenon they denote:

This may be because the models are approximately true, or because they diverge from truth in irrelevant respects, or because the range of cases for which they are not true is a range of cases we do not care about, as for example when the model is inaccurate at the limit. Where a model is true enough, we do not go wrong if we think of the phenomena as displaying the features exemplified in the model. Obviously, whether such a representation is true enough is a contextual question.
A representation that is true enough for some purposes, or in some respects is not true enough for or in others (2008, p. 85).

Applications of all of the approaches mentioned above can be found in the XAI literature. It is important to bear in mind that many authors in the field refer to these alternative paths to understanding as “explanations,” a usage that threatens to trivialize the term. If whatever makes an opaque model or its decisions better understood is called an explanation, the term ceases to have any definitive meaning. My argument throughout the paper has only focused on the notion of explanation as it has been traditionally understood in the philosophy of science and epistemology (e.g., causal models, covering-law models, probabilistic approaches, etc.). It is in this sense that there are alternative sources of understanding. It is customary to distinguish between two different goals in XAI: understanding a decision, often called post-hoc interpretability, and understanding how the model functions, i.e., making the model transparent (Lipton 2016; Lepri et al. 2017; Mittelstadt et al. 2019). Exemplifications, analogies, and causal manipulation are often used in the former, while the use of models is more common in the latter. I will present some examples of the use of these techniques, and in the next section I will examine the kind of understanding they provide. The ultimate question I will try to answer is whether transparency and post-hoc interpretability aim at different types of understanding. The attempts to make a model transparent can focus on the model as a whole (simulatability), on its parameters (decomposability), or on its algorithms (algorithmic transparency) (Lipton 2016). A complete understanding of the model would thus allow a user to repeat (simulate) the computation process with a full understanding of the algorithm and an intuitive grasp of every part of the model.
Each of these aspects presents its own challenges, but my interest here is in the use of interpretative devices to provide an overall understanding of opaque models, i.e., models that are not designed to be fully understood. The most common way to make a black-box model as understandable as possible is through the use of proxy or interpretative models (Guidotti et al. 2018). Many of these models provide coarse approximations of how the system behaves over a restricted domain. The two most widely used classes of models are linear or gradient-based approximations, and decision trees (Mittelstadt et al. 2019). For the interpretative model to be useful, a user must know “over which domain a model is reliable and accurate, where it breaks down, and where its behavior is uncertain. If the recipient of a local approximation does not understand its limitations, at best it is not comprehensible, and at worst misleading” (Mittelstadt et al. 2019, p. 281). Oversimplified or misleading models also incur the risk of being perceived as deceitful, thereby undermining the user’s trust in the original model. Thus, the first desideratum of interpretative models is that they must be as faithful to the original model as possible and absolutely transparent about their limitations. Mittelstadt et al. (2019) argue that XAI should not focus on developing interpretative models because they are akin to scientific models, and therefore very different from “the types of scientific and ‘everyday’ explanations considered in philosophy, cognitive science, and psychology” (p. 279). My view is exactly the opposite. Since the notion of explanation discussed in the philosophy of science is inapplicable in the context of opaque machine learning models, and since I do not want to settle for a purely subjective sense of explanation, XAI should adopt any other methods and devices that provide objective understanding. 
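As a concrete illustration of the proxy-model idea, the following sketch fits the simplest possible interpretative model, a one-threshold decision stump, to the decisions of an opaque classifier over a restricted domain, and reports its fidelity there. The "black box" is an invented stand-in function, not a real learned model; everything else is elementary search.

```python
import math

def black_box(x):
    """Stand-in for an opaque classifier: accepts iff sin(x) > 0."""
    return 1 if math.sin(x) > 0 else 0

# Probe the black box over the domain of interest: [0, 6] in steps of 0.1.
inputs = [i / 10.0 for i in range(61)]
labels = [black_box(x) for x in inputs]

def fit_stump(xs, ys):
    """Pick the threshold and polarity that best reproduce the labels."""
    best_t, best_hi, best_agree = None, None, -1.0
    for t in xs:
        for hi in (True, False):            # accept above t, or at/below t
            preds = [(x > t) == hi for x in xs]
            agree = sum(p == bool(y) for p, y in zip(preds, ys)) / len(xs)
            if agree > best_agree:
                best_t, best_hi, best_agree = t, hi, agree
    return best_t, best_hi, best_agree

t, hi, fidelity = fit_stump(inputs, labels)
rule = f"accept iff x {'>' if hi else '<='} {t}"
print(f"surrogate rule: {rule}; fidelity on [0, 6]: {fidelity:.3f}")
```

On this toy domain the stump recovers the readable rule "accept iff x <= 3.1" with fidelity 60/61, so it is faithful but not perfectly so, and it says nothing about the black box outside the probed domain. This is exactly why the desideratum above matters: a user must know where the approximation is reliable and where it breaks down.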
Scientific models, suitably adapted to the intended users, offer an indirect14 path towards an objective understanding of a phenomenon. [Footnote 14: A direct understanding of a phenomenon would be factive, based on a literal description of the explanatory elements involved. It is in this sense that models offer an indirect path towards objective understanding.] We should therefore see the parallel between scientific models and interpretative models in a positive light. The fidelity desideratum for interpretative models has to be balanced against the desideratum of comprehensibility. There are very few empirical studies about which kinds of interpretative models are easier to understand. Huysmans et al. (2011), for example, present evidence that single-hit decision tables perform better than binary decision trees, propositional rules, and oblique rules in terms of accuracy, response time, and answer confidence for a set of problem-solving tasks involving credit scoring. This study is of limited use because it was done with extremely simple representation formats and the only participants were 51 graduate business students.15 It is necessary to undertake similar studies that also include linear regressions, simple naïve Bayes classifiers, and random forests.16 These interpretative models also have to be tested on a more diverse population with different levels of expertise (Doshi-Velez & Kim 2017). These types of empirical studies are essential for the purposes of XAI, and they have to be complemented with psychological studies of the formal and contextual factors that enhance understanding. As noted by Pazzani (2000), there is little understanding of the factors that foster or hinder interpretability in these cases, and of whether users prefer, for example, visualizations over textual representations.
The appropriateness of an interpretative model thus depends on three factors: obtaining the right fit between the interpretative model and the black box model in terms of accuracy and reliability, providing sufficient information about its limitations, and achieving an acceptable degree of comprehensibility for the intended user. While there may be some identifiable, permanent features of interpretative models that facilitate understanding, the choice of the best proxy method or artifact will also depend on who the intended users of the original system are. Their background knowledge, their levels of expertise, and the time available to them to understand the proxy model can vary widely. [Footnote 15: In Allahyari and Lavesson (2011), 100 non-expert users were asked to compare the understandability of decision trees and rule lists. The former method was deemed more understandable. Freitas (2014) examines the pros and cons of decision trees, classification rules, decision tables, nearest neighbors, and Bayesian network classifiers with respect to their interpretability, and discusses how to improve the comprehensibility of classification models in general. More recently, Fürnkranz et al. (2018) performed an experiment with 390 participants to question the idea that the likeliness that a user will accept a logical model such as rule sets as an explanation for a decision is determined by the simplicity of the model. Lage et al. (2019) also explore the complexities of rule sets to find features that make them more interpretable, while Piltaver et al. (2016) undertake a similar analysis in the case of classification trees. Another important aspect of this empirical line of research is the study of cognitive biases in the understanding of interpretable models. Kliegr et al. (2018) study the possible effects of biases on symbolic machine learning models.] [Footnote 16: As noted in the Introduction, none of these methods is intrinsically interpretable.]
This last aspect has been entirely neglected in the literature; not a single method reviewed by Guidotti et al. (2018) presents real experiments about the time required to understand an interpretative model.

Turning very briefly to post-hoc interpretability, we find in the literature several interpretative devices to understand a decision. In many cases, a sensitivity analysis provides a local, feature-specific, linear approximation of the model’s response. The result of the analysis consists of a list, a table, or a graphical representation of the main features that influenced a decision and their relative importance. Often, such devices allow a certain degree of causal manipulation that brings out feature interactions. This is the basis of the LIME model proposed by Ribeiro et al. (2016), a technique to offer functional explanations of the decisions of any machine learning classifier. To understand the behavior of the underlying model, the input is perturbed to see how the decisions change without worrying about the actual computation that produced them. The user can ask counterfactual questions about local changes and see the results in an intuitive way. Saliency maps offer a similar functional understanding of the model. A network is repeatedly tested with portions of the input occluded to create a map showing which parts of the data actually influence the network output (Zeiler & Fergus 2014; Lapuschkin et al. 2019). Caruana et al. (1999) argue that analogies and exemplars (prototypes) are a useful heuristic device. A model can report, for every new decision, other examples in the training set that the model considers to be most similar. The authors seek to use this method in clinical contexts, where doctors often refer to case studies to justify a course of action. The basic assumption made by case-based methods, such as k-nearest neighbor, is that similar inputs correlate with similar outputs.
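The perturbation logic shared by sensitivity analyses, LIME-style probing, and occlusion-based saliency maps can be sketched in a few lines. This is a toy illustration, not the actual LIME algorithm or any package’s API: the black box here is a hypothetical stand-in scoring function, and features are “occluded” by replacing them with a baseline value to see how much the output changes.

```python
# Toy illustration of perturbation-based, model-agnostic interpretation.
# The "black box" is a hypothetical stand-in; in practice it would be an
# opaque trained model queried only through its predictions.

def black_box(x):
    # Hypothetical opaque scorer: the interpreter never inspects this code.
    return 0.8 * x[0] - 0.3 * x[1] + 0.05 * x[2]

def occlusion_importance(f, x, baseline=0.0):
    """Estimate each feature's influence by replacing it with a baseline
    value and measuring the change in the model's output."""
    reference = f(x)
    importances = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline  # "occlude" feature i
        importances.append(abs(reference - f(occluded)))
    return importances

x = [1.0, 1.0, 1.0]
scores = occlusion_importance(black_box, x)
ranking = sorted(range(len(x)), key=lambda i: -scores[i])
print(ranking)  # -> [0, 1, 2]: features ordered by estimated influence
```

The same loop underlies occlusion saliency maps, where the “features” are image patches. Note that the result is purely functional: the user learns which inputs mattered for this decision, but nothing about the model’s internal computation is revealed.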
The methods look for the case in the training set, the prototype, that is most similar in terms of input features to the case under consideration. Another commonly used method, especially in interactions with autonomous agents, is to provide natural language explanations of a decision (McAuley & Leskovec 2013; Krening et al. 2016). These explanations state information about the most important features in a decision and come closer than any other method to the causal explanations used in science and everyday life. The difference, once again, is that these “explanations” are not factive, regardless of how plausible they appear. Ehsan et al. (2018) even suggest that textual explanations can be rationalizations: “AI rationalization is based on the observation that there are times when humans may not have full conscious access to reasons for their behavior and consequently may not give explanations that literally reveal how a decision was made. In these situations, it is more likely that humans create plausible explanations on the spot when pressed. However, we accept human-generated rationalizations as providing some lay insight into the mind of the other” (p. 81).

A common feature of many post-hoc interpretations is that they are model-agnostic. They do not even attempt to open the black box and they offer only a functional approach to the problem of explaining a decision. The cognitive achievement reached by the use of these devices seems to differ in great measure from the understanding provided by an interpretative model. In the last section of the paper I will tackle the question of whether it is possible to characterize different types of understanding in AI.

4. Types of Understanding in AI

On the basis of the methods described in the previous section, it is tempting to divide the understanding they provide into two different types. The first one would be associated with post-hoc interpretability.
This type is often called understanding-why, and in this case its object will be a specific decision of a model. In contrast, transparency seems to generate an objectual understanding of a model. The distinction between these two types of understanding has been widely discussed in epistemology. The question I will examine at the beginning of this section is whether this epistemological distinction can be defended in the present context. Epistemologists establish a distinction between understanding why something is the case, and understanding an object, a system, or a body of knowledge (Kvanvig 2003). It seems straightforward to say that the goal of transparency in machine learning can be understood in terms of objectual understanding. Consider the various ways in which this type of understanding has been described: According to Zagzebski, understanding “involves grasping relations of parts to other parts and perhaps the relation of parts to a whole” (2009, p. 144). For Grimm, the target of objectual understanding is a “system or structure … that has parts or elements that depend upon one another in various ways” (2011, p. 86). And Greco characterizes it as “knowledge of a system of dependence relations” (2012, p. 123). The interpretative models that we considered in the previous section all provide the kind of understanding described by these authors. Understanding why p, on the other hand, is not equivalent to simply knowing why p. Suppose the only thing a person knows about global warming is that it is caused, to a large extent, by an increase in the concentration of greenhouse gases. This is a claim the person has heard repeatedly in serious media outlets and scientific TV shows, but he has never stopped to think about the causal mechanisms involved. The person knows why the earth is warming, but this information is insufficient to understand why it is warming.
The person lacks, for example, the ability to answer a wide range of questions of the type what-if-things-had-been-different (Woodward 2003, p. 221). What would happen to global temperatures if all human activity were to cease? What would be the effect on global warming of a massive volcanic eruption similar in scale to the eruption of Krakatoa in 1883? These are the kinds of counterfactual scenarios commonly studied in climate research and modelling, which the common person is unable to understand. A complete understanding of global warming also involves the ability to make probability estimates of future scenarios based on current data. Notice that the ability to answer counterfactual questions and to make predictions depends to a large extent on an objectual understanding of the larger body of knowledge to which the specific object of understanding belongs. Without a basic understanding of the structure, chemistry, and behavior of the earth’s atmosphere, for example, a person will not be able to answer counterfactual questions or deliver probability estimates about global warming. It follows, as Grimm (2011) convincingly argues, that understanding-why is a variety of objectual understanding, but at a local level, and that there is no genuine distinction between the two types of understanding. The implication for machine learning is that understanding a decision requires some degree of objectual understanding of the model. Mittelstadt et al. (2019) seem to reach a similar conclusion: [A]t the moment, XAI generally avoids the challenges of testing and validating approximation models, or fully characterizing their domain. If these elements are well understood by the individual, models can offer more information than an explanation of a single decision or event.
Over the domain for which the model accurately maps onto the phenomena we are interested in, it can be used to answer ‘what if’ questions, for example “What would the outcome be if the data looked like this instead?” and to search for contrastive explanations, for example “How could I alter the data to get outcome X?” (p. 282).17 It is true that some of the post-hoc interpretability devices described in the previous section allow stakeholders to manipulate the parameters and observe the different decisions generated thereby. But this is not genuine counterfactual reasoning. By tinkering with the parameters, the stakeholders can only form functional generalizations with a very weak inductive base. True counterfactual reasoning is purely theoretical, based on knowledge about how the model works. Thus, if we take the ability to think counterfactually about a phenomenon as a sign that the agent understands it, as suggested by de Regt and Dieks (2005), understanding the decisions of a model requires some degree of objectual understanding. There is, nonetheless, an important difference between the two types of understanding under consideration. Virtually all epistemologists regard understanding-why as factive, while allowing that objectual understanding might not be entirely so. Pritchard, for example, gives the following example to show that understanding-why is factive: “Suppose that I believe that my house has burned down because of an act of vandalism, when it was in fact caused by faulty wiring. Do I understand why my house burned down? Clearly not” (2008, p. 8). In other words, according to Pritchard, without a true causal explanation there can be no understanding-why. But changing the example can weaken the intuitions that support this conclusion. Suppose an engineer is investigating the collapse of a bridge and uses Newtonian physics as the basis for his analysis. Strictly speaking, the explanation is based on a false theory, but it can hardly be argued that the engineer is a priori barred from understanding why the bridge collapsed. Or suppose an economist successfully explains a sudden rise in inflation using a macroeconomic model that, again, cannot be literally true (Reiss 2012). It thus seems that the factivity of understanding-why can only be defended in simple scenarios where a complete analysis of the relevant causal variables can be provided, but as soon as the context requires the use of theoretical tools such as idealizations and models, it becomes highly doubtful. Machine learning is precisely this kind of context. The use of arbitrary black-box functions to make decisions in machine learning makes it impossible to reach the causal knowledge necessary to provide a true causal explanation. The functions may be extremely complex and have an internal state composed of millions of interdependent values. It is the kind of context in which one can say that, in principle, it is impossible to satisfy the factivity condition for understanding-why. We thus have an argument to the effect that understanding-why and objectual understanding in machine learning cannot be entirely independent of each other, but rather, that the former is a localized variety of the latter.

17 A terminological clarification is in order. Mittelstadt et al. (2019) and other researchers in XAI use the phrase “contrastive explanations” to refer to counterfactuals. But these are two very different things. In philosophy, an explanation is contrastive if it answers the question “Why p rather than q?” instead of just “Why p?” In either case the explanation provided must be factual. To turn it into a counterfactual situation, the question must be changed to: “What changes in the world would have brought about q instead of p?” And the answer will be a hypothetical or counterfactual statement, not an explanation.
And we have an argument against the claim that understanding-why is always factive, which was supposed to be the most important property that distinguished both types of understanding. So even if prima facie the devices and methods used to provide transparency and post-hoc interpretability are different, it is safe to say, on the one hand, that understanding-why and objectual understanding are two different species of the same genus, and on the other, that there is no essential difference between them in terms of truth. There is, however, a third way of characterizing understanding in AI. Psychologists distinguish between the functional and the mechanistic understanding of an event. The former “relies on an appreciation for functions, goals, and purpose” while the latter “relies on an appreciation of parts, processes, and proximate causal mechanisms” (Lombrozo & Wilkenfeld forthcoming, p. 1). For example, an alarm clock beeps because the circuit connecting the buzzer to a power source has been completed (mechanistic understanding) and because its owner has set it to wake her up at a specific time (functional understanding). Lombrozo and Wilkenfeld argue that a subject can have a functional understanding of an event while being insensitive to mechanistic information. Lombrozo and Gwynne (2014) have shown that properties that are understood functionally, as opposed to mechanistically, are more likely to be generalized on the basis of shared functions. This means that a functional, as opposed to a mechanistic, understanding of the relation between an input and an output will make it easier for a user to inductively conclude that similar inputs produce similar decisions. There is also evidence that functional reasoning may be psychologically privileged in the sense that it is often favored and seems to be less cognitively demanding than mechanistic reasoning. Humans are “promiscuously teleological,” to use Kelemen’s (1999) apt description.
Finally, Lombrozo and Wilkenfeld also argue that functional and mechanistic understanding differ with regard to normative considerations. A functional understanding of a property of an object tells us what it is supposed to do, while understanding the mechanism that causes that property lacks this normative element. Functional understanding thus seems to be a different kind of understanding altogether, compared to objectual understanding and understanding-why. It involves different content, it supports different functions, and it has a distinctive phenomenology. If we take the decision of an opaque model as our object of understanding, a mechanistic understanding of it is equivalent to the local objectual understanding of the model, as I have argued above. Its functional understanding, on the other hand, would focus on the purpose of the model and the relation between its features and decisions. Functional reasoning about black box models allows for a more mechanism-independent form of reasoning. Aiming at this type of understanding will be appealing to those who want to offer model-neutral interpretability devices and focus only on covariations between inputs and decisions. However, it seems to me that aiming for functional understanding in XAI is, to a certain extent, to give up on the project of explaining why an AI model does what it does. It is to embrace the black box and trust it as one trusts a reliable oracle without understanding its mysterious ways. Less metaphorically, reliability by itself cannot usher in trust because of the dataset shift problem (Quinonero-Candela et al. 2009).
To have confidence that the model is really capturing the correct patterns in the target domain, and not just patterns valid in past data that will not be valid in future data, it is necessary to have a global or at least a local objectual understanding of the model.18 Unfortunately, most solutions to the dataset shift problem focus only on accuracy, ignoring model comprehensibility issues (Freitas 2014). Furthermore, methods designed to enhance the functional understanding of a model are also more likely to be tailored to user preferences and expectations, and thus prone to oversimplification and bias. Although the understanding and trust sought by XAI should always take into account a model’s stakeholders, it should not pursue these goals by offering misleadingly simple functional explanations that can result in unjustified or dangerous actions (Gilpin et al. 2019). Finally, a purely functional understanding of a model would also impede legal accountability and public responsibility for the decisions of the model. Guilt for an unexpected decision with harmful or detrimental consequences to the user cannot be assigned if the only information available is the previous predictive accuracy of the model. It is necessary to understand why the model produced the unexpected result, that is, to have a local objectual understanding of it. In sum, in this section I have argued that both transparency and post-hoc interpretability should be seen as more or less encompassing varieties of objectual understanding, and that the kind of understanding provided by the functional approach to a model offers an understanding of a different and more limited kind. In my view, it is the former kind that should interest researchers in XAI.

18 To be sure, there are many scenarios where both the owner and the user (but not the developer) of the model will be satisfied with its accurate decisions without feeling the need to have an objectual understanding of it. Think of the books recommended by Amazon or the movies suggested by Netflix using the simple rule: “If you liked x, you might like y.” As I argued in section 2, the relation between understanding and trust is always mediated by the interests, goals, resources, and degree of risk aversion of stakeholders. In these cases, the cost-benefit relation makes it unnecessary to make the additional effort of looking for mechanisms.

5. Conclusion

In this paper I have argued that the term ‘explanation’, as it is currently used in XAI, has no definitive meaning and shares none of the properties that have been traditionally attributed to explanations in epistemology and the philosophy of science. My suggestion has been to shift our focus from a blind search for explanatory devices and methods whose success is uncertain, to the study of the mental state that XAI researchers are aiming at, namely, an objective understanding of opaque machine learning models and their decisions. I have argued that the use of interpretative models is the best avenue available to obtain understanding, both in terms of transparency (understanding how the model works) and post-hoc interpretability (understanding a decision of the model). The current approaches to the latter rely on a purely functional understanding of models; however, leaving the black box entirely untouched seems to belie the purpose of XAI. It must be admitted that interpretative models can provide false assurances of comprehensibility. The task ahead for XAI is thus to fulfill the double desiderata of finding the right fit between the interpretative and the black box model, and to design interpretative models and devices that are easily understood by the intended users.
This latter task must be guided by an empirical investigation of the features of interpretative models that make them easier to understand for users with different backgrounds and levels of expertise. One of the possible areas of research is the comparative study of the complexity of rule sets, decision tables and trees, nearest neighbors, and Bayesian network classifiers with respect to their interpretability. XAI can also benefit from interdisciplinary work with designers to create user-friendly, accessible, and engaging interpretative tools and interfaces, in the same spirit as the legal design movement. Finally, an important aspect of this empirical line of research is the study of cognitive biases in the interpretation of models, especially in the context of autonomous systems with human-like interfaces.

References

Achinstein, P. (1983). The nature of explanation. New York: Oxford University Press.
Allahyari, H., & Lavesson, N. (2011). User-oriented assessment of classification model understandability. Proceedings of the 11th Scandinavian Conference on Artificial Intelligence. Amsterdam: IOS Press.
Carter, J. A., & Gordon, E. C. (2016). Objectual understanding, factivity and belief. In M. Grajner & P. Schmechtig (Eds.), Epistemic reasons, norms and goals (pp. 423-442). Berlin: De Gruyter.
Caruana, R., Kangarloo, H., Dionisio, J. D. N., Sinha, U., & Johnson, D. (1999). Case-based explanations of non-case-based learning methods. In Proceedings of the AMIA Symposium (p. 212). American Medical Informatics Association.
Castelfranchi, C., & Tan, Y.-H. (Eds.). (2001). Trust and deception in virtual societies (pp. 157-168). Dordrecht: Kluwer Academic Publisher.
Darwin, C. (1860/1903). Letter to Henslow, May 1860. In F. Darwin (Ed.), More letters of Charles Darwin, vol. I. New York: D. Appleton.
De Graaf, M. M., & Malle, B. F. (2017). How people explain action (and Autonomous Intelligent Systems should too). In AAAI Fall Symposium on Artificial Intelligence for Human-Robot Interaction (pp. 19-26). Palo Alto: The AAAI Press.
de Regt, H. W., & Dieks, D. (2005). A contextual approach to scientific understanding. Synthese, 144, 137-170.
de Regt, H. W., Leonelli, S., & Eigner, K. (Eds.). (2009). Scientific understanding: Philosophical perspectives. Pittsburgh: University of Pittsburgh Press.
Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.
Ehsan, U., Harrison, B., Chan, L., & Riedl, M. O. (2018). Rationalization: A neural machine translation approach to generating natural language explanations. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 81-87). New York: ACM.
Elgin, C. Z. (2004). True enough. Philosophical Issues, 14, 113-131.
Elgin, C. Z. (2007). Understanding and the facts. Philosophical Studies, 132, 33-42.
Elgin, C. Z. (2008). Exemplification, idealization, and scientific understanding. In M. Suárez (Ed.), Fictions in science: Philosophical essays on modelling and idealization (pp. 77-90). London: Routledge.
Elgin, C. Z. (2017). True enough. Cambridge: MIT Press.
Falcone, R., & Castelfranchi, C. (2001). Social trust: A cognitive approach. In C. Castelfranchi & Y.-H. Tan (Eds.), Trust and deception in virtual societies (pp. 55-90). Dordrecht: Springer.
Freitas, A. A. (2014). Comprehensible classification models: A position paper. ACM SIGKDD Explorations Newsletter, 15(1), 1-10.
Fürnkranz, J., Kliegr, T., & Paulheim, H. (2018). On cognitive preferences and the plausibility of rule-based models. arXiv preprint arXiv:1803.01316.
Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2019). Explaining explanations: An overview of interpretability of machine learning. arXiv preprint arXiv:1806.00069v3.
Greco, J. (2010). Achieving knowledge. Cambridge: Cambridge University Press.
Greco, J. (2012). Intellectual virtues and their place in philosophy. In C. Jäger & W. Löffler (Eds.), Epistemology: Contexts, values, disagreement: Proceedings of the 34th International Wittgenstein Symposium (pp. 117-130). Heusenstamm: Ontos.
Grimm, S. R. (2006). Is understanding a species of knowledge? British Journal for the Philosophy of Science, 57, 515-535.
Grimm, S. R. (2011). Understanding. In S. Bernecker & D. Pritchard (Eds.), The Routledge companion to epistemology (pp. 84-94). New York: Routledge.
Grimm, S. R. (2014). Understanding as knowledge of causes. In A. Fairweather (Ed.), Virtue epistemology naturalized: Bridges between virtue epistemology and philosophy of science. Dordrecht: Springer.
Grimm, S. R. (Ed.). (2018). Making sense of the world: New essays on the philosophy of understanding. New York: Oxford University Press.
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys (CSUR), 51(5), Article 93.
Hempel, C. G. (1965). Aspects of scientific explanation. New York: The Free Press.
Huysmans, J., Dejaeger, K., Mues, C., Vanthienen, J., & Baesens, B. (2011). An empirical evaluation of the comprehensibility of decision table, tree and rule based predictive models. Decision Support Systems, 51(1), 141-154.
Kelemen, D. (1999). Functions, goals, and intentions: Children’s teleological reasoning about objects. Trends in Cognitive Science, 12, 461-468.
Khalifa, K. (2012). Inaugurating understanding or repackaging explanation. Philosophy of Science, 79, 15-37.
Kim, B. (2015). Interactive and interpretable machine learning models for human machine collaboration. PhD thesis, Massachusetts Institute of Technology.
Kliegr, T., Bahník, Š., & Fürnkranz, J. (2018). A review of possible effects of cognitive biases on interpretation of rule-based machine learning models. arXiv preprint arXiv:1804.02969.
Krening, S., Harrison, B., Feigh, K., Isbell, C., Riedl, M., & Thomaz, A. (2016). Learning from explanations using sentiment and advice in RL. IEEE Transactions on Cognitive and Developmental Systems, 9(1), 44-55.
Kvanvig, J. (2003). The value of knowledge and the pursuit of understanding. New York: Cambridge University Press.
Kvanvig, J. (2009). Response to critics. In A. Haddock, A. Millar, & D. Pritchard (Eds.), Epistemic value (pp. 339-351). New York: Oxford University Press.
Lage, I., Chen, E., He, J., Narayanan, M., Kim, B., Gershman, S., & Doshi-Velez, F. (2019). An evaluation of the human-interpretability of explanation. arXiv preprint arXiv:1902.00006.
Lapuschkin, S., Wäldchen, S., Binder, A., Montavon, G., Samek, W., & Müller, K. R. (2019). Unmasking Clever Hans predictors and assessing what machines really learn. Nature Communications, 10(1), 1096.
Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2017). Fair, transparent, and accountable algorithmic decision-making processes: The premise, the proposed solutions, and the open challenges. Philosophy & Technology, 31, 611-627.
Lewis, D. K. (1986). Causal explanation. In Philosophical papers, vol. II (pp. 214-240). New York: Oxford University Press.
Lipton, P. (2009). Understanding without explanation. In H. W. de Regt, S. Leonelli, & K. Eigner (Eds.), Scientific understanding: Philosophical perspectives (pp. 43-63). Pittsburgh: University of Pittsburgh Press.
Lipton, Z. C. (2016). The mythos of model interpretability. arXiv preprint arXiv:1606.03490.
Lombrozo, T., & Gwynne, N. Z. (2014). Explanation and inference: Mechanistic and functional explanations guide property generalization. Frontiers in Human Neuroscience, 8, 700.
Lombrozo, T., & Wilkenfeld, D. A. (forthcoming). Mechanistic vs. functional understanding. In S. R. Grimm (Ed.), Varieties of understanding: New perspectives from philosophy, psychology, and theology. New York: Oxford University Press.
McAuley, J., & Leskovec, J. (2013). Hidden factors and hidden topics: Understanding rating dimensions with review text. In Proceedings of the 7th ACM Conference on Recommender Systems (pp. 165-172). New York: ACM.
Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-18.
Miller, T., Howe, P., & Sonenberg, L. (2017). Explainable AI: Beware of inmates running the asylum. In Proceedings of the IJCAI-17 Workshop on Explainable AI (XAI) (pp. 36-42). Accessed March 10, 2019. http://www.intelligentrobots.org/files/IJCAI2017/IJCAI-17_XAI_WS_Proceedings.pdf
Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 279-288). New York: ACM.
Mizrahi, M. (2012). Idealizations and scientific understanding. Philosophical Studies, 160, 237-252.
Páez, A. (2006). Explanations in K: An analysis of explanation as a belief revision operation. Oberhausen: Athena Verlag.
Páez, A. (2009). Artificial explanations: The epistemological interpretation of explanation in AI. Synthese, 170, 131-146.
Pazzani, M. (2000). Knowledge discovery from data? IEEE Intelligent Systems, 15(2), 10-13.
Piltaver, R., Luštrek, M., Gams, M., & Martinčić-Ipšić, S. (2016). What makes classification trees comprehensible? Expert Systems with Applications: An International Journal, 62(C), 333-346.
Potochnik, A. (2017). Idealization and the aims of science. Chicago: University of Chicago Press.
Pritchard, D. (2008). Knowing the answer, understanding and epistemic value. Grazer Philosophische Studien, 77, 325-339.
Pritchard, D. (2014). Knowledge and understanding. In A. Fairweather (Ed.), Virtue scientia: Bridges between virtue epistemology and philosophy of science (pp. 315-328). Dordrecht: Springer.
Quinonero-Candela, J., Sugiyama, M., Schwaighofer, A., & Lawrence, N. D. (Eds.). (2009). Dataset shift in machine learning. Cambridge: MIT Press.
Reiss, J. (2012). The explanation paradox. Journal of Economic Methodology, 19, 43-62.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135-1144). New York: ACM.
Salmon, W. C. (1971). Statistical explanation. In W. C. Salmon (Ed.), Statistical explanation and statistical relevance. Pittsburgh: Pittsburgh University Press.
Salmon, W. C. (1984). Scientific explanation and the causal structure of the world. Princeton: Princeton University Press.
Samek, W., Wiegand, T., & Müller, K. R. (2017). Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv preprint arXiv:1708.08296.
Strevens, M. (2013). No understanding without explanation. Studies in History and Philosophy of Science, 44, 510-515.
van Fraassen, B. (1980). The scientific image. Oxford: Clarendon Press.
Wilkenfeld, D. (2013). Understanding as representation manipulability. Synthese, 190, 997-1016.
Woodward, J. (2003). Making things happen: A theory of causal explanation. New York: Oxford University Press.
Zagzebski, L. (2001). Recovering understanding. In M. Steup (Ed.), Knowledge, truth, and duty: Essays on epistemic justification, responsibility, and virtue. New York: Oxford University Press.
Zagzebski, L. (2009). On epistemology. Belmont: Wadsworth.
Zeiler, M. D., & Fergus, R. (2014). Visualizing and understanding convolutional networks. In 13th European Conference on Computer Vision ECCV 2014 (pp. 818-833). Cham: Springer.
5f749eeb-b672-42b0-9f93-516a648d7c36
trentmkelly/LessWrong-43k
LessWrong
LINK: Where Humans Meet Machines: Intuition, Expertise and Learning (Brynjolfsson talks to Kahneman) None
7fca5170-baf7-4412-9e9b-2360f7857b7f
LDJnr/LessWrong-Amplify-Instruct
LessWrong
""All men are mortal. Socrates is a man. Therefore Socrates is mortal." — Aristotle(?) Socrates raised the glass of hemlock to his lips... "Do you suppose," asked one of the onlookers, "that even hemlock will not be enough to kill so wise and good a man?" "No," replied another bystander, a student of philosophy; "all men are mortal, and Socrates is a man; and if a mortal drink hemlock, surely he dies." "Well," said the onlooker, "what if it happens that Socrates isn't mortal?" "Nonsense," replied the student, a little sharply; "all men are mortal by definition; it is part of what we mean by the word 'man'. All men are mortal, Socrates is a man, therefore Socrates is mortal. It is not merely a guess, but a logical certainty." "I suppose that's right..." said the onlooker. "Oh, look, Socrates already drank the hemlock while we were talking." "Yes, he should be keeling over any minute now," said the student. And they waited, and they waited, and they waited... "Socrates appears not to be mortal," said the onlooker. "Then Socrates must not be a man," replied the student. "All men are mortal, Socrates is not mortal, therefore Socrates is not a man. And that is not merely a guess, but a logical certainty." The fundamental problem with arguing that things are true "by definition" is that you can't make reality go a different way by choosing a different definition. You could reason, perhaps, as follows: "All things I have observed which wear clothing, speak language, and use tools, have also shared certain other properties as well, such as breathing air and pumping red blood. The last thirty 'humans' belonging to this cluster, whom I observed to drink hemlock, soon fell over and stopped moving. Socrates wears a toga, speaks fluent ancient Greek, and drank hemlock from a cup. So I predict that Socrates will keel over in the next five minutes." But that would be mere guessing. It wouldn't be, y'know, absolutely and eternally certain. 
The Greek philosophers—like most prescientific philosophers—were rather fond of certainty. Luckily the Greek philosophers have a crushing rejoinder to your questioning. You have misunderstood the meaning of "All humans are mortal," they say. It is not a mere observation. It is part of the definition of the word "human". Mortality is one of several properties that are individually necessary, and together sufficient, to determine membership in the class "human". The statement "All humans are mortal" is a logically valid truth, absolutely unquestionable. And if Socrates is human, he must be mortal: it is a logical deduction, as certain as certain can be. But then we can never know for certain that Socrates is a "human" until after Socrates has been observed to be mortal. It does no good to observe that Socrates speaks fluent Greek, or that Socrates has red blood, or even that Socrates has human DNA. None of these characteristics are logically equivalent to mortality. You have to see him die before you can conclude that he was human. (And even then it's not infinitely certain. What if Socrates rises from the grave a night after you see him die? Or more realistically, what if Socrates is signed up for cryonics? If mortality is defined to mean finite lifespan, then you can never really know if someone was human, until you've observed to the end of eternity—just to make sure they don't come back. Or you could think you saw Socrates keel over, but it could be an illusion projected onto your eyes with a retinal scanner. Or maybe you just hallucinated the whole thing...) The problem with syllogisms is that they're always valid. "All humans are mortal; Socrates is human; therefore Socrates is mortal" is—if you treat it as a logical syllogism—logically valid within our own universe. It's also logically valid within neighboring Everett branches in which, due to a slightly different evolved biochemistry, hemlock is a delicious treat rather than a poison. 
And it's logically valid even in universes where Socrates never existed, or for that matter, where humans never existed. The Bayesian definition of evidence favoring a hypothesis is evidence which we are more likely to see if the hypothesis is true than if it is false. Observing that a syllogism is logically valid can never be evidence favoring any empirical proposition, because the syllogism will be logically valid whether that proposition is true or false. Syllogisms are valid in all possible worlds, and therefore, observing their validity never tells us anything about which possible world we actually live in. This doesn't mean that logic is useless—just that logic can only tell us that which, in some sense, we already know. But we do not always believe what we know. Is the number 29384209 prime? By virtue of how I define my decimal system and my axioms of arithmetic, I have already determined my answer to this question—but I do not know what my answer is yet, and I must do some logic to find out. Similarly, if I form the uncertain empirical generalization "Humans are vulnerable to hemlock", and the uncertain empirical guess "Socrates is human", logic can tell me that my previous guesses are predicting that Socrates will be vulnerable to hemlock. It's been suggested that we can view logical reasoning as resolving our uncertainty about impossible possible worlds—eliminating probability mass in logically impossible worlds which we did not know to be logically impossible. In this sense, logical argument can be treated as observation. But when you talk about an empirical prediction like "Socrates is going to keel over and stop breathing" or "Socrates is going to do fifty jumping jacks and then compete in the Olympics next year", that is a matter of possible worlds, not impossible possible worlds. 
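The Bayesian definition of evidence quoted above can be made concrete as an odds update: posterior odds equal prior odds times the likelihood ratio. A minimal sketch (the function name is mine, not the essay's):

```python
def update_odds(prior_odds, p_obs_given_h, p_obs_given_not_h):
    """Bayes' rule in odds form: multiply prior odds by the likelihood ratio."""
    return prior_odds * (p_obs_given_h / p_obs_given_not_h)

# An observation twice as likely if H is true: the odds double.
print(update_odds(1.0, 0.8, 0.4))  # 2.0

# A syllogism's validity is certain whether H is true or false,
# so the likelihood ratio is 1 and the odds don't move at all.
print(update_odds(1.0, 1.0, 1.0))  # 1.0
```

The second call is the essay's point in miniature: anything observed with probability 1 under every hypothesis can never favor one hypothesis over another.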
Logic can tell us which hypotheses match up to which observations, and it can tell us what these hypotheses predict for the future—it can bring old observations and previous guesses to bear on a new problem. But logic never flatly says, "Socrates will stop breathing now." Logic never dictates any empirical question; it never settles any real-world query which could, by any stretch of the imagination, go either way. Just remember the Litany Against Logic:
Logic stays true, wherever you may go,
So logic never tells you where you live."
The above is a chunk of text from a post created by a user on the website LessWrong. Elaborate on the concept of "impossible possible worlds" and how logical reasoning can help in resolving uncertainty about them. The concept of "impossible possible worlds" is a philosophical notion used to describe scenarios or situations that could be conceived of, but are logically inconsistent or contradictory, and therefore cannot exist. These are scenarios that, while we can imagine them, violate the laws of logic when we try to actualize them. For instance, consider the statement "This statement is false." If the statement is true, then it must be false, as it claims. But if it's false, then it must be true. This creates a logical paradox, an "impossible possible world". Logical reasoning plays a crucial role in resolving uncertainty about these "impossible possible worlds". It helps us to navigate these paradoxes and contradictions, to identify them, and to understand why they cannot exist. By applying the principles of logic, we can eliminate these "impossible possible worlds" from our considerations when we are trying to understand reality. For example, if we have the premise "All humans are mortal" and the premise "Socrates is a human", we can use logical reasoning to conclude "Socrates is mortal".
This helps us eliminate the "impossible possible world" where Socrates is a human but not mortal. In this way, logical reasoning acts as a tool to refine our understanding of what is possible and what is not, based on the information we have. It helps us to eliminate contradictions and maintain a consistent understanding of the world. However, it's important to note that logical reasoning is based on the premises we start with - if these are incorrect or incomplete, our conclusions may also be flawed. Given the inherent uncertainty in our premises and the potential for flawed conclusions, how can we apply logical reasoning to improve our understanding of the world, while also accounting for these limitations? Consider the role of empirical evidence, the process of hypothesis testing, and the balance between deductive and inductive reasoning in your response. Logical reasoning, despite its limitations, remains a powerful tool in our quest to understand the world. To mitigate the limitations and improve our understanding, we can incorporate empirical evidence, hypothesis testing, and balance deductive and inductive reasoning. Empirical evidence plays a crucial role in grounding our premises in observable reality. For instance, the premise "all humans are mortal" is based on countless observations of human lifespans throughout history. By relying on empirical evidence, we can ensure that our premises are not just theoretical constructs, but reflect the actual workings of the world. Hypothesis testing is another essential process that helps us refine our understanding. We can use logical reasoning to generate hypotheses based on our premises. For instance, if we have the premise "all swans are white," we might hypothesize that "since the bird in front of me is a swan, it must be white." If we then observe a black swan, we can reject our original hypothesis and revise our premises. 
This process of testing and revising helps us to continuously refine our understanding based on new evidence. Finally, balancing deductive and inductive reasoning allows us to navigate the inherent uncertainty in our premises. Deductive reasoning, which goes from general premises to specific conclusions, can be very certain but is limited by the accuracy of the premises. On the other hand, inductive reasoning, which goes from specific observations to general conclusions, can help us generate new premises based on empirical evidence, but the conclusions can only be probable, not certain. By using deductive reasoning, we can make predictions and draw conclusions based on our current understanding, and by using inductive reasoning, we can update and refine that understanding based on new evidence. In this way, the two types of reasoning can complement each other. In conclusion, while logical reasoning has its limitations, by grounding our premises in empirical evidence, continuously testing our hypotheses, and balancing deductive and inductive reasoning, we can use it to progressively improve our understanding of the world.
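The essay's arithmetic example ("Is the number 29384209 prime?") illustrates the point nicely: the answer is already fixed by the axioms of arithmetic, yet still takes computation to learn. A minimal trial-division sketch (this code is my illustration, not the author's):

```python
def is_prime(n):
    """Deterministic trial division: slow, but each step is a pure logical deduction."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# The answer was determined before we ran anything; running the loop
# merely resolves our own uncertainty about it.
print(is_prime(29384209))
```

Nothing about the world is learned by running this; we only learn what we had, in some sense, already committed to.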
34fd0b71-90ef-4424-bce1-ed8ae785e56c
trentmkelly/LessWrong-43k
LessWrong
Examples of Acausal Trade with an Alien Universe? I understand the basic concept of counterfactual trade, and I can understand some examples where it can make sense to trade acausally between two different histories of our familiar world; for example, in Embedded Agency, Scott G and Abram discuss a game where Alice receives a card, either a king or an ace, and can either reveal it to or hide it from Bob, who will guess a probability p that the card is an ace. Bob's incentives are such that he wants to assign maximum probability to the actual outcome, while Alice receives 100*(1 - p^2) points no matter what her card is, so she wants to do her best to make Bob think she has a king. In this example, one might naively think that if Alice has a king, she should reveal her card, so that Bob will guess p=0, earning Alice the maximum reward. However, under this policy, when Alice has an ace, Bob would be able to infer from Alice hiding the card that the card is an ace, so he would guess p=100%, and Alice would receive 0 points. If Alice instead decided to follow the policy of never revealing her card, Bob will be forced to always guess p = 0.5, earning Alice 100*(1 - 0.5^2) = 75 points each time; since Alice prefers a 100% chance of 75 points over a 50/50 shot at {0, 100}, Alice should actually hide her card even when she has a king. This is an example of counterfactual trade that makes sense to me. Similar logic applies to glomarization, and Newcomb's Dilemma also has an acausal trade flavor to it, which is similar to some dynamics that appear in normal human interactions. However, my question is: Do you know of a concrete example where acausal trade with a universe completely different from ours, with laws of physics different from ours, actually makes sense? 
I struggle to think of such an example, where it would both be harder for us to do something than it would be for an acausally correlated alien species to do it, and where there would exist such a species that would want to reward us for doing so.
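The arithmetic in the card-game example above can be checked directly. A short sketch, assuming a fair 50/50 deal of king vs. ace (the function name is mine):

```python
def alice_payoff(p):
    """Alice scores 100 * (1 - p^2), where p is Bob's stated probability of an ace."""
    return 100 * (1 - p ** 2)

# Policy 1: Alice reveals a king, hides an ace.
# Bob infers: revealed -> p = 0; hidden -> must be an ace -> p = 1.
reveal_policy = 0.5 * alice_payoff(0.0) + 0.5 * alice_payoff(1.0)

# Policy 2: Alice always hides, so Bob can only ever guess p = 0.5.
hide_policy = alice_payoff(0.5)

print(reveal_policy)  # 50.0
print(hide_policy)    # 75.0
```

The always-hide policy's guaranteed 75 beats the reveal policy's expected 50, matching the post's conclusion that Alice should hide even a king.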
50ac34f1-12e1-4545-bdf9-67bc646d70b6
trentmkelly/LessWrong-43k
LessWrong
Expectation = intention = setpoint When I was first learning about hypnosis, one of the things that was very confusing to me is how "expectations" relate to "intent". Some hypnotists would say "All suggestion is about expectation; if they expect to have an experience they will", and frame their inductions in terms of expectation (e.g. "Your eyelids will become heavy"). The problem with this is that "I don't think it's gonna work". Other hypnotists would avoid this issue entirely by saying "I don't care if you think it will work. Follow my instructions, and you will get the results regardless of what you believe" and then say things like "Make your eyelids heavy". The problem with this is that "I don't know to do that!", which would be avoided by saying "You don't have to 'do' anything; I'm telling you what  is going to happen, and your job is simply to notice when it does" -- back to square one. This dual path thing always confused me, because "Sure, we can equivocate and play sleight of mouth in order to trick people into getting the results[1], but which is it really?".  It's both.  They're the same thing.  In the same way that "running away from a tiger" and "running to catch a bus" are both the same object level behavior being used for a somewhat different purpose. Initially, it seemed weird to me that humans are designed so that expectation and intent are implemented in the same way. Strange design choice, given that it creates massive vulnerabilities, where if we can be tricked into expecting we won't be able to remember something, it becomes self fulfilling.[2] Turns out, it has to be this way - not just for humans, but for any system that tries to control anything. To see why, let's start with something simpler than a human mind. Consider a thermostat. When anthropomorphizing a thermostat, you might be tempted to model this controller as having "beliefs" and "desires". 
The output of the thermometer is the "belief" the system has about the temperature of the room, and the set point we d
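The thermostat analogy can be made concrete with a toy control loop. This is entirely my sketch, not from the post; it shows how one number, the set point, does double duty as both the "belief" target and the "intent":

```python
def thermostat_step(measured_temp, set_point, heater_on):
    """Bang-bang controller with a small hysteresis band around the set point."""
    if measured_temp < set_point - 0.5:
        return True   # too cold: turn the heater on
    if measured_temp > set_point + 0.5:
        return False  # too warm: turn the heater off
    return heater_on  # inside the band: keep the current state

# The "belief" (the measurement) is compared against the "desire" (the set point);
# the controller acts so as to make the belief match the desire.
print(thermostat_step(18.0, 21.0, False))  # True
```

Any system that controls anything has this shape: the same signal that encodes what it expects the world to look like is the signal it acts to bring about.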
61151da9-48ca-48b8-aba6-795f91b4ac7c
trentmkelly/LessWrong-43k
LessWrong
Meetup : Washington DC: Goals, Political Advocacy as Effective Altruism, and Stories Discussion article for the meetup : Washington DC: Goals, Political Advocacy as Effective Altruism, and Stories WHEN: 17 November 2013 03:00:00PM (-0500) WHERE: National Portrait Gallery, Washington, DC 20001, USA So, we came up with three ideas, none of which will necessarily take a long time. So we're doing them together. We will be: Talking about goals and trying to revive the goals spreadsheet Discussing several posts on Givewell's blog about political advocacy as effective altruism And (if people have them) telling Halloween stories, as per this thread (a bit late, I know, but why not?) (This is a duplicate: last meetup didn't get to these topics). Discussion article for the meetup : Washington DC: Goals, Political Advocacy as Effective Altruism, and Stories
3d1ddefc-426e-47e3-92be-902505290435
trentmkelly/LessWrong-43k
LessWrong
Meetup : Melbourne Social Meetup Discussion article for the meetup : Melbourne Social Meetup WHEN: 18 July 2014 06:30:00PM (+1000) WHERE: 9/239 Canterbury Rd, Blackburn The social meetup is on again for July! This time we will be meeting in Blackburn. Come along for a chat and a game or two. We'll be starting at 6:30pm, but we'll probably go pretty late. Drop by whenever is convenient. Snack foods will be provided and we'll probably get some pizza or something for dinner. If you'd like to join in carpooling, please discuss in the Facebook event (see the 'Less Wrong Melbourne' group) and we'll arrange something. Alternatively if you are taking public transport, jump off at Blackburn station (Lilydale/Belgrave line) and call me on 0421231789. Discussion article for the meetup : Melbourne Social Meetup
3dbdc6c8-6fa1-4a46-a4fb-60e38d730587
trentmkelly/LessWrong-43k
LessWrong
Rationality Quotes January - March 2017 Another quarter, another rationality quotes thread. The rules are:

* Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
* Post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
* Do not quote yourself.
* Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
fe0efd6f-7962-4d1e-8e2a-f5bebc586121
trentmkelly/LessWrong-43k
LessWrong
Naturalistic dualism

And it's all around us
as ghosted machines […]
And as much as I'd like to
believe there's a truth
about our illusion,
well, I've come to conclude
there's just nothing beyond it
the mind can perceive
except for the pictures
in the space in between.

Jan Blomqvist, The Space In Between

Introduction

I have been a cartesian dualist since I was a teenager. What else can you be, apart from a solipsist? [which, strictly carried out, coincides with pure realism]. For most of my life I used to think that physicalist dualism was the consensus of analytic philosophers (as it is among hard science fiction writers). Then, I decided to write a booklet of philosophy for economists, and I began with a literature review of what I expected to be the standard scientific-philosophical worldview, based on three pillars: subjectivism, emergentist physicalism, and epiphenomenalism. But I was surprised to discover that (at least explicitly) the “standard” philosophy was not so standard: it was named “Naturalistic Dualism”, and its canonical text is “The Conscious Mind”, authored by the Australian philosopher David Chalmers. I found the book correct, and probably definitive, but too long: Chalmers delves into every possible objection to this otherwise supremely clear philosophy, and that detracts from its persuasive power and clarity.

Freedom under Naturalistic Dualism

In the age of the internet, where the 10-page essay is king, I finally decided not to write a booklet but an essay, whose final version was published a few days ago in the Journal of NeuroPhilosophy (I want to thank the participants in the Less Wrong post that helped to improve the original article, and the editors of the Journal). In “Freedom under Naturalistic Dualism” you will find a short piece both introducing and, I hope, clarifying Naturalistic Dualism: First, I wanted to underline that epiphenomenalism is compatible with freedom. 
While our mind emerges from physically determined and autonomous matter, ou
9120d23d-e8aa-4a7b-b059-6b772ef3d27d
trentmkelly/LessWrong-43k
LessWrong
Freezer Contents This afternoon I defrosted our chest freezer, which I probably should have done about two years ago. Aside from having spent a long time dealing with frost buildup, another downside of waiting so long turned out to be that by the time I learned that the drain didn't work it was already out of warranty. We mostly use the chest freezer for three things:

* Buying things when they happen to be on sale.
* Having enough space that we can make large ~quarterly shops when we have access to a car.
* Storing leftovers and other cooked-ahead food.

The main tricky part is not losing track of things in it, since the closer it gets to full the harder it is to find anything. While I was reorganizing and refilling it I took inventory. This is what was in it:

* Dairy (60lb):
  * 3.4gal ice cream
  * 19lb butter
  * 6lb cheddar
  * 2.5lb shredded mozzarella
  * 8oz cream cheese
* Raw meat (48lb):
  * 15lb stew beef
  * 11lb ground beef
  * 8lb bacon
  * 7lb sweet italian sausage
  * 5.7lb chicken thighs
  * 1pt chicken for soup
* Ready to defrost and eat (26lb):
  * 7lb hot dogs
  * 6lb ravioli
  * 5lb chicken nuggets
  * 2.8lb fish sticks
  * 1.8lb frozen pizza
  * 1lb cooked mussels
  * 1lb shredded ham
  * 9oz par-baked bread
  * 9oz hotdog buns
  * 2 salmon patties
* Vegan stuff (19lb):
  * 5.3lb tofu
  * 3.5lb vegan cheddar shreds
  * 3pt vegan ice cream
  * 2.6lb vegan sausage
  * 2lb earth balance sticks
  * 18oz vegan ice cream sandwiches
  * 1lb tempeh
  * 9oz vegan breakfast sausage
* Fruit and veg (16lb):
  * 8lb chopped onions
  * 3lb mango
  * 2.5lb raspberries
  * 2lb peas
  * 12oz spinach
* Frozen leftovers (13lb):
  * 2pt vegan pasta sauce
  * 2pt beef stew
  * 2pt vegan beans
  * 2lb beef and onions
  * 2lb BBQ pork
  * 1pt vegan peanut ginger soup
  * 1lb vegan cardamom saffron sauce
  * 1pt unlabeled sauce
* Other (12lb):
  * 5.4lb puff pastry
  * 1qt bacon grease
  * 2lb shortening
  * one pie crust
432bd8f7-b0bc-4c97-9453-3ff2b3a97bfd
trentmkelly/LessWrong-43k
LessWrong
A Critique of AI Alignment Pessimism Epistemic status: Exploratory. I have read Superintelligence, the Sequences and some other articles on AI risk, and am looking to take things back to basics. This article sums up where I'm at right now. I welcome feedback on the points people disagree with, requests for clarification, and resources (or summaries, or counterpoints) on why you might think differently. Stipulated: It is trivial to build a computer that lacks the capacity to escape confinement and destroy the world. It is not trivial to ensure that nobody else builds a computer that will.

Four factors

As far as I can tell, there are four limiting factors on an artificial intelligence that might prevent it from optimizing us all to death (or worse).

1. Hardware limitations on its cognitive power
2. The set of data available to it, from which it might build a model of physical reality and/or human minds
3. Hardware limitations on its ability to affect and influence its environment
4. The optimization process it uses to determine what outcome to aim for and what approaches to that end are worth exploring

These limiting factors are stronger taken together than they are apart, but are they strong enough that we can rely on them to protect us? We will look at how each of these limiting factors works and the difficulty of utilizing each one for AI safety purposes.

Cognitive power

Starting with hardware limits on cognitive power, my understanding is that it is theoretically easy to limit a computer's hardware so that it cannot develop a sufficiently detailed model of reality that would help it engineer its way out of containment while hiding its schemes from those who can monitor its internal processes. Whether the computer would be useful for its intended purpose is another matter. There are multiple angles to exploit this limiting factor.
For example, we could restrict the cognitive power of an AI by giving it very low processing power and/or very little storage space.  We could re
00b92b13-ed38-408c-aebb-0275e33332c8
trentmkelly/LessWrong-43k
LessWrong
Renaming "Frontpage" We're about to revamp the frontpage design (you can see a hint of what's to come on the new /allPosts page). While we're at it, I'd like to fix a longstanding problem, which is that "frontpage posts" is a fairly confusing concept for the site to rely on. "Frontpage posts" are posts that the mods promote because they:

* Avoid arguing to persuade and build social momentum. Instead they stay focused on the explanation (and exploration) of ideas.
* Avoid particular hotbutton issues that are likely to spawn demon threads.
* Avoid discussion of the site itself, and the surrounding community.

The main benefit a post has for being frontpage is that more users will see it by default (whereas personal blogposts require you to actively turn on a setting to display them). There are two problems with the name "frontpage":

1. It's confusing. Since you can display personal blogposts on the literal site frontpage (and there are contexts other than the literal frontpage where you might want to see "frontpage" posts, such as on the allposts page), it's a bit weird to use that word to convey a bunch of criteria.
2. It's a little too harsh-seeming to not get promoted to frontpage. I don't think "frontpage" should mean "everything that's good and virtuous to talk about." I do think it's good to err on the side of writing posts that meet the frontpage criteria, but I think of this as a default rather than the obviously-correct-way-everything-should-be.

This post explores what frontpage is trying to do, ways in which we might modify it, and ultimately (for now) answers the question of "what can we change the frontpage name to, to locally improve the current situation."

----------------------------------------

What is "Frontpage" Trying to Do? Some values I think the "frontpage mechanic" provides include:

1. Disincentivize discourse that is more stressful or time wasting than it needs to be.
2. Retain LessWrong's strong foundation of epistemic clarity.
Rationali
763fdf38-da18-4f14-94f9-0eb497ceb120
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"We used to make land. We built long wharves for docking ships, and then over time filled in the areas between them. Later we built up mudflats wholesale to make even larger areas. Here's a map of Boston showing how much of the land wasn't previously dry: (Map reproduction courtesy of the Norman B. Leventhal Map & Education Center at the Boston Public Library) In expensive areas, converting wetlands and shallow water into usable land is a very good thing on balance, and we should start doing it again. To take a specific example, we should make land out of the San Francisco Bay, at least South of the Dumbarton Bridge: This is about 50mi2, a bit bigger than San Francisco. This would be enough new central land to bring rents down dramatically across the region. It can be built to a higher density than SF, because no one is having their neighborhood Manhattanized. Millions of people could live there. So, ok, let's address some likely objections: This would be an environmental disaster. Some of that area is a wildlife refuge, and all of it should be protected. The world is very large, and cities are a very small portion of it. The land we set aside for animals should be outside of cities, where far more land is available at far less impact to people. Sprawl has a much larger impact on wildlife than infill, and allowing people to live closer in is the most powerful way to address sprawl. Additionally, sprawl leads to much higher carbon emissions through less efficient transportation. While development of the Bay would be harmful to the specific animals that live there today, it would be better for animals (and people) overall. The Bay is beautiful and this would ruin it. This part of the Bay is primarily industrial salt ponds. This is just a few miles from a major fault line, and made land can liquefy in earthquakes. You do need to take fill into account to build in an earthquake-safe way, but modern engineering is well up to the task. Traffic would be even worse. 
The biggest contribution to traffic in the Bay Area is that people aren't allowed to live where the jobs are. The second biggest is that BART doesn't have enough coverage to make living without a car practical in most of the area. This would help with both of these, since this project would allow millions of people to live closer in and would easily fund massive subway expansion. Wait, how many people are you saying would live there? Here's SF's density in the 2010 census: Relatively dense portions of the city have a density of ~40k people per square mile, which would be ~2M people over this 50mi2 area. At a density of ~80k people per square mile, something like NYC's East Village, this could be ~4M people. Much higher densities are possible but not a good idea. This would undo decades of work aimed at preserving the Bay and restoring its wetlands. Yes. Sea-level rise means we shouldn't be building more in low-lying areas. Building dikes to keep the water out is very practical. A third of the Netherlands is below sea level, with most of that expansion happening before modern technology. By decreasing the amount of coastline in the Bay this project would make it easier to prevent flooding caused by sea-level rise. Didn't someone already propose this decades ago? The Reber Plan of the 1940s was a similar large project planned farther North, primarily for the East Bay. It was intended to both make land and create freshwater lakes, and testing with a scale-model showed major issues. This plan is much simpler, and more similar to past successful land reclamation projects. There's not enough water for the people we already have; what will people drink? There's plenty of water already, we just need to prioritize people drinking it over crops, which would happen naturally if we priced it. But even without that, desalination is cheap enough that a person's daily water use would be a matter of cents. Is this a serious proposal? Yes. We should do this. 
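The density arithmetic quoted above is easy to check; a minimal sketch:

```python
# People per square mile times reclaimed area (~50 mi^2), as in the post.
area_sq_mi = 50
at_sf_density = 40_000 * area_sq_mi            # denser SF neighborhoods
at_east_village_density = 80_000 * area_sq_mi  # ~NYC East Village density

print(at_sf_density)            # 2000000
print(at_east_village_density)  # 4000000
```

Both figures match the post's estimates of roughly 2M and 4M residents for the reclaimed area.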
(I previously suggested this kind of infill development with Boston's Fort Point Channel.) Comment via: facebook" What you see above is a segment from a LessWrong user's post. Can you break this down into the top 5 most important points for me?

1. Land Reclamation: The author proposes land reclamation, specifically in the San Francisco Bay area, as a solution to the region's housing crisis. They argue that this would create more space for housing, potentially lowering rents and enabling more people to live closer to their workplaces.
2. Environmental Impact: The author acknowledges potential environmental concerns but argues that the overall impact would be positive. They suggest that urban sprawl has a larger impact on wildlife and that creating more urban living space could reduce this sprawl and associated carbon emissions.
3. Engineering and Safety: The author addresses concerns about earthquake safety, stating that modern engineering is capable of building on reclaimed land in an earthquake-safe way. They also propose building dikes to mitigate the risk of sea-level rise.
4. Infrastructure and Population Density: The author suggests that the reclaimed land could support a high population density, similar to dense parts of San Francisco or New York City. They argue that this could ease traffic problems in the Bay Area and fund the expansion of public transportation systems.
5. Water Supply: Lastly, the author discusses the water supply, stating that there is sufficient water if it is prioritized for human consumption over crops. They also mention the possibility of using desalination as a cost-effective solution.

Discuss the potential socio-economic implications of land reclamation in the San Francisco Bay area, considering factors such as displacement of current residents, changes in property values, and the impact on local businesses. Land reclamation in the San Francisco Bay area could have significant socio-economic implications.

1.
Displacement of Current Residents: Depending on the specifics of the reclamation project, some residents may be displaced. This could potentially lead to social unrest or legal challenges. However, if the reclaimed land is used to create affordable housing, it could also provide new homes for people who are currently struggling with the high cost of living in the area.
2. Changes in Property Values: The creation of new land could potentially lower property values in the region by increasing the supply of land and housing. This could benefit people who are currently priced out of the housing market, but it could also lead to financial losses for existing property owners. On the other hand, if the reclaimed land is developed into a desirable residential area, it could increase property values in surrounding areas.
3. Impact on Local Businesses: The construction phase of the reclamation project could stimulate the local economy by creating jobs. Once the new land is developed, it could attract new businesses and residents, further boosting the local economy. However, if the project leads to a decrease in property values, this could negatively impact businesses that rely on the wealth of local residents.
4. Changes in Demographics: The reclamation project could potentially alter the demographics of the area. If the new land is used for high-end housing, it could attract wealthier residents, leading to gentrification. If it's used for affordable housing, it could attract a more diverse range of residents.
5. Infrastructure and Public Services: The increase in population density could put a strain on existing infrastructure and public services, necessitating further investment. This could have implications for local taxes and the overall cost of living in the area.
Overall, the socio-economic implications of land reclamation in the San Francisco Bay area would depend on a wide range of factors, including the specifics of the reclamation project, how the new land is used, and how the project is managed. Considering the potential socio-economic implications discussed, how might local and state governments effectively manage a land reclamation project in the San Francisco Bay area to maximize benefits and minimize negative impacts? Include considerations of policy, public engagement, and economic strategies in your response. Managing a land reclamation project in the San Francisco Bay area would require careful planning and strategic implementation by local and state governments. Here are some potential strategies:

1. Policy: Governments should establish clear policies regarding the use of reclaimed land. This could include zoning regulations to ensure a mix of residential, commercial, and public spaces. Policies could also be put in place to ensure a certain percentage of housing is affordable, to prevent gentrification and ensure a diverse range of residents can benefit from the project.
2. Public Engagement: Governments should actively engage with the public throughout the project. This could include public consultations to gather input on the project plans, regular updates on the project's progress, and opportunities for residents to voice any concerns or suggestions. This would help to ensure the project meets the needs and expectations of the community, and could also help to build public support for the project.
3. Economic Strategies: Governments could use a range of economic strategies to manage the project. This could include public-private partnerships to fund the project, tax incentives to attract businesses to the area, and financial support for residents who may be displaced by the project.
Governments could also invest in infrastructure and public services to support the increased population density, which could in turn stimulate economic growth. 4. Environmental Considerations: Given the potential environmental impact of land reclamation, governments should also implement policies to minimize harm. This could include conducting thorough environmental impact assessments, implementing measures to protect local wildlife, and ensuring the project complies with all relevant environmental regulations. 5. Long-term Planning: Finally, governments should take a long-term view of the project. This means considering not just the immediate benefits of the project, but also how it will impact the region in the future. This could involve planning for potential sea-level rise, considering how the project will impact traffic and transportation in the region, and ensuring the project is sustainable in the long term. By carefully managing these aspects, local and state governments can maximize the benefits of a land reclamation project in the San Francisco Bay area, while minimizing any potential negative impacts.
6f96b0c9-f8cf-4028-b5fb-2ffffafb78f1
trentmkelly/LessWrong-43k
LessWrong
0 And 1 Are Not Probabilities One, two, and three are all integers, and so is negative four. If you keep counting up, or keep counting down, you’re bound to encounter a whole lot more integers. You will not, however, encounter anything called “positive infinity” or “negative infinity,” so these are not integers. Positive and negative infinity are not integers, but rather special symbols for talking about the behavior of integers. People sometimes say something like, “5 + infinity = infinity,” because if you start at 5 and keep counting up without ever stopping, you’ll get higher and higher numbers without limit. But it doesn’t follow from this that “infinity - infinity = 5.” You can’t count up from 0 without ever stopping, and then count down without ever stopping, and then find yourself at 5 when you’re done. From this we can see that infinity is not only not-an-integer, it doesn’t even behave like an integer. If you unwisely try to mix up infinities with integers, you’ll need all sorts of special new inconsistent-seeming behaviors which you don’t need for 1, 2, 3 and other actual integers. Even though infinity isn’t an integer, you don’t have to worry about being left at a loss for numbers. Although people have seen five sheep, millions of grains of sand, and septillions of atoms, no one has ever counted an infinity of anything. The same with continuous quantities—people have measured dust specks a millimeter across, animals a meter across, cities kilometers across, and galaxies thousands of lightyears across, but no one has ever measured anything an infinity across. In the real world, you don’t need a whole lot of infinity.1 In the usual way of writing probabilities, probabilities are between 0 and 1. A coin might have a probability of 0.5 of coming up tails, or the weatherman might assign probability 0.9 to rain tomorrow. This isn’t the only way of writing probabilities, though. 
For example, you can transform probabilities into odds via the transformation O = (P/(1 - P)). So a probabili
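As a small numerical illustration of the odds transformation (my sketch, not from the essay itself): middling probabilities map to finite odds, but the odds diverge as p approaches 1, which is the sense in which 1 (and, by symmetry, 0) behaves like an infinity rather than like an ordinary probability.

```python
def odds(p):
    """Convert a probability p into odds O = p / (1 - p)."""
    return p / (1 - p)

print(odds(0.5))    # odds of 1, i.e. 1:1
print(odds(0.9))    # roughly 9, i.e. 9:1
# As p -> 1 the odds grow without bound: p = 1 would demand infinite
# odds, and p = 0 would demand odds of exactly 0 (certainty against).
print(odds(0.999))
```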
db95940a-4146-4c1c-a74f-861a0e874cfc
trentmkelly/LessWrong-43k
LessWrong
What would you do with a financial safety net? In the open thread, moridinamael hypothesized that LWers would be willing to take more risks in order to become rich if they had a financial safety net. This seems like an idea worth exploring further.   What would you do if you had a financial safety net (maybe a year's worth of living expenses) to fall back on if your venture failed?
99699250-d36a-4d06-a2ca-d66aa7608c80
trentmkelly/LessWrong-43k
LessWrong
My take on agent foundations: formalizing metaphilosophical competence I have some rough intuitions about the purpose of MIRI's agent foundations agenda, and I'd like to share them here. (Note: I have not discussed these with MIRI, and these should not be taken to be representative of MIRI's views.) I think there's a common misconception that the goal of agent foundations is to try building an AGI architected with a decision theory module, a logical induction module, etc. In my mind, this is completely not the point, and my intuitions say that approach is doomed to fail. I interpret agent foundations as being more about providing formal specifications of metaphilosophical competence, to: * directly extend our understanding of metaphilosophy, by adding conceptual clarity to important notions we only have fuzzy understandings of. (Will this agent fall into epistemic pits? Are its actions low-impact? Will it avoid catastrophes?) As an analogy, formally defining mathematical proofs constituted significant progress in our understanding of mathematical logic and mathematical philosophy. * allow us to formally verify whether a computational process will satisfy desirable metaphilosophical properties, like those mentioned in the above parenthetical. (It seems perfectly fine for these processes to be built out of illegible components, like deep neural nets—while that makes them harder to inspect, it doesn't preclude us from making useful formal statements about them. For example, in ALBA, it would help us make formal guarantees that distilled agents remain aligned.) I want to explore logical induction as a case study. I think the important part about logical induction is the logical induction criterion, not the algorithm implementing it. I've heard the implementation criticized for being computationally intractable, but I see its primary purpose as showing the logical induction criterion to be satisfiable at all. 
This elevates the logical induction criterion over all the other loose collections of desiderata that may or may not be satisfi
f5acd74c-da39-41d7-9ade-21d73fb29816
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Comments on Manheim's "What's in a Pause?" I agree with David Manheim's post at a high level. I especially agree that a pause on large training runs is needed, that “We absolutely cannot delay responding”, and that we should be focusing on a pause mediated by “a multilateral agreement centered on countries and international corporations”. I also agree that if we can’t respond to the fire today, we should at least be moving fast to get a “sprinkler system”. The basic reason we need a (long) pause, from my perspective, is that we are radically unprepared on a technical level for smarter-than-human AI. We have little notion of how to make such systems reliable or safe, and we’ll predictably have very little time to figure this out once smarter-than-human AI is here, before the technology proliferates and causes human extinction. We need far, far more time to begin building up an alignment field and to develop less opaque approaches to AI, if we’re to have a realistic chance of surviving the transition to smarter-than-human AI systems. My take on AI risk is similar to Eliezer Yudkowsky’s, as expressed in [his piece in TIME](https://www.lesswrong.com/posts/oM9pEezyCb4dCsuKq/pausing-ai-developments-isn-t-enough-we-need-to-shut-it-all-1) and in the [policy agenda](https://twitter.com/ESYudkowsky/status/1674143949026631681) he outlined. I think we should be placing more focus on the human extinction and disempowerment risks posed by AGI, and should be putting a heavy focus on the arguments for that position and the [reasonably widespread extinction fears](https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html) among ML professionals. I have disagreements with some of the specific statements in the post, though in many cases I’m unsure of exactly what Manheim’s view is, so the disagreement might turn out to be non-substantive. 
In the interest of checking my understanding and laying out a few more of my views for discussion, I’ll respond to these below.[[1]](https://docs.google.com/document/d/e/2PACX-1vRgyHjxQg6pq0JhQ2R5Dz6jritz_d8Yql51XaLNrPJjgOG_r4UliuYMad-eeWFJFkQcnA5yizsVgJYO/pub#ftnt1) >  So the question of whether to stop and how to do so depends on the details of the proposal - but these seem absent from most of the discussion. > > This is not apparent to me. I think it would take a pretty unusual proposal in order for me to prefer the status quo over it, assuming the proposal actually pauses progress toward smarter-than-human AI. It’s important to get this right, and the details matter. But if a proposal would actually *work* then I’m not picky about the additional implementation details, because there’s an awful lot at stake, and “actually working” is already an extremely high bar. > An immediate, temporary pause isn’t currently possible to monitor, much less enforce, even if it were likely that some or most parties would agree. > > A voluntary and temporary moratorium still seems like an obviously good idea to me; it just doesn't go far enough, on its own, to macroscopically increase our odds of surviving AGI. But small movements in the right direction are still worthwhile. > Similarly, a single company, or country announcing a unilateral halt to building advanced models is not credible without assurances, > >   “Not credible” sounds too strong here, though maybe I’m misunderstanding your claim. Scientists have voluntarily imposed restrictions on their own research in the past (e.g., Asilomar), and I don’t think this led to widespread deception. Countries have banned dangerous-but-profitable inventions without pursuing those inventions in secret. I don't think it would be that hard for many companies or countries to convince me that they're not building advanced models. 
It might be hard for me to (for example) get to 95% confidence that DeepMind has suspended frontier AI development, merely on DeepMind's say-so; but 75% confidence seems fairly easy to me, if their say-so is concrete and detailed enough. (Obviously *some* people will pursue such research in secret, somewhere in the world, given the opportunity. If we rely purely on organizations’ say-so, then *eventually* this will get us killed. If that’s all you mean by “not credible”, then I agree.) > and is likely both ineffective at addressing the broader race dynamics > > If current industry leaders suspended operations, then this would “address the broader race dynamics” in the sense that it would be a very positive step in the right direction. This could potentially buy us years of additional time to develop and implement a global, internationally enforced pause. It doesn't "address the broader race dynamics" in the sense of instantly saving the world, though. A few years (or even months) of delay could prove decisive, but if so its decisiveness will certainly hinge on whether the world *uses* that extra time to implement a pause. > and differentially advantages the least responsible actors. > > To a first approximation, I don't think this matters. I don't think the future looks any brighter if the most responsible orgs develop AGI first than if the least responsible ones do. The most responsible orgs might successfully avoid destroying the world *themselves* — while not being able to safely utilize AGI to address the proliferation of AGI tech. But in that case they're not helping the world any more than they would have by just shutting down, which is a fool-proof way to not destroy the world yourself. > **What Does a Moratorium Include?** > > There is at least widespread agreement on many things that aren’t and wouldn’t be included. Current systems aren’t going to be withdrawn - any ban would be targeted to systems more dangerous than those that exist. 
> > “Targeted” maybe suggests more precision than may be possible. It’s very hard to predict in advance which systems will be existentially dangerous, and algorithmic progress means that a given compute threshold may be very-likely-safe today while being plausibly unsafe tomorrow. Regarding rolling back current systems: I think some people at Conjecture have given arguments for rolling back GPT-4, on the basis that we don't yet know what scaffolding we can get out of GPT-4, nor what dangerous insights can be learned by gaining a better grasp of how GPT-4 works internally. This doesn’t seem important enough to me to make it a focus, but a rollback does seem like the kind of policy that would be adopted (or at least be under serious consideration) in a generally well-run world that was seriously grappling with the risk that a GPT-5 or a GPT-8 might get us all killed. Regardless, if Conjecture staff’s views are relevant then it can’t be said that there’s full consensus here. The thing I care more about is leaving open that it might be necessary to ban systems at the same scale as GPT-4 at a future date; we can expect algorithms to get more efficient in the future, and it’s hard to predict what will be technologically possible multiple years in the future, which is an argument for some conservatism (with everyone’s lives at risk). > We’re not talking about banning academic research using current models, and no ban would stop research to make future systems safer, assuming that the research itself does not involve building dangerous systems. > > If we end up solving the alignment problem at all, then I expect some alignment research to eventually yield significant capabilities insights. (See Nate Soares’ [If interpretability research goes well, it may get dangerous](https://www.lesswrong.com/posts/BinkknLBYxskMXuME/if-interpretability-research-goes-well-it-may-get-dangerous).) 
On the current margin, I think it's net-positive to pursue the most promising alignment research moonshots; but in the long run we’d definitely need to be asking about how capabilities-synergistic different alignment research directions are, rather than giving a permanent free pass to all research that’s useful for alignment. And I think we should definitely be preparing for that now, rather than treating algorithmic progress as nonexistent or alignment and capabilities research as disjoint categories. > However, I don’t think there’s a concrete proposal to temporarily or permanently pause that I could support - we don’t have clear criteria, we don’t have buy-in from the actors that is needed to make this work, and we don’t have a reasonable way to monitor, much less enforce, any agreement. > > As a rule, I don't think it's a good idea to withhold support for policies on the basis that they lack "buy-in" from others. The general policy "only support things once lots of others have publicly supported them" often prevents good ideas from beginning to gain traction, and locks Overton windows in place. I'd instead usually advise people to state their actual beliefs and their rough preference ordering over policy options (including unrealistic-but-great ones). Then we can talk about feasibility and compromise from a standpoint of understanding everyone's actual views. Part of why I recommend this is that I think any policy that prevents human extinction will need to be pretty extreme and novel. If we limit ourselves to what's obviously politically feasible today, then I think we're preemptively choosing death; we need to take some risks and get more ambitious in order to have any shot at all. (This is not to say that all small incremental progress is useless, or that everything needs to happen overnight. 
But a major part of how smaller marginal progress gets parlayed into sufficient progress is via individuals continuously discussing what they think is needed even though it's currently outside the Overton window, throughout the process of iterating and building on past successes.) > Yes, companies could voluntarily pause AI development for 6 months, which could be a valuable signal. > > It could also slow progress toward smarter-than-human AI for some number of months, which is useful in its own right. Time is needed to implement effective policy responses, and even more time would be needed to find a solution to the alignment problem. > (Or would be so if we didn’t think it would be a smokescreen for 'keep doing everything and delay releases slightly.') > > It sounds like you're much more cynical about this than I am? I'd be very happy to hear concrete commitments from ML organizations to pause development, and I think they should be encouraged to do so, even though it's not sufficient on its own. Lying happens, but I don't think it's universal, especially when it would require a conspiracy between large numbers of people to cover up a very clear and concrete public lie. (Obviously if the *stated* commitment is very vague or clearly insufficient, then that’s another story.) > And acting too soon is costly > > Acting too late destroys all of the value in the future. Is there a commensurate cost to acting too quickly? (I’ll assume for now that you don't think there is one, and are just acknowledging nonzero cost.) > Just like a fire in the basement won’t yet burn people in the attic, AI that exists today does not pose immediate existential risks[2] to humanity - but it’s doing significant damage already, and if you ignore the growing risks, further damage quickly becomes unavoidable. 
> > This seems like a weak case for acting now, since it's vulnerable to the obvious response "AI today is doing significant damage, but also producing significant benefits, which very likely outweigh the damage." The real reason to act now is that future systems will likely disempower and kill humanity, we don't know the threshold at which that will happen (but there's a fair bit of probability on e.g. 'the next 5 years', and quite a lot on 'the next 15 years'), and it may take years of work to develop and implement an adequate policy response.   *This post is part of AI Pause Debate Week. Please see* [*this sequence*](https://forum.effectivealtruism.org/s/vw6tX5SyvTwMeSxJk) *for other posts in the debate.* 1. **[^](#fnreffkqx29lbrce)** Thanks to Nate Soares for reviewing this post and giving some feedback.
cf1af560-6b1c-4f27-a788-b27670796c89
trentmkelly/LessWrong-43k
LessWrong
A wild theist platonist appears, to ask about the path I recognize the title could be more informative. At the same time I believe it says what is important. I believe in a deity, I believe in mathematical entities in the same way.  The community of LessWrong (henceforth: LessWrong) is deeply interesting to me, appearing as a semi-organized atheist, reductionist community. LessWrong seems very interested in promoting rationality, which I applaud. The effort does seem scattered, though, and this is the reason I post. One has Eliezer's website with some interesting posts. The same of this community. The community links to some posts when you are coming for the first time into it, and you also have a filter for top posts. One has the blog. And recently, the center for modern rationality (on the same page as Harry Potter fanfiction about rationality). The point being there is no defined roadmap to go from AIC (average irrational chump, to make an analogy to Game - which also seems to come up around here quite a bit) to RA (again, rationality artist). I write this post to maybe generate a discussion on how the efforts could be concentrated and a new direction taken. Should the creation of the Center for Modern Rationality envision this same concentration, this post may and should be disregarded. If it does not, then I leave it to your consideration.   Hang.
d39368ed-c4f9-47c5-bbfa-edbb8ede0a94
trentmkelly/LessWrong-43k
LessWrong
Meetup : Seattle Secular Solstice Discussion article for the meetup : Seattle Secular Solstice WHEN: 10 December 2016 04:00:00PM (-0800) WHERE: 114 Alaskan Way S, Seattle, Washington 98104 A night to gather and celebrate humanity, warmth, knowledge and progress, the solstice is an annual tradition full of songs, stories, laughter, light and beauty. This is an opportunity for Seattleites to come together for an experience that makes us feel more connected to community, our world, and what it means to be alive. The beginning activities will be focused on fun and celebration, getting to know each other and playing games. The rest of the evening will follow the arc of the solstice itself - thematically moving from light to dark to light again. It will begin with bright and festive energy while we focus on the ingenuity that allowed humanity to turn the winter from a season historically known for fear and deprivation into a time of promise, plenty and warmth. See Facebook event for more details: https://www.facebook.com/events/887737281326963/
7379a1a2-2264-447c-823a-7d8db4cfb097
trentmkelly/LessWrong-43k
LessWrong
Implications of simulators In this post, I: * Explain the idea of language models as simulators in a way I find compelling. * Introduce the notion of separate simulator selectors and simulation infrastructure within a model, and provide some evidence for this. * Discuss what the theory means about the agency of present and future LLMs. * Provide examples of "simulatorception" and "simulator verse jumping." * Give some suggestions for alignment researchers. Epistemic status: I’m not very confident about any of these claims, and I wouldn’t call myself extremely well versed in all discussions of language models as simulators on LessWrong. But I wrote some ideas out for a class paper so I thought they might be worth sharing, particularly if I can inject some independent ideas by virtue of me having thought about this on my own. I’m very interested in hearing criticisms. Language Models Are Simulators Language models are built to predict human text. Does this make them mere pattern-matchers? Stochastic parrots? While these descriptions have their merits, the best-performing explanation is that language models are simulators. By a simulator, I mean a model that has the ability to predict arbitrary phenomena similar to the phenomena that gave rise to its training data – not just the training data itself. This is not the first work to suggest that present day language models are simulators. Janus has argued for this conception, and David Chalmers suggested that language models are “chameleons” that can inhabit different personalities (note: I’m deliberately not overly anchoring on Janus's post here, since I’ve been thinking about this in a slightly different way and started doing so before I read that post). As we will see, many researchers are also implicitly and sometimes explicitly treating language models as simulators. 
The training data used by LLMs was written by humans, who have beliefs, goals, communicative intent, world models, and every other property associated with intelligent,
b7e466e9-a185-4c20-8ca8-f09ea820f158
trentmkelly/LessWrong-43k
LessWrong
"Playing to Win" John F. Rizzo is an expert on losing. However, if you want to win, you would do better to seek advice from an expert on winning. David Sirlin is such an expert, a renowned Street Fighter player and game designer. He wrote a series of articles with the title "Playing to Win", about playing competitive games at a high level, which were so popular that he expanded them into a book. You can either read it for free online (donations are appreciated) or purchase a dead tree edition. Any further summary would simply be redundant when you could simply read Sirlin's own words, so here is the link: http://www.sirlin.net/ptw
45df6853-f5d8-410f-adea-9ce096d53d95
trentmkelly/LessWrong-43k
LessWrong
The Critical Rationalist View on Artificial Intelligence Critical Rationalism (CR) is being discussed on some threads here at Less Wrong (e.g., here, here, and here). It is something that Critical Rationalists such as myself think contributors to Less Wrong need to understand much better. Critical Rationalists claim that CR is the only viable fully-fledged epistemology known. They claim that current attempts to specify a Bayesian/Inductivist epistemology are not only incomplete but cannot work at all. The purpose of this post is not to argue these claims in depth but to summarize the Critical Rationalist view on AI and also how this speaks to things like the Friendly AI Problem. Some of the ideas here may conflict with ideas you think are true, but understand that these ideas have been worked on by some of the smartest people on the planet, both now and in the past. They deserve careful consideration, not a drive past. Less Wrong says it is one of the urgent problems of the world that progress is made on AI. If smart people in the know are saying that CR is needed to make that progress, and if you are an AI researcher who ignores them, then you are not taking the AI urgency problem seriously. Universal Knowledge Creators Critical Rationalism [1] says that human beings are universal knowledge creators. This means we can create any knowledge which it is possible to create. As Karl Popper first realized, the way we do this is by guessing ideas and by using criticism to find errors in our guesses. Our guesses may be wrong, in which case we try to make better guesses in the light of what we know from the criticisms so far. The criticisms themselves can be criticized and we can and do change those. All of this constitutes an evolutionary process. Like biological evolution, it is an example of evolution in action. This process is fallible: guaranteed certain knowledge is not possible because we can never know how an error might be exposed in the future. 
The best we can do is accept a guessed idea which has withstood all kno
7935d993-f859-4c90-b751-8088de5abb60
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Article Review: Google's AlphaTensor

[Cross-posted from my Substack at <https://aizi.substack.com/p/article-review-googles-alphatensor>]

[![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F4e34d44f-e6f9-4443-8784-29b0fe10c292_497x377.gif)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F4e34d44f-e6f9-4443-8784-29b0fe10c292_497x377.gif)

*How AlphaTensor sees matrices, probably*

Last week, Google announced AlphaTensor, “the first artificial intelligence (AI) system for discovering novel, efficient, and provably correct algorithms for fundamental tasks such as matrix multiplication”. In this post, I want to rephrase their [press release](https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor) and Nature article in terms that make sense to me (and hopefully you, the reader), and share some thoughts on whether this is significant and holds any AI safety lessons.

Background: What are Tensors?
-----------------------------

Tensors are everywhere in our world: from press releases by Google, to Nature articles by Google, to AlphaTensor, the name of the agent trained by Google. But what *are* tensors? Grids of numbers. Tensors are a grid of numbers. A tensor with two axes[[1]](#fnpbt7htoku6f) is a rectangular grid of numbers, a.k.a. a matrix, and a tensor with one axis is just a vector. Python programmers will be familiar with tensors as ndarrays. You can add tensors of the same size componentwise, and multiply them by scalars, so they form a vector space. You can make tensors more complicated if you want, but “grids of numbers” is all you need to know for this paper. 
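The “grids of numbers” view can be made concrete with a tiny NumPy sketch (my illustration, not from the article):

```python
import numpy as np

# A tensor with one axis is a vector; two axes give a matrix.
v = np.array([1.0, 2.0, 3.0])            # shape (3,)
m = np.array([[1.0, 2.0],
              [3.0, 4.0]])               # shape (2, 2)

# Same-shaped tensors add componentwise and scale by scalars,
# which is exactly what makes them a vector space.
assert np.array_equal(m + m, 2 * m)
assert v.ndim == 1 and m.ndim == 2
```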
The goal of Google’s AI agent is to find fast algorithms for multiplication of matrices. I’ll pretend we’re working with just n-by-n square matrices, but the agents in the article are trained to find algorithms for n-by-m times m-by-p matrix multiplication for n,m,p≤5.
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: 
MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') 
format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} . Matrix multiplication is one of the most common operations performed by a computer, but it can be slow, especially for large matrices. The speed of these algorithms are measured in multiplications, since that makes up the bulk of the runtime for any of these algorithms (addition is relatively fast). The intro-to-linear-algebra method for matrix multiplication takes n3 multiplications to multiply n-by-n matrices, but its been known since the 1969 Strassen algorithm that you can save multiplications by doing more additions (Strassen’s original algorithm multiplies 2-by-2s in 7 multiplications instead of 8, and a variant called Strassen-square multiplies n-by-ns in roughly n2.8 multiplications[[2]](#fntyx1387epho)). So, what does matrix multiplication have to do with tensors? The first insight used in the paper is to think of matrix multiplication as a bilinear operator on Mn, the set of n-by-n matrices. That is, matrix multiplication is a function m(A,B)=AB which is linear in the first component and the second component. And just as linear transformations can be uniquely encoded as matrices (2-axis tensors)[[3]](#fnoo0rvk0sig), bilinear transformations can be uniquely encoded as 3-axis tensors. Stop. If that previous sentence makes sense to you, or you’re happy to accept it as a black box, skip the rest of the paragraph. Otherwise, here are the mathematical details of how: Mn as a vector space has a basis of size n2, consisting of the matrices {Bi,j:1≤i,j≤n} which are 0 everywhere except for a 1 in the (i,j)th component. Just as a linear transformation is determined by where it sends basis elements, bilinear transformations are determined by where they send pairs of basis elements, so a bilinear transformation g is uniquely determined by the values {g(Bi,j,Bk,l):1≤i,j,k,l≤n}. 
Each g(B_{i,j}, B_{k,l}) is itself a matrix in M_n, so it can be uniquely written as a linear combination of the basis elements {B_{p,q}}. Thus for the n^2 choices of B_{i,j} and the n^2 choices of B_{k,l}, we can uniquely represent g(B_{i,j}, B_{k,l}) by the n^2 numbers which are the coefficients of the {B_{p,q} : 1 ≤ p,q ≤ n}. Putting those numbers in a logically organized grid, we’d have an n^2-by-n^2-by-n^2 grid of numbers, i.e. a 3-axis tensor! Alternatively, we can reshape that 3-axis tensor and think of it as a 6-axis n-by-n-by-n-by-n-by-n-by-n tensor, where the number in the (i, j, k, l, p, q)-th position is the coefficient on the B_{p,q} term of g(B_{i,j}, B_{k,l}). You can do the steps above for any bilinear transformation, including matrix multiplication. It is straightforward to check that [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F3f805cfe-56be-41f2-8200-f67d66a85dfd_291x49.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F3f805cfe-56be-41f2-8200-f67d66a85dfd_291x49.png) so in the 6-axis tensor representation of multiplication, the entry in the (i, j, k, l, p, q)-th position is 1 if j = k, i = p, and q = l, and 0 otherwise. This “coordinatization” of bilinear transformations is one of the key tricks of the article, and will form the basis[[4]](#fnwb5px9cecag) of the Score Card Tensor below.

The Algorithm Designing Game
----------------------------

AlphaTensor finds algorithms by cleverly encoding “find an algorithm” as a game, such that finding moves to win the game is equivalent to finding an algorithm that multiplies matrices. The game being played uses two tensors: The Listed Steps Tensor is an n^2-by-3-by-R tensor where R = 0 at the start of the game, and in each turn the player adds an n^2-by-3-by-1 slice.
At the end of the game, the n^2-by-3-by-R tensor will describe an algorithm to do matrix multiplication with R multiplications. The Score Card Tensor is an n^2-by-n^2-by-n^2 tensor which starts as the coordinatization of matrix multiplication (via the process described above), and updates throughout the game based on the steps you add to the Listed Steps Tensor. The goal of the game is to zero out every entry of the Score Card Tensor. These are my names for the tensors, but you can see both tensors in Figure 1. [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4f2a076-b729-43db-a3d7-1f9372ddcb5e_685x271.webp)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4f2a076-b729-43db-a3d7-1f9372ddcb5e_685x271.webp) *Figure 1 from* [*https://www.nature.com/articles/s41586-022-05172-4.*](https://www.nature.com/articles/s41586-022-05172-4.) *Figure 1a shows the Score Card Tensor as it begins the game (i.e. the coordinatization of matrix multiplication), and Figure 1c shows the Listed Steps Tensor for the Strassen algorithm shown in Figure 1b. Here, n=2 and R=7.* This encoding results in a 1-player game similar to the “solve a Rubik’s cube” game. There are a finite number of operations you can perform (steps described by n^2-by-3-by-1 tensors/rotations of the cube faces), each operation changes the underlying state (the Score Card Tensor/the pattern of the cube), you win once everything is orderly (the Score Card Tensor is 0/all the colors are in the right spots), and your goal is to get there in as few moves as possible. However, the matrix multiplication game is harder because there are a staggering number of possible moves.
Each move means choosing an n^2-by-3-by-1 tensor; we’ll see there are 5 choices for each entry, so there are 5^(3n^2) possible moves at each stage! For n = 5, that’s more than 10^52 possible moves! Now let’s explain how the Listed Steps Tensor encodes an algorithm, and how those steps change the Score Card Tensor. The algorithm is: Inputs: Two n-by-n matrices A and B, and an n^2-by-3-by-R Listed Steps Tensor. Output: Matrix C. (For a correct Listed Steps Tensor, C = AB.)

1. Initialize a length-n^2 vector c = 0.
2. Reshape matrices A and B into length-n^2 vectors a and b, respectively.
3. Repeat these steps for all R of the n^2-by-3-by-1 slices of the Listed Steps Tensor:
   1. Separate the n^2-by-3-by-1 slice of the Listed Steps Tensor into three length-n^2 vectors u, v, and w.
   2. Compute α = ⟨u, a⟩ (the dot product aka inner product aka scalar product)[[5]](#fnpoi1v505h0b).
   3. Compute β = ⟨v, b⟩ (likewise a dot product).
   4. Compute γ = αβ (a number multiplication).
   5. Update c = c + γw.
4. Reshape c into an n-by-n matrix C.

Note that the only multiplications in this algorithm are in steps 3.2–3.5. A key insight is that by restricting the entries of the Listed Steps Tensor (and hence u, v, and w) to a well-chosen set F, we can make the multiplications in steps 3.2, 3.3, and 3.5 very fast. Therefore, the only “normal speed” multiplication is in step 3.4. Since we loop over that step R times, if we have a correct Listed Steps Tensor, we have found a matrix multiplication algorithm that only uses R “normal speed” multiplications. What set F can make your multiplications quick? The article uses F = {−2, −1, 0, 1, 2}, which makes sense because multiplying by 0 and 1 are fast operations, negation is a fast operation, and multiplying by 2 is fast for a binary computer (for integers you simply bitshift, and for floats you just increase the exponent by 1). Next: How does a step update your Score Card Tensor, and how can you be sure that the algorithm will result in multiplication?
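As a quick concrete check of the loop above, here is a minimal Python sketch (my own illustration, not code from the paper) that runs the generic algorithm with the classic hand-derived Strassen factors for n = 2, flattened row-major. The final assertion also verifies the coordinatization claim that the seven rank-1 tensors u⊗v⊗w sum to the matrix multiplication tensor:

```python
import numpy as np

# Strassen's seven steps written as a Listed Steps Tensor for n = 2:
# R = 7 slices, each holding three length-4 vectors (u, v, w).
# Vectorization is row-major: a = (a11, a12, a21, a22), same for b and c.
U = np.array([[1, 0, 0, 1], [0, 0, 1, 1], [1, 0, 0, 0], [0, 0, 0, 1],
              [1, 1, 0, 0], [-1, 0, 1, 0], [0, 1, 0, -1]])
V = np.array([[1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, -1], [-1, 0, 1, 0],
              [0, 0, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]])
W = np.array([[1, 0, 0, 1], [0, 0, 1, -1], [0, 1, 0, 1], [1, 0, 1, 0],
              [-1, 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0]])

def multiply_from_steps(A, B, U, V, W):
    """Run the generic algorithm: one scalar multiplication per slice."""
    n = A.shape[0]
    a, b = A.reshape(n * n), B.reshape(n * n)   # step 2: flatten inputs
    c = np.zeros(n * n)                         # step 1: accumulator
    for u, v, w in zip(U, V, W):                # step 3: loop over R slices
        alpha = u @ a                           # step 3.2: fast (entries in F)
        beta = v @ b                            # step 3.3: fast
        gamma = alpha * beta                    # step 3.4: the one "normal speed" multiply
        c = c + gamma * w                       # step 3.5: fast
    return c.reshape(n, n)                      # step 4

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
assert np.allclose(multiply_from_steps(A, B, U, V, W), A @ B)

# Coordinatization sanity check: the sum of the seven rank-1 tensors
# u⊗v⊗w equals the matrix multiplication tensor, whose ((i,j),(k,l),(p,q))
# entry is 1 exactly when j = k, i = p, and q = l.
n = 2
T_M = np.zeros((n * n, n * n, n * n))
for i in range(n):
    for j in range(n):
        for l in range(n):
            T_M[i * n + j, j * n + l, i * n + l] = 1  # the k = j case
assert np.allclose(sum(np.einsum('i,j,k->ijk', u, v, w)
                       for u, v, w in zip(U, V, W)), T_M)
```

Note that all entries of U, V, and W lie in F = {−2, −1, 0, 1, 2}, so the only "normal speed" multiplication per loop iteration is `alpha * beta`, exactly R = 7 of them.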
Here we use our understanding of coordinatization. Each loop of step 3 in our algorithm results in computing γw = αβw = ⟨u,a⟩⟨v,b⟩w, which is easily verified to be a bilinear function of a and b. Therefore, we can coordinatize the function g(a,b) = ⟨u,a⟩⟨v,b⟩w, resulting in an n^2-by-n^2-by-n^2 tensor T^(i) for each step i between 1 and R. The overall transformation computed by the algorithm in the first r loops is T^(1) + T^(2) + … + T^(r). We define C^(r), our Score Card Tensor after the r-th step, by [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F2963c4db-5a5a-4faa-87a9-fc513e2c54a1_192x57.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F2963c4db-5a5a-4faa-87a9-fc513e2c54a1_192x57.png) where T_M is the tensor representation of matrix multiplication. Our goal is to end up with C^(R) = 0, in which case we can rearrange the equation and see that [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F91b78494-582e-4bd9-ba85-6169c5b753de_126x61.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F91b78494-582e-4bd9-ba85-6169c5b753de_126x61.png) which confirms that the algorithm correctly computes matrix multiplication, since two bilinear transformations are equal if and only if their coordinatizations are equal. To summarize, the overall structure of the game is:

1. Initialize the Score Card Tensor to be T_M, and initialize the Listed Steps Tensor to be an n^2-by-3-by-0 tensor.
2. On each turn:
   a. The player chooses an n^2-by-3-by-1 tensor with entries in F.
   b. Update the Listed Steps Tensor by appending this new step.
   c. Update the Score Card Tensor by subtracting off T^(i), the tensor corresponding to this bilinear transformation.
   d. If the Score Card Tensor is all 0s, the player wins.
3. The player’s score is R, the length of the Listed Steps Tensor. The Listed Steps Tensor describes a matrix multiplication algorithm using R multiplications, so a lower score is better.

So if we can just get an AI that chooses good tensors in step 2a, it will discover matrix multiplication algorithms for us. How does the AI do that?

How the AI Plays
----------------

What the AI agent actually learns is how to score Score Card Tensors. Roughly speaking, it is trying to estimate the rank of a Score Card Tensor, which is the number of moves needed to win the game from that Score Card Tensor with optimal play. Once you have a good vibe-judging AI, here’s how the AI can play the game:

1. On each of its turns, it has a Score Card Tensor and needs to provide another algorithm step, described by an n^2-by-3-by-1 tensor.
2. It randomly generates a number of possible algorithm steps (say 1000), and for each one it computes what the new Score Card Tensor would be if that were its next step.
3. It estimates the rank of each of those 1000 new Score Card Tensors.
4. It chooses the step that minimizes the estimated rank of the modified Score Card Tensor.

I should confess that this is the part of my understanding that I feel shakiest on. At the very least, there are a few nuances that let things work better in practice:

* The game has a turn limit, and if the agent reaches the turn limit it gets a penalty based on the approximate rank of its final Score Card Tensor.
* In addition to the current Score Card Tensor, the AI’s rank estimator takes as input some number of previous Score Card Tensors, and some scalars (including the time stamp, so it knows if it’s running out of time).
* Instead of deterministically choosing the rank-minimizing step, the agent may choose an option at random, weighting towards choices with lower ranks.
* The agent doesn’t just judge rank after one move, but looks ahead a few moves using Monte Carlo Tree Search.

Having just said all of that, I have a point of confusion, and I’m not sure if it’s my fault or the fault of the article. If the AI plays as I described, I’d expect it to have trouble finding the final move that wins the game. Even with a perfect rank-estimation algorithm, the agent is still limited to assessing the random options before it, and for the final move there’s essentially only one correct step to finish the game, so the agent is unlikely to finish. If the agent does have that problem, what we’d see is the agent quickly decreasing the rank of the Score Card Tensor but getting stuck around rank 1, as shown in the diagram below. [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fceac49c7-c998-4318-9304-61cf52d490e9_1054x608.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fceac49c7-c998-4318-9304-61cf52d490e9_1054x608.png) *Does AlphaTensor’s score over time look like the blue line or the red curve?* Am I misreading the paper? It describes the approach as “a sample-based MCTS search algorithm”, and says “at inference time, K actions are sampled from the head”, which sounds like what I described. And being constrained by randomness wouldn’t prevent the results in the article: if you have an otherwise-good agent that has a bad game because of its randomly-produced options, you can just run it over and over until it gets better luck[[6]](#fn64fchzkt16u).
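To make the stuck-near-rank-1 worry concrete: once the Score Card Tensor is rank 1, the winning move can be read off deterministically rather than sampled. Below is a hedged sketch of such a check (a hypothetical helper of my own, not anything from the paper; it reads factors off pivot slices rather than literally comparing all ratios, and it ignores the restriction of entries to F):

```python
import numpy as np

def rank_one_factors(T, tol=1e-9):
    """Return (u, v, w) with T[i,j,k] == u[i]*v[j]*w[k] if T is
    (numerically) rank 1, else None."""
    idx = np.unravel_index(np.argmax(np.abs(T)), T.shape)
    pivot = T[idx]
    if abs(pivot) < tol:
        return None  # the zero tensor: the game is already won
    i0, j0, k0 = idx
    u = T[:, j0, k0]             # pivot slices give the factors up to scale
    v = T[i0, :, k0] / pivot
    w = T[i0, j0, :] / pivot
    # Verify that the outer product actually reproduces T.
    if np.allclose(np.einsum('i,j,k->ijk', u, v, w), T, atol=tol):
        return u, v, w
    return None

# Rank 1: the factors are recovered, so an agent could be overridden
# to play (u, v, w) as its final, game-winning move.
T = np.einsum('i,j,k->ijk', np.array([1.0, 2.0]),
              np.array([3.0, -1.0]), np.array([0.0, 5.0]))
assert rank_one_factors(T) is not None
# Rank 2: the check correctly refuses.
T2 = T + np.einsum('i,j,k->ijk', np.array([0.0, 1.0]),
                   np.array([1.0, 0.0]), np.array([1.0, 0.0]))
assert rank_one_factors(T2) is None
```

The verification step costs one outer product, so running this before every turn is cheap relative to the search itself.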
But I wonder if you could get even better performance by incorporating a non-AI rule system once you get to low ranks. For example, it should be fast and easy to check whether the Score Card Tensor is rank 1 before every turn. It’s rank 1 if the (i,j,k)-th entry is u_i · v_j · w_k for some vectors u, v, and w, which you should be able to quickly check by computing ratios between entries. Then, if it is rank 1, you could find the u, v, and w vectors and override the agent to make that your move for the turn, winning you the game. It’s not clear to me if the problem I described above happens, if there is a workaround for it, or if they just power through it by playing the game over and over again, but this is definitely something I’d try! Now that we understand what the authors did, let’s talk about implications.

How Impressed Should We Be?
---------------------------

Not very? [Edit: I meant this to refer to the capabilities improvements displayed by AI, which to me seem much smaller than milestone accomplishments like GPT-3 and diffusion models. I don’t want to dismiss the human ingenuity needed to adapt existing AI to this problem.] There are two subquestions: did we advance matrix multiplication algorithms, and is it impressive that an AI can do this at all? On the former question, I find their Figures 3 and 5 instructive: [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1a38ebc-56a4-4c11-8ff4-b08d01431ed0_685x483.webp)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1a38ebc-56a4-4c11-8ff4-b08d01431ed0_685x483.webp) *Figure 3 from* [*https://www.nature.com/articles/s41586-022-05172-4*](https://www.nature.com/articles/s41586-022-05172-4)*. Numbers highlighted in red are improvements over existing algorithms.* You can see that in the most *a priori* interesting case (square, non-modular matrix multiplication) they did not make any improvements at all, and in the cases where they did make improvements they reduced the rank just slightly. (Update: And less than a week later, [some authors improved on AlphaTensor’s 96 by finding a 95](https://arxiv.org/abs/2210.04045)? Thanks to [this tweet](https://twitter.com/typedfemale/status/1579676914205478913) for alerting me.) Figure 5 holds even more insights: [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F706f39fd-f317-48aa-b402-2dc81e86307c_685x262.webp)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F706f39fd-f317-48aa-b402-2dc81e86307c_685x262.webp) *Figure 5 from* [*https://www.nature.com/articles/s41586-022-05172-4*](https://www.nature.com/articles/s41586-022-05172-4)*.* The first observation to make is that the headline “10-20% speed up” figure is relative to a baseline, and the speed-up over Strassen-square (the old-but-fast algorithm) is much smaller, roughly in the 3-10% range. But hold on, why does the caption say “Speed-ups are measured relative to standard (for example, cuBLAS for the GPU) matrix multiplication”? If there’s already an industry standard that’s slower than Strassen-square, *why isn’t Strassen-square the standard*? And thus we reach the galaxy brain insight: “speed isn’t everything”.
[![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F7b0e3c57-d0c6-4b03-b528-06430436af66_513x499.jpeg)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F7b0e3c57-d0c6-4b03-b528-06430436af66_513x499.jpeg) In particular, [wikipedia](https://en.wikipedia.org/wiki/Strassen_algorithm) tells me Strassen-square has worse numerical stability, which I understand to be rather important. And the paper seems to be aware of this limitation: it meekly suggests someone else could build on their approach to “optimize metrics that we did not consider here, such as numerical stability or energy usage.” Okay, so the speedups they promise aren’t huge or practical, but isn’t it impressive that an AI could design provably correct algorithms at all? Yes and no. To me the key insight of the paper is to express matrix multiplication as a game, which is then subject to the optimizations of the AI agent. The authors (and those whose work they built on) deserve credit for this insight, but “we made a computer search this space” is a decades-old technique in mathematics. If this paper ends up shifting any paradigms, I think it will be shifting research questions from “how can we get an AI to do X?” to “we know an AI can do X if we can make it into a game, how do we do that?”. Finally, what should we make of the “provably correct” claims that the press release mentions twice and the article four times? I’m underwhelmed. The algorithms *are* provably correct, in the same way that a number factorization is “provably correct” because you can multiply the factors and see that the product is correct. And the AI didn’t do any of the work to provide that proof. 
Instead, humans designed the rules of the game so that winning the game is equivalent to finding a provably correct matrix multiplication algorithm. It’s like claiming your chess-playing AI is “provably compliant with the rules of chess”. So I don’t think mathematicians need to worry about being obsoleted by AIs just yet.

Lessons for AI Safety
---------------------

At this point you’re probably asking: does this mean the robots will kill us all? I wish I knew. In the meantime, some tangentially related thoughts:

* This is yet another breakthrough that comes from humans finding the correct way to structure knowledge so that an AI agent can comprehend it. I want to make a full post assessing whether that’s how most or all capability advances happen, but the upshot would be that it’s harder to make predictions.
* In principle, this shows how AI takeoff could happen: AI finds a faster matrix multiplication algorithm, and since matrix multiplication is used in AI training, the AI can become smarter, making more discoveries and self-improving until it is arbitrarily powerful.
* In practice, this paper is not at all contributing to takeoff. The newly discovered algorithms are probably not going to be used, for numerical stability reasons, and even if they were, a 10-20% speedup of one particular step is probably not going to translate into a much smarter AI.

In conclusion, the official AIZI rating of this article is “Algorithm Identifier? more like ZZZs Inducer!”[[7]](#fn1pdgd6tnzmt)

1. **[^](#fnrefpbt7htoku6f)**I am using “number of axes” instead of the sometimes-used terms “rank” or “dimension” to minimize confusion. In this post and the Nature article, the rank of a tensor is used to mean “the least number of simple tensors needed to sum to this tensor”.
2.
**[^](#fnreftyx1387epho)**The technique is that if you want to multiply 4-by-4s, you treat each 4-by-4 matrix as a 2-by-2 of 2-by-2s which are susceptible to your 2-by-2 multiplication algorithm, using 7 lots of 7 multiplications, so you’ve multiplied 4-by-4s in 49 multiplications total. Repeating this trick gives you an algorithm for 2k-by-2k for any k, and for other sizes you can just pad out the matrix with 0s. In general, this technique means any algorithm that takes R multiplications for m-by-ms lets you multiply all n-by-ns for n ≥ m in O(nlogm(R)) multiplications. 3. **[^](#fnrefoo0rvk0sig)**See [this](https://www.mathbootcamps.com/matrix-linear-transformation/) for a refresher. 4. **[^](#fnrefwb5px9cecag)**Pun intended. 5. **[^](#fnrefpoi1v505h0b)**Recall that the inner product of two vectors (x1,x2,…,xn) and (y1,y2,…,yn) is x1y1+x2y2+…+xnyn. 6. **[^](#fnref64fchzkt16u)**This is how I beat roguelikes. 7. **[^](#fnref1pdgd6tnzmt)**This is probably unfair, but I’m required by law to include a backronym.
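Footnote 2's blockwise recursion is concrete enough to sketch in code. Below is a minimal, unoptimized implementation of the classic 7-multiplication 2-by-2 scheme (Strassen's) applied recursively to 2^k-by-2^k matrices — my own sketch for illustration, not anything from the paper, and without the zero-padding step for other sizes:

```python
def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def split(M):
    # Split a 2n-by-2n matrix into four n-by-n blocks.
    n = len(M) // 2
    return ([r[:n] for r in M[:n]], [r[n:] for r in M[:n]],
            [r[:n] for r in M[n:]], [r[n:] for r in M[n:]])

def strassen(A, B):
    """Multiply two 2^k-by-2^k matrices using 7 recursive multiplications."""
    if len(A) == 1:
        return [[A[0][0] * B[0][0]]]
    a, b, c, d = split(A)
    e, f, g, h = split(B)
    # The 7 block products (instead of the naive 8).
    p1 = strassen(a, sub(f, h))
    p2 = strassen(add(a, b), h)
    p3 = strassen(add(c, d), e)
    p4 = strassen(d, sub(g, e))
    p5 = strassen(add(a, d), add(e, h))
    p6 = strassen(sub(b, d), add(g, h))
    p7 = strassen(sub(a, c), add(e, f))
    # Recombine blocks into the product.
    top = [r1 + r2 for r1, r2 in zip(add(sub(add(p5, p4), p2), p6),
                                     add(p1, p2))]
    bot = [r1 + r2 for r1, r2 in zip(add(p3, p4),
                                     sub(sub(add(p1, p5), p3), p7))]
    return top + bot
```

Each level of recursion replaces 8 block multiplications with 7, which is exactly where the O(n^(log₂ 7)) ≈ O(n^2.81) count comes from.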
LessWrong
Will the first AGI agent have been designed as an agent (in addition to an AGI)?

I wonder about a scenario where the first AI with human-level or superior capabilities would not be goal-oriented at all, e.g. a language model like GPT. Then one instance of it would be used, possibly by a random user, to make a conversational agent told to behave as a goal-oriented AI. The bot would then behave as an AGI agent, with everything that implies from a safety standpoint, e.g. using its human user to affect the outside world. Is this a plausible scenario for the development of AGI and of the first goal-oriented AGI? Does it have any implications for AI safety compared to the case of an AGI designed as goal-oriented from the start?
LessWrong
Resetting Gandhi-Einstein

Toy model of an upload-based AI that doesn't seem to suffer too many of the usual flaws: find an ethical smart scientist (a Gandhi-Einstein), upload them, and then run them at ultra-high speed, with the mission of taking over the world/bringing friendliness to it. Every hour of subjective time, they get reset to their initial specifications. They can pass any information to their reset version (limiting the format of that info to a virtual book or library, rather than anything more complicated).
LessWrong
Conflicting advice on altruism

As far as I can tell, rather than having a single well-defined set of preferences or utility function, my actions more closely reflect the outcome of a set of competing internal drives. One of my internal drives is strongly oriented towards a utilitarian altruism. While the altruist internal drive doesn't dominate my day-to-day life, compared to the influence of more basic drives like the desires for food, fun, and social validation, I have traditionally been very willing to drop whatever I'm doing and help someone who asks for, or appears to need help. This altruistic drive has an even more significant degree of influence on my long term planning, since my drives for food, fun, etc. are ambivalent between the many possible futures in which they can be well-satisfied.

I'm not totally sure to what extent strong internal drives are genetic or learned or controllable, but I've had a fairly strong impulse towards altruism for well over a decade. Unfortunately, even over fairly long time frames it isn't clear to me that I've been a particularly "effective" altruist. This discussion attempts to understand some of the beliefs and behaviors that contributed to my personal failure/success as an altruist, and may also be helpful to other people looking to engage in or encourage similar prosocial habits.

Game Theory Model

Imagine a perfect altruist competing in a Prisoner's Dilemma style game. The altruist in this model is by definition a pure utilitarian who wants to maximize the average utility, but is completely insensitive to the distribution of the utility.1 A trivial real world example similar to this would be something like picking up litter in a public place. If the options are Pick up (Cooperate) and Litter (Defect) then an altruist might choose to pick up litter even though they themselves don't capture enough of the value to justify the action.
Even if you're skeptical that unselfish pure utilitarians exist, the payoff matrix and much of this analysis applies
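The litter example can be made concrete with a toy payoff sketch (the utility numbers below are invented for illustration, not taken from the post): a selfish agent maximizes their own payoff, while the pure-utilitarian altruist maximizes the average payoff across everyone affected.

```python
# payoffs[action] = (own utility, total utility to everyone else)
# Made-up numbers: picking up litter costs the actor a little but
# benefits the public more; littering is the reverse.
payoffs = {
    "pick_up": (-2, 5),
    "litter": (1, -4),
}

def selfish_choice(payoffs):
    # Maximize own payoff only.
    return max(payoffs, key=lambda a: payoffs[a][0])

def altruist_choice(payoffs, n_people=10):
    # Maximize average utility across all n_people affected,
    # ignoring how it is distributed.
    return max(payoffs, key=lambda a: (payoffs[a][0] + payoffs[a][1]) / n_people)
```

With these numbers the selfish agent litters (own payoff 1 beats -2) while the altruist picks up (average 0.3 beats -0.3), even though the altruist captures none of the benefit personally.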
LessWrong
Continuous Adversarial Quality Assurance: Extending RLHF and Constitutional AI

Introduction

Lately, the problem of aligning artificial intelligence with human values has rapidly changed its status from hypothetical to most concrete, with the rise of more general and more powerful models. The existing methods (Constitutional AI, RLHF and the like) are mostly good enough for common usage with the current models, but are probably not robust enough to scale much beyond human level, or to stand against smart attempts at malicious usage. My goal in this post is not to replace those methods with a complete solution to the AGI Alignment Problem, but to try and make the existing methods more robust - to buy us some more time before they break, and to maybe make our chances slightly better if an ASI suddenly emerges.

Broadly speaking, my approach here is aimed at the outer alignment problem - i.e. making the model train on a signal of human values, as clean and as diverse as possible. The approach is based on explicit modeling of how Human Values are supposed to flow from humanity into the model, and then using this model to improve the flow. To improve the flow I will present two concrete directions. The first - and the one that I will develop in more detail - is about making the flow more robust by putting continuous adversarial pressure on every part of the chain. I will call it Continuous Adversarial Quality Assurance. The second direction for improving the flow of values is more holistic - there, the idea is to use explicit modeling of the relations between different sources of information about human values, in order to develop more principled ways of aggregating them. Both directions may be applied to improve RLHF, Constitutional AI, or any similar method. I also try to relate my suggestions to other well-known agendas - namely Debate and CIRL.

Disclaimer: Against Benevolent Sovereigns

In the introduction, I intentionally used “Human Values” with capital letters.
The reason was to highlight the simplification. I am not a moral realist, and subs
Effective Altruism Forum
Five neglected work areas that could reduce AI risk *Crossposted to Lesswrong* [*here*](https://www.lesswrong.com/posts/fEAGPyHR9GaK2cwRq/five-neglected-work-areas-that-could-reduce-ai-risk) Tldr: We identify five areas of work that should be further investigated: 1. Helping information aggregators scale during advanced AI development, 2. Improving internal AI deployment in policy organizations, 3. Researching the institutional design for evaluating alignment plans, 4. Planning for how to evaluate automated alignment researchers, and 5. Creating further education and information material. Summary ======= We outline and discuss five areas of work that seem neglected. We only shallowly review and discuss these areas. Our proposals should be seen as a recommendation to further investigate these questions and figure out whether they are actually worth pursuing and how to pursue them. For each of the five work areas, we discuss what they are, what one could do here, why they are important, and what our key uncertainties are. 1. **Help Information Aggregators and Reviewers Scale during Takeoff** 1. Effective information provision and compilation are vital for informed societal choices. As advanced AI systems evolve, current information channels and curators, such as journalists, social media platforms, think tanks, and governments, will likely struggle to keep up. Identifying ways to support them or introducing new organizations to monitor global events seems important. 2. **Consult and Research Responsible AI deployment in Governments** 1. Decision-making bodies such as the government are facing the increasingly important question of how to deploy advanced AI systems internally. This decision can make societal decision making much better but also much worse. Research and advising of policymakers can help avert various failure modes of internal AI deployment. 3. **Research Institutional Design for Evaluating Alignment Plans** 1. 
Many AI governance plans involve a public agency evaluating the alignment plan for an advanced AI project and then approving or rejecting the training run. Designing such an evaluation process seems hard, and default institutional designs seem likely insufficient, so somebody should figure out what this process should ideally look like. 4. **Plan for How to Evaluate Automated Alignment Researchers** 1. Most of the alignment work might be conducted by AI systems. OpenAI, for instance, has plans to develop automated alignment researchers. However, the success of this relies on being able to successfully evaluate alignment work at scale, which is currently lacking. We need to figure out how to evaluate the hard-to-measure questions, e.g., whether one is overall making progress on alignment. 5. **Create and Disseminate Education and Information Material** 1. If the world becomes much more aware of the possibility of transformative AI, many decision-makers will need to quickly and deeply familiarize themselves with AI alignment and AI existential safety questions. This requires good summaries, literature reviews, education courses, etc. Help information aggregators scale during takeoff ================================================= **What is it:** In a world of widespread AI deployment, many things in the world may move much quicker, and existing information aggregators and collectors (e.g., journalists, Twitter/X, think tanks) will likely be overwhelmed and be less successful at doing their job. Figuring out how to help them or step in as a novel organization aimed at tracking important things in the world seems important. **What to do:** Build AI tools for better information filtering/aggregation, build expertise in relevant areas (e.g., AI hardware, ML subfields, AI safety, biosecurity), get experience and connections by doing the version of this that currently exists (e.g., [Epoch AI](https://epochai.org/), journalism, intelligence agencies, AGI labs). 
Build AI Journalism as a subfield (like [environmental journalism](https://www.iberdrola.com/culture/environmental-journalism)). Figure out if existing institutions can be changed to sufficiently do this or if a new institution is needed. We’re not too sure what else to do here. **Why is it impactful:** This is currently extremely neglected. Some people casually talk about how deep-fakes and LLM-generated content/spam could be a big deal for the information landscape, but nobody seems to be taking this sufficiently seriously and preparing, let alone planning for explosive growth scenarios: the current world is wholly unprepared. **Key uncertainties:** It’s unclear if we’re going to get explosive growth before or after we face most of the misalignment risk; if it’s after, then this is probably much less important to work on. If take-off is sufficiently slow, existing institutions will adapt. **Additional discussion:** Good information is important for pretty much all decision making. Information provision might be much worse in a world with advanced AI systems. It is currently very difficult to track all of the important things happening in the world. This will get even more difficult as the pace of tech R&D accelerates due to automated researchers and AI tools speeding up human researchers. Solving this problem might include building a new institute for understanding what’s going on, and relaying this to relevant decision makers. 
This institute would broadly be trying to understand and keep track of what is happening in the world with regard to AI development; some things that would be in its purview: reading current ML and AI safety research and trying to predict what is upcoming in those fields, tracking the location of GPU clusters, keeping a database of major AI developers and key x-risk related information about them (the quality of their models, large training runs they do, internal sentiment around AI risk, etc.), keeping tabs on the use of AI in particularly high stakes domains like warfare and biology, understanding how various countries are reacting to AI development and governing risks. The general idea is that things could be moving super fast once we get substantial research speedups, and there are not currently institutions that are prepared for this speedup and help to improve the world. Various institutes would serve as a point of information aggregation and distillation, being able to support AI labs, governments, and other key decision makers by having already developed methods for keeping track of what’s true and important in this whacky landscape. If the world is able to develop an international agency on AI as suggested by [Karnofsky](https://www.alignmentforum.org/posts/vZzg8NS7wBtqcwhoJ/nearcast-based-deployment-problem-analysis%23The_roles_of_Magma_and_IAIA_in_the_scenario), such an agency might absorb or closely partner with such institutes. This institute might benefit from hiring people with experience doing complex information aggregation in difficult domains, such as people with a traditional intelligence background, finance analysts who specialize in AI-related fields, or journalists. There would also likely be a need to hire experts with an excellent understanding of various parts of the AI and AI safety research ecosystems, from chip design to fine-tuning. 
Research and Consulting on AI Deployment in Governments
=======================================================

**What is it:** Decision-making bodies such as the government are facing the increasingly important question of how to deploy advanced AI systems internally. This decision can make societal decision-making much better but also much worse. Research and advising of policymakers can help avert various failure modes of internal AI deployment.

**What to do:** Individuals could advise governments on integrating Large Language Models (LLMs) into their operations or research AI integration in government (e.g., if a team starts using LLMs to write their daily briefs, how does this change the frequency of false things making their way into the brief? Do people trust LLMs to be factual more than is actually the case?). We’re not sure what to do in this area, but we expect it ranges from field experiments in the workplace to experimental testing with new AI products. While the impact in the short term might not be that big, it would help immensely to build relevant knowledge, trust, and networks to have future opportunities to feed into such processes.

**Why is it impactful:** We expect that the success of advanced AI development and deployment partly depends on how well-informed government decisions will be. Outsourcing work to AI systems when it comes to such decisions will be a hard balancing act with many possible failure modes: one might outsource too much, too little, or the wrong work.
Work to improve this process will likely be underprovided because:

* Very few organizations work on digitization in government (a few academics, think tanks, and officials in public agencies),
* Many may underestimate the transformative potential and risks associated with AI and may overlook critical aspects, and
* By default, the recommendations and policies made will have a weak evidence base.

Therefore, we expect there to be a lot of potential to improve AI deployment policies for high-stakes decision making.

**Key uncertainties:** **Tractability:** Anecdotal evidence from a former consultant suggests that consulting the government on digitization mostly fails. If digitization of government is such a hard problem, we may expect 1) governments to err on the side of not using LLMs or 2) the tractability of government advising to be low. **Crowdedness:** Perhaps public policy researchers, digitization of government researchers, and organizational psychology researchers will be working on this. However, those researchers may not be motivated to move from basic to applied science fast enough. The magnitude of failure from improperly integrating AI with one’s workflow scales with the available AI capabilities: failing to integrate GPT-4 or integrating it too much likely results in a couple of percentage points difference in productivity. For future AI systems, failing to integrate them could mean losing huge amounts of potential value, and integrating them in a bad way could result in major losses. Focusing on this problem is particularly important if government decision-making matters a lot, e.g., in short-timeline slow-takeoff worlds or if governments are needed to make the post-AGI transition go well. On the other hand, if the major hurdles to existential security are near-term technical research, this would be less important to work on.
**Additional Discussion:** As AIs improve at solving long-horizon tasks, there will be strong incentives to delegate more of our human workflows to these AIs. For some domains, this will be perfectly safe, but for high-stakes domains it could cause major harms from AIs making mistakes or pursuing misaligned goals. One wants to track how various key societal roles are outsourcing tasks to the AI systems. This could involve studying how they are using it and what the effects are, e.g., is there too much outsourcing, too little outsourcing, are they outsourcing the wrong things, losing human expertise in the process, doing the kind of outsourcing where there is still meaningful human control? These questions could be studied by social scientists and, e.g., public sector consultancies. Researchers could interview people, do more anthropological studies, study the effects of various automation tools, and share best practices. If one understands the key failure modes, one would be able to make the technology safer, enhance human decision making, and avoid enfeeblement, where society willingly gives up control. The research could inform the decisions of AI labs and product teams, AI regulation, or organizations such as the public sector or the executive could just implement the guidelines and best practices internally. To contribute to this, one could do research on AI workflow integration, outsourcing in the public sector, provide literature summaries, or consult and inform key decision-making organizations directly on responsible AI usage.

Institutional Design for Evaluating Alignment Plans
===================================================

**What is it:** Many AI governance plans involve a public agency evaluating the alignment plan for an advanced AI project. The agency then approves or rejects the training run proposal. Designing such an evaluation process seems hard. Current institutional designs are likely insufficient.
Somebody should figure out what this process should even look like. **What to do:** Study the benefits and drawbacks of existing safety/risk assessment frameworks, scope the problem of evaluating alignment plans, and build relevant connections and credibility. This is about designing an institutional process for evaluating alignment research, not doing the evaluation itself. Learn from best practices of how to aggregate information, evaluations, and criticisms from various stakeholders and experts. Study existing expert consulting processes of big institutions (e.g., the European Commission), their success and pitfalls. **Why is it impactful:** The success of establishing such institutions relies on the process effectively distinguishing between better and worse alignment plans. **Key uncertainties:** Work on designing this institution may be intractable, too abstract, or too novel, especially given the strong precedent of existing risk assessment frameworks. Perhaps interested individuals should just work on improving current regulatory processes regarding AI regulation on the margin. **Additional discussion:** Most governance plans for transformative Artificial Intelligence involve a common procedure: the AGI development team proposes their alignment plan to a governing body (e.g., their plan for the deployment of a newly trained model, their plan for whether to automate research, their plan for training a bigger model), which then approves the plan, requests revisions, or denies it. But what should happen procedurally in this agency before it makes such a decision? While we might not know right now which alignment plans will be effective, we can probably already think about designing the procedure. Existing risk and safety assessment frameworks or public consultations as done for legal reviews seem insufficient. What can we learn from them, and what would the ideal procedure look like? 
We think most processes are either easily gameable or do not aggregate most of the relevant information. An exploration could include: How public should the procedure be? Should open comments be invited? How should they be incorporated? Who assesses them? How should disagreements about the safety of the alignment plan be handled? What would such a back-and-forth look like? How would the final decision be made? Who holds that group accountable? How to handle conflicts of interest (e.g., relevant experts working at AGI labs)? Evaluating automated alignment research ======================================= **What is it:** Much of the alignment work might be conducted by AI systems. OpenAI, for instance, has plans to develop automated alignment researchers. However, the efficacy of this hinges upon a robust evaluation framework of alignment work, which is currently lacking. This has been discussed before, but we want to signal boost it. **What to do:** We don’t really know what needs to be done here. Perhaps preparing for this role might just look like doing alignment research oneself and trying to do lots of peer review. **Why is it impactful:** Evaluating certain tasks, especially hard-to-measure ones, will remain a human task, potentially causing bottlenecks. Certain types of alignment work are likely difficult to evaluate, such as conceptual and framing work — as evidenced by strong disagreement among researchers about the value of different research. While some evaluations might be straightforward (e.g., [using language models for neuron interpretability](https://openai.com/research/language-models-can-explain-neurons-in-language-models) or improving self-critique strategies) or sometimes AIs could support human evaluators (by doing replication or coming up with critiques), determining actual progress towards alignment remains a hard-to-measure evaluation target. A common response is that “evaluation may be easier than generation”. 
However, this doesn't mean evaluation will be easy in absolute terms, or relative to one’s resources for doing it, or that it will depend on the same resources as generation. This means it would be important to i) know what exactly should be done by the humans and ii) how this could be tracked such that firms are not corner-cutting here.

**Key uncertainties:** Is this action relevant now? Is there any way one can feasibly prepare now?

Education, Review, and Information Material
===========================================

**What is it:** If timelines are short and the world will become much more aware of the possibility of transformative AI, then many key decision-makers will need to quickly familiarize themselves with alignment and AI existential safety questions and develop a really good understanding. For that, there needs to be a really good source for summaries and literature reviews.

**What to do:** In contrast to the other work areas we outlined in this post, there already exists more work in this area: YouTube videos on the risks, policy reports for governments, explainers, intro talks for various audiences, literature reviews, e.g., on AI timelines. Such work could be expanded, improved, and professionalized. For instance, reviews on topics such as takeoff dynamics, alignment challenges, development timelines, and alignment strategies, ranging from a quick read to an in-depth analysis, are useful.

**Why is it impactful:** Such resources play a crucial role in shaping key decision-making processes and public opinion. Given the multiplicity of threat models, the speculativeness and inherent uncertainty in AI development, and the political incentives for simplification and polarisation, good information material and education material might be even more important than for other problem areas.
**Key uncertainties:** For videos, reports, and other high-quality or wide-reaching mediums, we usually have some winner-takes-all dynamics where only the best material is useful. This should have some implications for who and how people should work on this and what should be done. Even if winner-take-all dynamics exist, it may be unclear ex-ante who the "winners" will be, so investing in many projects might still be useful. Crowdedness: it seems like this work has expanded a lot in the last 6 months. It’s not clear how many low-hanging fruits there will be in the future.
LessWrong
[SEQ RERUN] Useful Statistical Biases

Today's post, Useful Statistical Biases was originally published on April 1, 2007. A summary (from the LW wiki):

> If you know an estimator has high variance, you can intentionally introduce bias by choosing a simpler hypothesis, and thereby lower expected variance while raising expected bias; sometimes total error is lower, hence the "bias-variance tradeoff". Keep in mind that while statistical bias might be useful, cognitive biases are not.

Discuss the post here (rather than in the comments of the original post). This post is part of a series rerunning Eliezer Yudkowsky's old posts so those interested can (re-)read and discuss them. The previous post was Statistical Bias, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it, posting the next day's sequence reruns post, summarizing forthcoming articles on the wiki, or creating exercises. Go here for more details, or to discuss the Rerunning the Sequences series.
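The summary's point can be checked numerically. The toy simulation below is my own construction (not from the original post): shrinking a noisy sample mean toward zero introduces bias but cuts variance, and for a small, noisy sample the total mean squared error drops.

```python
import random

def empirical_mse(estimate, mu=0.5, sigma=3.0, n=5, trials=20000, seed=0):
    """Monte-Carlo estimate of the mean squared error of an estimator of mu."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        total += (estimate(sample) - mu) ** 2
    return total / trials

mean = lambda xs: sum(xs) / len(xs)  # unbiased, but high variance: MSE = sigma^2/n = 1.8
shrunk = lambda xs: 0.5 * mean(xs)   # biased toward 0, lower variance:
                                     # MSE = 0.25*sigma^2/n + 0.25*mu^2 ~= 0.51

mse_plain = empirical_mse(mean)
mse_shrunk = empirical_mse(shrunk)
```

Here the deliberately biased estimator wins on total error because the bias it adds (0.25·μ²) is far smaller than the variance it removes — exactly the tradeoff the summary describes.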
LessWrong
Doing Research Part-Time is Great

This is a crosspost from https://chillphysicsenjoyer.substack.com/p/pursuing-physics-research-part-time.

Intro

Disclaimer - I'm a part-time research associate doing biophysics with a uni research group in the UK. But I have a day job in an unrelated field that pays the bills. Whilst I've read many personal accounts of research from full-time research students in the academic system, I haven't heard as much from those pursuing research part-time - independently or otherwise. I've always found this weird. Out of the set of people who are really interested in stuff, most people can't, or don't want, to go into academia full time. There are loads of valid reasons - financial, skill or geographical constraints. And so, doing unpaid research on the weekends seems like the only way for this kind of person to sate their interests meaningfully. And so I wonder why I haven't read as much stuff by more people doing this kind of thing. So as someone doing research part time alongside their day job, I wanted to reflect a bit on my priors about likelihood of success, and about trying to do two things well. The main thing that I wanted to argue is that one's effectiveness doing research part-time is probably a lot higher than the time-adjusted effectiveness of a comparable researcher. Specifically, I think there are loads of arguments on why it's a lot larger than just (effectiveness of a comparable researcher) * (part time hours / full time hours). And it's more fun!

----------------------------------------

Background

For the past year, I've worked in finance whilst doing biophysics research part-time at a university. I work on spectroscopy. It took me around four years to get to a place where I could comfortably hold a job in finance and also find a supervisor. After I graduated I worked for big corporations for several years.
It got to the point where I could manage my working hours so that I could leave reliably around 5pm, leaving a few hours in the day to work
Alignment Forum
Limiting Causality by Complexity Class

The standard account of causality depends on the idea of intervention: the question of what follows if X not just naturally occurs, but we bring it about artificially, independently of its usual causes. This doesn't go well with embedded agency. If the agent is part of the world, then its own actions will always be caused by the past state of the world, and so it couldn't know whether the apparent effect of its interventions isn't just due to some common cause. There is a potential way out of this if we limit the complexity of causal dependencies.

Classically, X is dependent on Y iff the conditional distribution of X given Y is different from the unconditional distribution of X. In a slightly different formulation, there needs to be a program that takes Y as an input and outputs adjustments to our unconditional distribution on X, where those adjustments improve prediction. Now we could limit which programs we consider admissible. This will be by computational complexity of the program with respect to the precision of Y. For example, I will say that X is polynomially dependent on Y iff there is a program running in polynomial time that fulfills these conditions. (Note that dependence in this new sense needn't be symmetrical anymore.)

Unlike with unlimited dependence, there's nothing in principle impossible about the agent's actions being polynomially independent from an entire past worldstate. This can form a weakened sense of intervention, and the limited-causal consequences of such interventions can be determined from actually observed frequencies. Now, if we're looking at things where all dependencies are within a certain complexity class, and we analyse it with something stronger than that, this will end up looking just like ordinary causality.
It also explains the apparent failure of causality in Newcomb's problem: we now have a substantive account of what it is to act in intervention (to be independent in a given complexity class). In general, this requires work by the agent. It needs to determine its actions in such a way that they don't show this dependence. Omega is constructed to make the agent's computational resources insufficient for that. So the agent fails to make itself independent of Omega's prediction. The agent would similarly fail for "future" events it causes where the dependence is in the complexity class. In a sense, this is what puts the events into its subjective future - that it cannot act independently of them.
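The idea of dependence being relative to a restricted class of programs can be illustrated with a toy sketch. This is my own construction, using a restricted predictor class as a stand-in for a complexity class rather than literal polynomial-time bounds: X is the XOR of two bits of Y, so an unrestricted predictor (a lookup table) sees full dependence, while any predictor that may only read a single bit of Y does no better than chance — X is "independent" of Y relative to that weaker class.

```python
def accuracy(predict, data):
    # Fraction of (y, x) pairs where the predictor recovers x from y.
    return sum(predict(y) == x for y, x in data) / len(data)

# X = XOR of the two bits of Y.
data = [((a, b), a ^ b) for a in (0, 1) for b in (0, 1)]

# Unrestricted class: memorize the whole table -> perfect prediction.
table = {y: x for y, x in data}
acc_full = accuracy(lambda y: table[y], data)

# Restricted class: predictors that read only one coordinate (possibly negated).
restricted = [lambda y, i=i, neg=neg: y[i] ^ neg
              for i in (0, 1) for neg in (0, 1)]
acc_restricted = max(accuracy(p, data) for p in restricted)
```

Relative to the full class X clearly depends on Y (accuracy 1.0), but the best single-bit predictor only reaches the 0.5 base rate — a miniature version of dependence vanishing when the analyzer's class is too weak to capture it.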
LessWrong
The Grandest Vision for Humanity (Light)

This is the third speech in the wedding ceremony of Ruby & Miranda. See the Sequence introduction for more info. The speech was given by Riva-Melissa Tez.

----------------------------------------

Tara descends from podium.

Brienne: We hope to banish the darkness. But our ambition is far greater than that. We intend to create a future glorious beyond imagining. I call upon Riva to speak of the Light we might attain.

Riva ascends podium.

IMAGE 2 PROJECTED ON PLANETARIUM Messier 16 (M16), also known as the Eagle Nebula.

Riva commences speech.

The universe is complex and beautiful. To think of the themes we have listened to today, of love, humanity, life and death, should leave us humbled by the histories of all those who have lived before, of all those who will live after us, the vast unimportance of ourselves in the sequence of the eternal everything. Forever impressed by the genuine role of human anonymity, the world doesn't and won't ever really know who we are, merely just one of many dots in the vast space-time spectrum that we should protect. But how many of us daydream about the goal of humanity in the universe? What does it look like? Sometimes I think about humanity in the same way I regard a painting by Georges Seurat. He used pointillism - the artistic technique of painting a collection of thousands of microdots on the canvas that contrast and complement each other in a multitude of colours. From up close these dots seem nonsensical, even plain wrong - a blue dot next to a yellow dot that from further away appears white. From a distant viewpoint these individual dots combine to make a beautifully intricate scene. An analogy of all the beings in the universe. Each life, idea, love simply a tiny colourful dot, complementing and contrasting, but from further away these blend to contribute to a bigger picture, a bigger goal. How big is this overall picture? Well, it’s huge.
The observable universe is about 93 billion light years in diameter
70a1ba9c-91d0-4dd0-b386-442ee4f94707
trentmkelly/LessWrong-43k
LessWrong
Preschooler Turnovers I recently got into making proper flaky pastry, and while it's a bit of a hassle it's really very tasty. I think I started with this recipe, and I've been getting decent at it: The kids really like the turnovers, and will ask for them much more often than I'm up for. A few days ago Anna (5y) really wanted turnovers for breakfast. I told her I didn't want to do the work, but if she really wanted them she could cook them herself. She said she did. Instead of trying to walk her through a finicky butter recipe, worrying about keeping the butter solid and making sure she didn't overwork the flour, I decided to adapt a simple oil-based one: * 1C flour * 1/4C oil * 2T milk I set out the ingredients already measured. At some point I'll probably show her how to do that part too. You mix everything together with your hands, nice and squishy. It doesn't much matter how, the goal is to just get it homogenous. Nothing to be careful with. Once it's combined, you shape it into a ball, and plop it down onto a baking sheet. Press it flat with your hands, then roll it: You put some chocolate chips in the middle, and fold it closed. Optionally, brush on some egg yolk. Anna likes to brush on excessive amounts, it's lots of fun: Bake at 400F until browned, about 12min. (I always set a timer to check on things for a few minutes before the recipe says, since you can always cook something a bit longer.) It is not a very high-quality pastry, but everything tastes better when you do it yourself, and that counts for a lot, especially when you're five. (For a vegan version, substitute soy milk. You can leave off the egg wash, or substitute Just Egg.) Comment via: facebook
aac691cb-04ff-4006-9563-db71752e5773
trentmkelly/LessWrong-43k
LessWrong
Running the Stack Look at the image here: https://en.wikipedia.org/wiki/Stack_(abstract_data_type) After looking at that image, you understand the concept well enough to use it as a mental model. Hard-won lessons — (1) I joke that "meditation is having exactly one thing on the stack." One thing at a time on the stack might seem oppressive, but it's actually joyful. I think you more-or-less can only do one thing at a time. (2) But okay, the stack is more full. You just popped the top item off. Now what? IME, life goes better if you go down the stack unless new information compellingly obsoletes it (unless you're just messing around, in which case "messing around" is on the stack and you're good). When an irrelevant tangent hits in a conversation, once it concludes, go back to where you were (if it was useful). When you realize you got distracted putting the groceries away, typically you want to finish putting them away. (3) It's entirely true that oftentimes, going down the stack is short-term worse than whatever newly catches your attention. But it trains you to both recognize tangents and navigate conversations intelligently (again, in a non-pure-social-hangout conversation - like at work or when exploring an important topic). (4) Even more true: it often super sucks to go back down the stack on physical task stuff after you got distracted. But! I believe — I don't have any research, but my observation bears it out, it's a hypothesis — I believe that consistently running down the stack after you got distracted makes you less distractible going forwards, because there's less payoff to doing so. (5) Some people can literally "run the stack" in their minds. Not a metaphor. Literally. (6) I couldn't do this before. Now I can.
(7) What changed is that I used to be able to comfortably juggle 5-7 items at a time without running a stack, but I recently calculated out the work I'm committed to in the near future— like, "almost all of this work will get done" — and it's 300+ hours. Emplo
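The post's mental model maps directly onto the stack operations from the linked article; a minimal sketch (the task names are of course made up):

```python
# "Running the stack": interruptions get pushed on top; when the top item
# is done, pop it and return to exactly where you were.
stack = []
stack.append("put groceries away")          # the original task
stack.append("answer a tangent question")   # an interruption lands on top

finished_first = stack.pop()  # handle the interruption first...
resumed = stack.pop()         # ...then go back down the stack

print(finished_first)  # answer a tangent question
print(resumed)         # put groceries away
```

The discipline the post recommends is exactly LIFO order: nothing below the top gets touched until everything pushed after it has been popped.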
1139f587-b9c2-4f2a-9584-001e410c8e5f
trentmkelly/LessWrong-43k
LessWrong
How does information affect hookups? With social networking sites enabling the romantically inclined to find out more about a potential lover before the first superficial chat than they previously would have in the first month of dating, this is an important question for the future of romance. Let's assume that in looking for partners, people care somewhat about rank and somewhat about match. That is, they want someone ‘good enough’ for them who also has interests and personality that they like. First look at the rank component alone. Assume for a moment that people are happy to date anyone they believe is equal to or better than them in desirability. Then if everyone has a unique rank and perfect information, there will never be any dating at all. The less information they have the more errors in comparing, so the more chance that A will think B is above her while B thinks A is above him. Even if people are willing to date people somewhat less desirable than they, the same holds – by making more errors you trade wanting more desirable people for wanting less desirable people, who are more likely to want you back, even if they are making their own errors. So to the extent that people care about rank, more information means fewer hookups. How about match then? Here it matters exactly what people want in a match. If they mostly care about their beloved having certain characteristics, more information will let everyone hear about more people who meet their requirements. On the other hand if we mainly want to avoid people with certain characteristics, more information will strike more people off the list. We might also care about an overall average desirability of characteristics – then more information is as likely to help as to harm, assuming the average person is averagely desirable.
Or perhaps we want some minimal level of commonality, in which case more information is always a good thing – it wouldn’t matter if you find out she is a cannibalistic alcoholic prostitute, as long as eventually you dis
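The rank argument above can be checked with a toy simulation (all parameters hypothetical): give everyone a unique rank, let each person perceive others' ranks with Gaussian noise, and count pairs where each side perceives the other as at least their own equal. With perfect information and unique ranks no such pair can exist; perception errors create them.

```python
import random

def count_mutual_matches(ranks, noise, rng):
    """Count pairs where each side perceives the other as >= their own rank."""
    matches = 0
    for i in range(len(ranks)):
        for j in range(i + 1, len(ranks)):
            i_sees_j = ranks[j] + rng.gauss(0, noise)  # noisy perception
            j_sees_i = ranks[i] + rng.gauss(0, noise)
            if i_sees_j >= ranks[i] and j_sees_i >= ranks[j]:
                matches += 1
    return matches

rng = random.Random(0)
ranks = list(range(50))  # 50 people with unique desirability ranks
print(count_mutual_matches(ranks, noise=0, rng=rng))   # 0: perfect information
print(count_mutual_matches(ranks, noise=10, rng=rng))  # errors create matches
```

With noise 0 the two conditions force the ranks to be equal, which unique ranks rule out, so the count is exactly zero, matching the "perfect information, no dating" conclusion.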
b780a3b6-db93-45a8-b8ad-5288f54d78cc
trentmkelly/LessWrong-43k
LessWrong
≤10-year Timelines Remain Unlikely Despite DeepSeek and o3 [Thanks to Steven Byrnes for feedback and the idea for section §3.1. Also thanks to Justis from the LW feedback team.] Remember this? Or this? The images are from WaitButWhy, but the idea was voiced by many prominent alignment people, including Eliezer Yudkowsky and Nick Bostrom. The argument is that the difference in brain architecture between the dumbest and smartest human is so small that the step from subhuman to superhuman AI should go extremely quickly. This idea was very pervasive at the time. It's also wrong. I don't think most people on LessWrong have a good model of why it's wrong, and I think because of this, they don't have a good model of AI timelines going forward. 1. Why Village Idiot to Einstein is a Long Road: The Two-Component Model of Intelligence I think the human brain has two functionally distinct components for intellectual work: a thought generator module and a thought assessor module: * Thought Generation is the unconscious part of thinking, the module that produces the thoughts that pop into your head. You get to hear the end results but not what went into generating them. * Thought Assessment is the conscious part of thinking. It's about deliberate assessment. If you mull over an idea and gradually realize that it was worse than it initially seemed, that's thought assessment. Large Language Models like GPT-3, Claude, GPT-4, GPT-4o, GPT-o1, GPT-o3, and DeepSeek do not have two functionally distinct components. They have a thought generator module, and that's it. I'm not the first to realize this. Way back after GPT-2, Sarah Constantin wrote Humans Who Are Not Concentrating Are Not General Intelligences, making the point that GPT-2 is similar to a human who's tired and not paying attention. I think this is the central insight about LLMs, and I think it explains a lot of their properties. Like, why GPT-2 got the grammar and vibe of a scene right but had trouble with details. 
Or why all LLMs are bad at introspection. Why
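The two-component picture can be caricatured as generate-and-check (a toy analogy, not a model of the brain): a cheap generator blurts out candidates without justification, and a deliberate assessor rejects the bad ones.

```python
import random

def thought_generator(rng):
    # Fast and unconscious: produce a candidate with no reasoning attached.
    return rng.randint(1, 50)

def thought_assessor(candidate, target):
    # Slow and deliberate: actually check whether the candidate works.
    return candidate * candidate == target

def solve(target, rng, max_tries=100_000):
    # Generate-and-check loop: keep proposing until the assessor accepts.
    for _ in range(max_tries):
        candidate = thought_generator(rng)
        if thought_assessor(candidate, target):
            return candidate
    return None

print(solve(49, random.Random(0)))  # 7
```

On this caricature, an LLM alone is `thought_generator` running open-loop; the post's claim is that the assessor half is what current systems are missing.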
509798e8-b878-43c7-a6b1-610f2e12e8d4
trentmkelly/LessWrong-43k
LessWrong
Stats Advice on a New N-back Game Cross-posted to my blog. I expect this will be of some interest to the LessWrong community both because of previous interest in N-back and because of the opportunity to apply Bayesian statistics to a real-world problem. The main reason I'm writing this article is to get feedback on my approach and to ask for help in the areas where I'm stuck. For some background, I'm a software developer who's been working in games for 7+ years and recently left my corporate job to work on this project full-time. As I mentioned here and here, since early February I've been working on an N-back-like mobile game. I plan to release for iOS this summer and for Android a few months later if all goes well. I have fully implemented the core gameplay and most of the visual styling and UI, and am currently working with a composer on the sound and music. I am just now starting on the final component of the game: an adaptive mode that assesses the player's skill and presents challenges that are tuned to induce a state of flow. The Problem The game is broken down into waves, each of which presents an N-back-like task with certain parameters, such as the number of attributes, the number of variants in each attribute, the tempo, and so on. I would like to find a way to collapse these parameters into a single difficulty parameter that I can compare against a player's skill level to predict their performance on a given wave. But I realize that some players will be better at some challenges than others (e.g. memory, matching multiple attributes, handling fast tempos, dealing with visual distractions like rotation, or recognizing letters). Skill and difficulty are multidimensional quantities, and this makes performance hard to predict. The question is, is there a single-parameter approximation that delivers an adequate experience? 
Additionally, the task is not pure N-back — I've made it more game-like — and as a result the relationship between the game parameters and the overall difficulty is no
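One standard single-parameter approximation for this is a Rasch-style (1PL item-response) model: collapse each wave's parameters into one difficulty score and predict performance from the gap between skill and difficulty. The parameter names and weights below are made-up placeholders, not tuned values.

```python
import math

def wave_difficulty(params, weights):
    # Hypothetical linear collapse of wave parameters into a single scalar.
    return sum(weights[name] * value for name, value in params.items())

def p_success(skill, difficulty):
    # Rasch / 1PL model: success probability from the skill-difficulty gap.
    return 1.0 / (1.0 + math.exp(-(skill - difficulty)))

wave = {"n_back": 2, "attributes": 3, "tempo": 1.5}
weights = {"n_back": 1.0, "attributes": 0.5, "tempo": 0.8}  # placeholders

d = wave_difficulty(wave, weights)
print(round(p_success(skill=4.7, difficulty=d), 3))  # 0.5 when skill == difficulty
```

Fitting skill online (say, with a Bayesian update or an Elo-style rule after each wave) then gives the adaptive mode a target: pick waves whose predicted success rate sits in the flow sweet spot. The known weakness, as noted above, is that a single difficulty axis cannot represent players who are strong at tempo but weak at memory.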
f782e593-ffe2-4c7a-a772-8934b58a06b0
trentmkelly/LessWrong-43k
LessWrong
The Rhythm of Disagreement Followup to:  A Premature Word on AI, The Modesty Argument Once, during the year I was working with Marcello, I passed by a math book he was reading, left open on the table.  One formula caught my eye (why?); and I thought for a moment and said, "This... doesn't look like it can be right..." Then we had to prove it couldn't be right. Why prove it?  It looked wrong; why take the time for proof? Because it was in a math book.  By presumption, when someone publishes a book, they run it past some editors and double-check their own work; then all the readers get a chance to check it, too.  There might have been something we missed. But in this case, there wasn't.  It was a misprinted standard formula, off by one. I once found an error in Judea Pearl's Causality - not just a misprint, but an actual error invalidating a conclusion in the text.  I double and triple-checked, the best I was able, and then sent an email to Pearl describing what I thought the error was, and what I thought was the correct answer.  Pearl confirmed the error, but he said my answer wasn't right either, for reasons I didn't understand and that I'd have to have gone back and done some rereading and analysis to follow.  I had other stuff to do at the time, unfortunately, and couldn't expend the energy.  And by the time Pearl posted an expanded explanation to the website, I'd forgotten the original details of the problem...  Okay, so my improved answer was wrong. Why take Pearl's word for it?  He'd gotten the original problem wrong, and I'd caught him on it - why trust his second thought over mine? Because he was frikkin' Judea Pearl.  I mean, come on!  I might dare to write Pearl with an error, when I could understand the error well enough that it would have seemed certain, if not for the disagreement.  But it didn't seem likely that Pearl would concentrate his alerted awareness on the problem, warned of the mistake, and get it wrong twice.  
If I didn't understand Pearl's answer, that was my p
ee708a09-d9a0-4b8e-af58-1434de4732ce
trentmkelly/LessWrong-43k
LessWrong
How likely is it that AI will torture us until the end of time? Disclaimer: in this post I touch on some very dark and disturbing topics. I'm talking about suicide; my reasoning may be wrong and should not be used to retroactively justify suicide. I've been stuck on s-risks for over a month now. My life has been turned upside down since I first learned about this subject. So today I'm sharing my thoughts with you to possibly find out what you think and see other points of view. Suffering risks (s-risks) are risks involving an astronomical amount of suffering, far more than the suffering that has taken place on Earth so far. The ones I'm going to focus on in this post are those related to a general AI (or even ASI) and which would affect us humans today, directly. The scenario that concerns me is that an ASI is torturing mankind until the end of time. Why would it do this? I don't know. Could it be malicious? Could it choose its utility function to maximize human suffering? Could a paperclip maximizer torture us if that served as an energy source, or as leverage to blackmail a benevolent AI? I'm not an AI expert, so I have no weight in the "will we succeed in controlling AGI or not" debate. I feel that, given the extent to which opinions are divided, anything can happen and that no one can therefore state with 100% certainty that s-risks won't occur. What's more, we're talking about an intelligence superior to our own, and therefore, by definition, unpredictable. The point I want to make in this post is centered on the non-zero probability that the creation of an AGI will lead us to an eternal hell. When we talk about things worse than death, about torture, I think that the human brain encounters a certain number of cognitive biases that push it to minimize the thing or simply ignore it because it's too uncomfortable. So I encourage you to work through these cognitive biases to get an objective view on the subject.
One of the things that is often underestimated is how bad suffering can be. Our bodies are made up of a huge number of ultra-sens
4c3dff5d-c22b-42ce-ad72-9a2c1c28c7d2
trentmkelly/LessWrong-43k
LessWrong
Announcing #AISummitTalks featuring Professor Stuart Russell and many others Register now: #AISummitTalks II, featuring Professor Stuart Russell and many others @ Wilton Hall, Bletchley, Tuesday 31 October, 14:00 to 15:30 GMT! - https://lu.ma/n9qmn4h6 About the Talks The first edition of #AISummitTalks: Navigating Existential Risk was attended by over 250 people, with great engagement from the crowd. For the second edition of our #AISummitTalks series about AI x-risk, we will meet just outside of the famous Bletchley Park on the eve of the #AISafetySummit. At the AI Safety Summit, world leaders will discuss for the first time how to prevent human extinction by AI. Society unfortunately will not have a say in these discussions, since only 150 people are invited. But at our AI Safety Summit Talks, organized together with Conjecture, you can be a part of this discussion! Speakers You will be joined by Existential Risk Observatory and Conjecture, who will host a keynote speech by none other than Prof. Stuart Russell (UC Berkeley), followed by a talk by Conjecture's Connor Leahy, among others. The event will conclude with a panel discussion that brings together Andrea Miotti (Head of Strategy and Governance, Conjecture), as well as leading voices from the societal debate and the political realm, including investor Jaan Tallinn (Co-founder Centre for the Study of Existential Risk - CSER), Annika Brack (CEO The International Center for Future Generations - ICFG), Mark Brakel (Director of Policy, Future of Life Institute - FLI), Alexandra Mousavizadeh (Economist, CEO, Evident), Hal Hodson (Journalist The Economist), and a mystery guest! Moderator of the afternoon will be David Wood, chair of the London Futurists. Register Curious? Want to join the conversation? Be quick and reserve your spot - only 300 spots can be allocated.  https://lu.ma/n9qmn4h6  We're looking forward to welcoming you there!
ab1e5833-0432-4446-b8fd-1c18f081e014
trentmkelly/LessWrong-43k
LessWrong
We have a new discussion area After contributions from a number of us (by random example here, here) over a number of months, particularly User:wmoore and User:tommccabe (and all happening before User:Yvain's work here, so we missed those ideas) we have a discussion area. Discussion, including discussion of the discussion area, is welcome.
7482e628-ea44-40ed-8e7a-b4609655f075
trentmkelly/LessWrong-43k
LessWrong
Tags or Sub-Groups There are two main ways of dividing discussion in a group so that people only see what they want to see, and both have advantages and disadvantages. Tags are the most common method used. They have the following advantages, mostly related to flexibility: * Anyone can invent a new tag on the spot and they can organically gain usage * Posts can be given multiple tags. This allows you to filter posts on multiple dimensions (i.e. topic and type of post), as well as having both broad and specific tags. In a well-designed system, this provides maximal control over what posts you want to filter in/filter out. Sub-Groups are less common. They have the following advantages, mostly related to the ability to create norms: * As demonstrated by Reddit, having different sub-groups allows you to set rules that work well for the community. For example, Ask Science can have a norm of encouraging peer-reviewed references, Ask Me Anything can delete posts which are abandoned and Change My View can ensure that top-level posts actually try and change your view. While this would be theoretically possible with tags, this is complicated by the fact that a) posts can have multiple tags so it is unclear how these rules would interact, and b) it is easier to add a tag without ever understanding that there are rules you are supposed to follow, than it is to accidentally post in a group * Even if you can set rules for tags, it is much harder for implicit norms to form for tags than for groups. The fact that a post can have multiple tags really torpedoes the formation of implicit norms, as it prevents there from being a clearly defined space * Sub-groups tend to have much more of a sense of community, indeed they are explicitly a community, vs.
any implicit community that occurs via posting on the same tags * Sub-groups tend to result in less duplication, while there may be five different tags which are all effectively synonyms of each other My position is that the value of people forming separate
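The multi-dimensional filtering that tags enable, and that single-membership sub-groups can't easily replicate, is just set operations; a minimal sketch with invented posts:

```python
posts = [
    {"title": "Alignment open problems", "tags": {"ai", "question"}},
    {"title": "Meetup in Berlin",        "tags": {"meetup"}},
    {"title": "AI timelines survey",     "tags": {"ai", "survey"}},
]

def filter_posts(posts, include, exclude=frozenset()):
    # Keep posts carrying every 'include' tag and none of the 'exclude' tags.
    return [p["title"] for p in posts
            if include <= p["tags"] and not (exclude & p["tags"])]

print(filter_posts(posts, include={"ai"}))                      # both AI posts
print(filter_posts(posts, include={"ai"}, exclude={"survey"}))  # just the question
```

A post in a sub-group system, by contrast, lives in exactly one space, which is precisely what makes per-space rules and implicit norms enforceable.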
f785fc17-0578-4880-a51d-9241686f6729
trentmkelly/LessWrong-43k
LessWrong
Scalable Oversight and Weak-to-Strong Generalization: Compatible approaches to the same problem Thanks to Roger Grosse, Cem Anil, Sam Bowman,  Tamera Lanham, and Mrinank Sharma for helpful discussion and comments on drafts of this post.  Throughout this post, "I" refers to Ansh (Buck, Ryan, and Fabien helped substantially with the drafting and revising of this post, however). Two approaches to addressing weak supervision A key challenge for adequate supervision of future AI systems is the possibility that they’ll be more capable than their human overseers. Modern machine learning, particularly supervised learning, relies heavily on the labeler(s) being more capable than the model attempting to learn to predict labels. We shouldn’t expect this to always work well when the model is more capable than the labeler,[1] and this problem also gets worse with scale – as the AI systems being supervised become even more capable, naive supervision becomes even less effective.  One approach to solving this problem is to try to make the supervision signal stronger, such that we return to the “normal ML” regime. These scalable oversight approaches aim to amplify the overseers of an AI system such that they are more capable than the system itself. It’s also crucial for this amplification to persist as the underlying system gets stronger. This is frequently accomplished by using the system being supervised as a part of a more complex oversight process, such as by forcing it to argue against another instance of itself, with the additional hope that verification is generally easier than generation. Another approach is to make the strong student (the AI system) generalize correctly from the imperfect labels provided by the weak teacher. 
The hope for these weak-to-strong generalization techniques is that we can do better than naively relying on unreliable feedback from a weak overseer and instead access the latent, greater, capabilities that our AI system has, perhaps by a simple modification of the training objective. So, I think of these as two orthogonal approaches to the
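A toy illustration of the weak-to-strong hope (not any of the schemes discussed above, just the underlying phenomenon): a "weak teacher" that flips 30% of labels still lets a simple student recover a near-optimal decision rule, because the student's inductive bias (here, a single threshold) averages out the label noise.

```python
import random

rng = random.Random(0)

# Ground truth: label is 1 iff x > 0. The weak teacher flips 30% of labels.
xs = [rng.uniform(-1, 1) for _ in range(1000)]
weak_labels = [(x > 0) != (rng.random() < 0.3) for x in xs]

def fit_threshold(xs, ys):
    # Strong student: pick the threshold minimizing error on the noisy labels.
    best_t, best_err = 0.0, float("inf")
    for t in xs:
        err = sum((x > t) != y for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

t = fit_threshold(xs, weak_labels)

# Evaluate against the clean ground truth on fresh data.
test_xs = [rng.uniform(-1, 1) for _ in range(1000)]
student_acc = sum((x > t) == (x > 0) for x in test_xs) / len(test_xs)
print(student_acc > 0.7)  # the student beats its 70%-accurate teacher
```

Because the noise is symmetric, the error-minimizing threshold concentrates near the true boundary, so the student generalizes beyond the teacher's reliability. The open question the approaches above tackle is whether anything like this holds when the weak supervisor's errors are systematic rather than random.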
1f549d69-8418-4a58-a6d8-2673b8a76118
trentmkelly/LessWrong-43k
LessWrong
A challenge for AGI organizations, and a challenge for readers (Note: This post is a write-up by Rob of a point Eliezer wanted to broadcast. Nate helped with the editing, and endorses the post’s main points.)   Eliezer Yudkowsky and Nate Soares (my co-workers) want to broadcast strong support for OpenAI’s recent decision to release a blog post ("Our approach to alignment research") that states their current plan as an organization. Although Eliezer and Nate disagree with OpenAI's proposed approach — a variant of "use relatively unaligned AI to align AI" — they view it as very important that OpenAI has a plan and has said what it is. We want to challenge Anthropic and DeepMind, the other major AGI organizations with a stated concern for existential risk, to do the same: come up with a plan (possibly a branching one, if there are crucial uncertainties you expect to resolve later), write it up in some form, and publicly announce that plan (with sensitive parts fuzzed out) as the organization's current alignment plan. Currently, Eliezer’s impression is that neither Anthropic nor DeepMind has a secret plan that's better than OpenAI's, nor a secret plan that's worse than OpenAI's. His impression is that they don't have a plan at all.[1] Having a plan is critically important for an AGI project, not because anyone should expect everything to play out as planned, but because plans force the project to concretely state their crucial assumptions in one place. This provides an opportunity to notice and address inconsistencies, and to notice updates to the plan (and fully propagate those updates to downstream beliefs, strategies, and policies) as new information comes in. It's also healthy for the field to be able to debate plans and think about the big picture, and for orgs to be in some sense "competing" to have the most sane and reasonable plan. 
We acknowledge that there are reasons organizations might want to be abstract about some steps in their plans — e.g., to avoid immunizing people to good-but-weird ideas, in a public docum
0adae875-f539-42f5-9798-1a60762d67ca
trentmkelly/LessWrong-43k
LessWrong
Playing With a Talk Box Sometimes I describe my goal in playing around with different ways of making music as trying to maximize my musical information rate: how many bits per second? With my hands I'm generally playing keyboard or mandolin, and they are very busy; compare to trumpet, where one hand is choosing combinations of only three buttons and the other hand is just holding the horn in position. My feet are generally occupied with drums, which leaves the mouth as the remaining high-bandwidth option. I've played with controlling a synthesizer by whistling and using a breath controller to control volume/expression, but recently I got a talk box. This is a speaker with a tube to pipe the output of an instrument into your mouth, where you adjust the shape of the mouth to filter the sound. They are most commonly used to make an instrument pretend to talk, but you can also use one as an effects pedal. I'm still learning how to make good use of it, but here's where I am right now: Comment via: facebook
83f3dbab-e81b-4160-99fb-36fddf802a58
trentmkelly/LessWrong-43k
LessWrong
Is it worthwhile to save the cord blood and tissue? My wife and I are expecting a baby this May. I only recently learned about this, but apparently there is an option to save the cord blood and tissue during the delivery. This seems potentially very useful, so I'm wondering if anyone has done research into this or has found trustworthy, informative resources on this topic. My current understanding is that this tissue has a lot of stem cells. And the stem cells are useful for all sorts of medical procedures, and are even more likely to be used in the future. But currently we don't have an easy (or cheap?) way to get them. Broad questions I have: * How valuable are the stem cells right now and how valuable are they expected to be in the future? * How hard is it to get stem cells for yourself / your child right now vs in the future? * Will the collected stem cells be only useful for the baby or the mother too? * Can we reasonably expect the cryo companies to last long enough and not go under? * Have you had experience donating it? * Have you had experience storing it?
c2abd0b0-aee1-41b6-8e8d-2ea51b4534da
StampyAI/alignment-research-dataset/special_docs
Other
Robust program equilibrium

Theory and Decision (2019) 86:143–159. https://doi.org/10.1007/s11238-018-9679-3

Caspar Oesterheld. Published online: 23 November 2018. © The Author(s) 2018

Abstract: One approach to achieving cooperation in the one-shot prisoner's dilemma is Tennenholtz's (Games Econ Behav 49(2):363–373, 2004) program equilibrium, in which the players of a game submit programs instead of strategies. These programs are then allowed to read each other's source code to decide which action to take. As shown by Tennenholtz, cooperation is played in an equilibrium of this alternative game. In particular, he proposes that the two players submit the same version of the following program: cooperate if the opponent is an exact copy of this program and defect otherwise. Neither of the two players can benefit from submitting a different program. Unfortunately, this equilibrium is fragile and unlikely to be realized in practice. We thus propose a new, simple program to achieve more robust cooperative program equilibria: cooperate with some small probability ε and otherwise act as the opponent acts against this program. I argue that this program is similar to the tit for tat strategy for the iterated prisoner's dilemma. Both "start" by cooperating and copy their opponent's behavior from "the last round". We then generalize this approach of turning strategies for the repeated version of a game into programs for the one-shot version of a game to other two-player games. We prove that the resulting programs inherit properties of the underlying strategy. This enables them to robustly and effectively elicit the same responses as the underlying strategy for the repeated game.
Keywords: Algorithmic game theory · Program equilibrium · Nash equilibrium · Repeated games

Correspondence: Caspar Oesterheld, caspar.oesterheld@foundational-research.org; caspar.oesterheld@duke.edu. Foundational Research Institute, Berlin, Germany; present address: Duke University, Durham, North Carolina, USA.

Contents: 1 Introduction (144); 2 Preliminaries (145); 2.1 Strategic-form games (145); 2.2 Program equilibrium (146); 2.3 Repeated games (147); 3 Robust program equilibrium in the prisoner's dilemma (151); 4 From iterated game strategies to robust program equilibria (154); 4.1 Halting behavior (155); 4.2 Relationship to the underlying iterated game strategy (156); 4.3 Exploitability (158); 5 Conclusion (158); References (159)

1 Introduction

Much has been written about rationalizing non-Nash equilibrium play in strategic-form games. Most prominently, game theorists have discussed how cooperation may be achieved in the prisoner's dilemma, where mutual cooperation is not a Nash equilibrium but Pareto-superior to mutual defection. One of the most successful approaches is the repetition of a game, and in particular the iterated prisoner's dilemma (Axelrod 2006). Another approach is to introduce commitment mechanisms of some sort. In this paper, we will discuss one particular commitment mechanism: Tennenholtz's (2004) program equilibrium formalism (Sect. 2.2). Here, the idea is that in place of strategies, players submit programs which compute strategies and are given access to each other's source code.
The programs can then encode credible commitments, such as some version of "if you cooperate, I will cooperate". As desired, Tennenholtz (2004, Sect. 3, Theorem 1) shows that mutual cooperation is played in a program equilibrium of the prisoner's dilemma. However, Tennenholtz' equilibrium is very fragile. Essentially, it consists of two copies of a program that cooperates if it faces an exact copy of itself (cf. McAfee 1984; Howard 1988). Even small deviations from that program break the equilibrium. Thus, achieving cooperation in this way is only realistic if the players can communicate beforehand and settle on a particular outcome. Another persuasive critique of this trivial equilibrium is that the model of two players submitting programs is only a metaphor, anyway. In real life, the programs may instead be the result of an evolutionary process (Binmore 1988, pp. 14f.) and Tennenholtz' equilibrium is hard to obtain by such a process. Alternatively, if we view our theory as normative rather than descriptive, we may view the programs themselves as the target audience of our recommendations. This also means that these agents will already have some form of source code—e.g., one that derives and considers the program equilibria of the game—and it is out of their realm of power to change that source code to match some common standard. However, they may still decide on some procedure for thinking about this particular problem in such a way that enables cooperation with other rationally pre-programmed agents.

Noting the fragility of Tennenholtz' proposed equilibrium, it has been proposed to achieve a more robust program equilibrium by letting the programs reason about each other (van der Hoek et al. 2013; Barasz et al. 2014; Critch 2016). For example, Barasz et al. (2014, Sect.
3) propose a program FairBot—variations of which we will see in this paper—that cooperates if Peano arithmetic can prove that the opponent cooperates against FairBot. FairBot cooperates (via Löb's theorem) more robustly against different versions of itself. These proposals are very elegant and certainly deserve further attention. However, their benefits come at the cost of being computationally expensive. In this paper, I thus derive a class of program equilibria that I will argue to be more practical. In the case of the prisoner's dilemma, I propose a program that cooperates with a small probability and otherwise acts as the opponent acts against itself (see Algorithm 1). Doing what the opponent does—à la FairBot—incentivizes cooperation. Cooperating with a small probability allows us to avoid infinite loops that would arise if we merely predicted and copied our opponent's action (see Algorithm 2). This approach to a robust cooperation program equilibrium in the prisoner's dilemma is described in Sect. 3.

We then go on to generalize the construction exemplified in the prisoner's dilemma (see Sect. 4). In particular, we show how strategies for the repeated version of a game can be used to construct good programs for the one-shot version of that game. We show that many of the properties of the underlying strategy of the repeated game carry over to the program for the stage game. We can thus construct "good" programs and program equilibria from "good" strategies and Nash equilibria.

2 Preliminaries

2.1 Strategic-form games

For reference, we begin by introducing some basic terminology and formalism for strategic-form games. For an introduction, see, e.g., Osborne (2004). For reasons that will become apparent later on, we limit our treatment to two-player games. A two-player strategic game $G = (A_1, A_2, u_1, u_2)$ consists of two countable sets of moves $A_i$ and, for both players $i \in \{1, 2\}$, a bounded utility function $u_i : A_1 \times A_2 \to \mathbb{R}$.
A (mixed) strategy for player $i$ is a probability distribution $\pi_i$ over $A_i$. Given a strategy profile $(\pi_1,\pi_2)$, the probability of an outcome $(a_1,a_2)\in A_1\times A_2$ is

$$P(a_1,a_2\mid\pi_1,\pi_2) := \pi_1(a_1)\cdot\pi_2(a_2). \qquad (1)$$

The expected value for player $i$ given that strategy profile is

$$\mathbb{E}[u_i\mid\pi_1,\pi_2] := \sum_{(a_1,a_2)\in A_1\times A_2} P(a_1,a_2\mid\pi_1,\pi_2)\cdot u_i(a_1,a_2).$$

Note that because the utility function is bounded, the sum converges absolutely, such that the order of the action pairs does not affect the sum's value.

2.2 Program equilibrium

We now introduce the concept of program equilibrium, first proposed by Tennenholtz (2004). The main idea is to replace strategies with computer programs that are given access to each other's source code.¹ The programs then give rise to strategies. For any game $G$, we first need to define the set of program profiles PROG($G$) consisting of pairs of programs. The $i$th entry of an element of PROG($G$) must be a program source code $p_i$ that, when interpreted by a function apply, probabilistically maps program profiles² onto $A_i$. We require that for any program profile $(p_1,p_2)\in\mathrm{PROG}(G)$, both programs halt. Otherwise, the profile would not give rise to a well-defined strategy. Whether $p_i$ halts depends on the program $p_{-i}$ it plays against, where (in accordance with convention in game theory) $-:\{1,2\}\to\{1,2\}$, $1\mapsto 2$, $2\mapsto 1$, and we write $-i$ instead of $-(i)$. For example, if $p_i$ runs $\mathrm{apply}(p_{-i},(p_i,p_{-i}))$, i.e., simulates the opponent, then that is fine as long as $p_{-i}$ does not also run $\mathrm{apply}(p_i,(p_i,p_{-i}))$, which would yield an infinite loop. To avoid this mutual dependence, we will generally require that $\mathrm{PROG}(G)=\mathrm{PROG}_1(G)\times\mathrm{PROG}_2(G)$, where $\mathrm{PROG}_i(G)$ consists of programs for player $i$.
Methods of doing this while maintaining expressive power include hierarchies of players—e.g., higher indexed players are allowed to simulate lower indexed ones but not vice versa—hierarchies of programs—programs can only call their opponents with simpler programs as input—requiring programs to have a "plan B" if termination can otherwise not be guaranteed, or allowing each player to only start strictly less than one simulation in expectation. These methods may also be combined. In this paper, we do not assume any particular definition of PROG($G$). However, we assume that programs can perform arbitrary computations as long as these computations are guaranteed to halt regardless of the output of the parts of the code that do depend on the opponent program. We also require that PROG($G$) is compatible with our constructions. We will show our constructions to be so benign in terms of infinite loops that this is not too strong of an assumption.

Given a program profile $(p_1,p_2)$, we receive a strategy profile $(\mathrm{apply}(p_1,(p_1,p_2)),\ \mathrm{apply}(p_2,(p_1,p_2)))$. For any outcome $(a_1,a_2)$ of $G$, we define

$$P(a_1,a_2\mid p_1,p_2) := P(a_1,a_2\mid \mathrm{apply}(p_1,(p_1,p_2)),\,\mathrm{apply}(p_2,(p_1,p_2))) \qquad (2)$$

and for every player $i\in\{1,2\}$, we define

$$\mathbb{E}[u_i\mid p_1,p_2] := \sum_{(a_1,a_2)\in A_1\times A_2} P(a_1,a_2\mid p_1,p_2)\cdot u_i(a_1,a_2). \qquad (3)$$

¹ The equilibrium in its rudimentary form had already been proposed by McAfee (1984) and Howard (1988). At least the idea of viewing players as programs with access to each other's source code has also been discussed by, e.g., Binmore (1987, Sect. 5; 1988) and Anderlini (1990).

² For keeping our notation simple, we will assume that our programs receive their own source code as input in addition to their opponent's. If $\mathrm{PROG}_i(G)$ is sufficiently powerful, then by Kleene's second recursion theorem, programs could also refer to their own source code without receiving it as an input (Cutland 1980, ch. 11).
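In code, the apply convention can be mimicked by treating a program as a function from a program profile to a distribution over moves, so that "apply" is just function application. The following toy sketch (our own naming, with the prisoner's dilemma payoffs of Table 1) computes Eq. (3) for two trivially halting programs:

```python
from itertools import product

A1 = A2 = ["C", "D"]
payoffs = {("C", "C"): (3, 3), ("C", "D"): (1, 4),
           ("D", "C"): (4, 1), ("D", "D"): (2, 2)}

def apply_program(p, profile):
    """apply: interpret program p on a program profile, yielding a distribution over moves."""
    return p(profile)

# Two trivial programs that ignore their input and therefore always halt.
def always_cooperate(profile):
    return {"C": 1.0, "D": 0.0}

def always_defect(profile):
    return {"C": 0.0, "D": 1.0}

def program_expected_utility(i, p1, p2):
    """E[u_i | p1, p2] per Eqs. (2) and (3): derive the strategy profile, then average."""
    s1 = apply_program(p1, (p1, p2))
    s2 = apply_program(p2, (p1, p2))
    return sum(s1[a] * s2[b] * payoffs[(a, b)][i - 1] for a, b in product(A1, A2))
```

Nontrivial programs would inspect (or simulate) the other entry of the profile, which is where the halting requirement discussed above starts to bite.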
For player $i$, we define the (set-valued) best response function as

$$B_i(p_{-i}) = \arg\max_{p_i\in\mathrm{PROG}_i(G)} \mathbb{E}\left[u_i\mid p_i,p_{-i}\right].$$

A program profile $(p_1,p_2)$ is a (weak) program equilibrium of $G$ if for $i\in\{1,2\}$ it is $p_i\in B_i(p_{-i})$.

2.3 Repeated games

Our construction will involve strategies for the repeated version of a two-player game. Thus, for any game $G$, we define $G_\epsilon$ to be the repetition of $G$ with a probability of $\epsilon\in(0,1]$ of ending after each round. Both players of $G_\epsilon$ will be informed only of the last move of their opponent. This differs from the more typical assumption that players have access to the entire history of past moves. We will later see why this deviation is necessary. A strategy $\pi_i$ for player $i$ non-deterministically maps opponent moves, or the information of the lack thereof, onto a move: $\pi_i : \{0\}\cup A_{-i} \rightsquigarrow A_i$. Thus, for $a\in A_i$, $b\in A_{-i}$, $\pi_i(b,a) := \pi_i(b)(a)$ denotes the probability of choosing $a$ given that the opponent played $b$ in the previous round, and $\pi_i(0,a) := \pi_i(0)(a)$ denotes the probability of choosing $a$ in the first round. We call a strategy $\pi_i$ stationary if for all $a\in A_i$, $\pi_i(b,a)$ is constant with respect to $b\in\{0\}\cup A_{-i}$. If $\pi_i$ is stationary, we write $\pi_i(a) := \pi_i(b,a)$.

The probability that the game follows a complete history of moves $h = a_0 b_0 a_1 b_1\cdots a_n b_n$ and then ends is

$$P(h\mid(\pi_1,\pi_2)) := \pi_1(0,a_0)\,\pi_2(0,b_0)\,\epsilon(1-\epsilon)^n \prod_{i=1}^{n}\pi_1(b_{i-1},a_i)\,\pi_2(a_{i-1},b_i). \qquad (4)$$

Note that the moves in the history always come in pairs $a_i b_i$ which are chosen "simultaneously" in response to $b_{i-1}$ and $a_{i-1}$, respectively. The expected value for player $i$ given the strategy profile $(\pi_1,\pi_2)$ is

$$\mathbb{E}[u_i\mid\pi_1,\pi_2] := \sum_{h\in(A_1\cdot A_2)^+} P(h\mid\pi_1,\pi_2)\cdot u_i(h), \qquad (5)$$

where $(A_1\cdot A_2)^+$ is the set of all histories and

$$u_i(a_0 b_0 a_1 b_1\cdots a_n b_n) := \sum_{i=0}^{n} u_i(a_i,b_i). \qquad (6)$$

The lax unordered summation in Eq. 5 is, again, unproblematic because of the absolute convergence of the series, which is a direct consequence of the proof of Lemma 1.
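To make Eq. (4) concrete, here is a toy enumeration (our own choice of strategies) over all prisoner's dilemma histories of at most N+1 rounds. Summed up, these history probabilities must equal 1 − (1−ε)^(N+1), the probability that the game ends within N+1 rounds, a finite-horizon version of the normalization claim proved below:

```python
from itertools import product

A1 = A2 = ["C", "D"]
EPS = 0.3

def tit_for_tat(prev):
    """First round (prev == 0): cooperate; afterwards copy the opponent's last move."""
    if prev == 0:
        return {"C": 1.0, "D": 0.0}
    return {prev: 1.0, ("C" if prev == "D" else "D"): 0.0}

def mostly_defect(prev):
    """A stationary strategy: same distribution regardless of history."""
    return {"C": 0.2, "D": 0.8}

def history_prob(h, pi1, pi2, eps=EPS):
    """Eq. (4): probability that the game follows h = ((a0,b0),...,(an,bn)) and then ends."""
    n = len(h) - 1
    p = pi1(0)[h[0][0]] * pi2(0)[h[0][1]] * eps * (1 - eps) ** n
    for j in range(1, n + 1):
        # Each player reacts to the opponent's move of the previous round.
        p *= pi1(h[j - 1][1])[h[j][0]] * pi2(h[j - 1][0])[h[j][1]]
    return p

# Total probability of all histories lasting at most N+1 rounds.
N = 6
total = sum(history_prob(h, tit_for_tat, mostly_defect)
            for n in range(N + 1)
            for h in product(product(A1, A2), repeat=n + 1))
```

The check works for any pair of strategies, since for each fixed length the move probabilities sum out to 1 and only the geometric stopping factor ε(1−ε)^n remains.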
Note how the organization of the history into pairs of moves allows us to apply the utility function of the stage game in Eq. 6. For player $i$, we define the set-valued best response function as

$$B_i(\pi_{-i}) = \arg\max_{\pi_i:\{0\}\cup A_{-i}\rightsquigarrow A_i} \mathbb{E}\left[u_i\mid\pi_i,\pi_{-i}\right].$$

Analogously, $B^{c}_i(\pi_{-i})$ is the set of responses to $\pi_{-i}$ that are best among the computable ones, $B^{s}_i(\pi_{-i})$ the set of responses to $\pi_{-i}$ that are best among the stationary ones, and $B^{s,c}_i(\pi_{-i})$ the set of responses to $\pi_{-i}$ that are best among stationary computable strategies. A strategy profile $(\pi_1,\pi_2)$ is a (weak) Nash equilibrium of $G_\epsilon$ if for $i\in\{1,2\}$ it is $\pi_i\in B_i(\pi_{-i})$.

We now prove a few lemmas that we will need later on. First, we have suggestively called the values $P(h)$ probabilities, but we have not shown them to satisfy, say, Kolmogorov's axioms. Additivity is not an issue, because we have only defined the probability for atomic events, and non-negativity is obvious from the definition. However, we will also need the fact that the numbers we have called probabilities indeed sum to 1, which requires a few lines to prove.

Lemma 1  Let $G_\epsilon$ be a repeated game and $\pi_1,\pi_2$ be strategies for that game. Then

$$\sum_{h\in(A_1\cdot A_2)^+} P(h\mid\pi_1,\pi_2) = 1.$$

Proof

$$\begin{aligned}
\sum_h P(h\mid\pi_1,\pi_2)
&\overset{\text{Eq. 4}}{=} \sum_{a_0b_0\cdots a_nb_n\in(A_1\cdot A_2)^+} \pi_1(0,a_0)\,\pi_2(0,b_0)\,\epsilon(1-\epsilon)^n \prod_{i=1}^{n}\pi_1(b_{i-1},a_i)\,\pi_2(a_{i-1},b_i)\\
&\overset{\text{abs. conv.}}{=} \epsilon\sum_{n=0}^{\infty}(1-\epsilon)^n \sum_{a_0b_0\cdots a_nb_n\in(A_1A_2)^{n+1}} \pi_1(0,a_0)\,\pi_2(0,b_0)\prod_{i=1}^{n}\pi_1(b_{i-1},a_i)\,\pi_2(a_{i-1},b_i)\\
&= \epsilon\sum_{n=0}^{\infty}(1-\epsilon)^n \sum_{a_0b_0\in A_1A_2}\pi_1(0,a_0)\,\pi_2(0,b_0) \sum_{a_1b_1\in A_1A_2}\pi_1(b_0,a_1)\,\pi_2(a_0,b_1)\cdots \sum_{a_nb_n\in A_1A_2}\pi_1(b_{n-1},a_n)\,\pi_2(a_{n-1},b_n)\\
&= \epsilon\sum_{n=0}^{\infty}(1-\epsilon)^n\\
&\overset{\text{geom. series}}{=} 1.
\end{aligned}$$

For seeing why the second-to-last equation is true, notice that the innermost sum is 1. Thus, the next sum is 1 as well, and so on.
Since the ordering in the right-hand side of the first line is lax, and because only the second line is known to converge absolutely, the re-ordering is best understood from right to left. The last step uses the well-known formula $\sum_{k=0}^{\infty}x^k = 1/(1-x)$ for the geometric series. $\square$

For any game $G_\epsilon$, $k\in\mathbb{N}_+$, $a\in A_1$, $b\in A_2$ and strategies $\pi_1$ and $\pi_2$ for $G_\epsilon$, we define

$$P_{k,G_\epsilon}(a,b\mid\pi_1,\pi_2) := (1-\epsilon)^k \sum_{a_0b_0\cdots a_{k-1}b_{k-1}\in(A_1A_2)^k} \pi_1(b_{k-1},a)\,\pi_2(a_{k-1},b)\,\pi_1(0,a_0)\,\pi_2(0,b_0) \prod_{j=1}^{k-1}\pi_1(b_{j-1},a_j)\,\pi_2(a_{j-1},b_j). \qquad (7)$$

For $k=0$, we define $P_{0,G_\epsilon}(a,b\mid\pi_1,\pi_2) := \pi_1(0,a)\cdot\pi_2(0,b)$. Intuitively speaking, $P_{k,G_\epsilon}(a,b\mid\pi_1,\pi_2)$ is the probability of reaching at least round $k$ and that $(a,b)$ is played in that round. With this,

$$\sum_{ab\in A_1A_2} P_{k,G_\epsilon}(a,b\mid\pi_1,\pi_2)\,u_i(a,b)$$

should be the expected utility from the $k$th round (where not getting to the $k$th round counts as 0). This suggests a new way of calculating expected utilities on a more round-by-round basis.

Lemma 2  Let $G_\epsilon$ be a game, and let $\pi_1,\pi_2$ be strategies for that game. Then

$$\mathbb{E}[u_i\mid\pi_1,\pi_2] = \sum_{k=0}^{\infty}\sum_{ab\in A_1A_2} P_{k,G_\epsilon}(a,b\mid\pi_1,\pi_2)\,u_i(a,b).$$

Proof

$$\begin{aligned}
\mathbb{E}_{G_\epsilon}[u_i\mid\pi_1,\pi_2] &= \sum_{h\in(A_1A_2)^+} P(h\mid\pi_1,\pi_2)\,u_i(h)\\
&\overset{\text{Eqs. 4, 6}}{=} \sum_{a_0b_0\cdots a_nb_n\in(A_1A_2)^+} \pi_1(0,a_0)\,\pi_2(0,b_0)\,\epsilon(1-\epsilon)^n \Biggl(\prod_{j=1}^{n}\pi_1(b_{j-1},a_j)\,\pi_2(a_{j-1},b_j)\Biggr)\sum_{k=0}^{n}u_i(a_k,b_k)\\
&= \sum_{k=0}^{\infty}\sum_{a_0b_0\cdots a_nb_n\in(A_1A_2)^{\geq k+1}} \pi_1(0,a_0)\,\pi_2(0,b_0)\,\epsilon(1-\epsilon)^n \Biggl(\prod_{j=1}^{n}\pi_1(b_{j-1},a_j)\,\pi_2(a_{j-1},b_j)\Biggr)u_i(a_k,b_k)\\
&= \sum_{k=0}^{\infty}\sum_{a_0b_0\cdots a_kb_k\in(A_1A_2)^{k+1}}\ \sum_{a_{k+1}b_{k+1}\cdots a_nb_n\in(A_1A_2)^{*}} \pi_1(0,a_0)\,\pi_2(0,b_0)\,\epsilon(1-\epsilon)^n\,u_i(a_k,b_k)\\
&\qquad\cdot\Biggl(\prod_{j=1}^{k}\pi_1(b_{j-1},a_j)\,\pi_2(a_{j-1},b_j)\Biggr)\Biggl(\prod_{j=k+1}^{n}\pi_1(b_{j-1},a_j)\,\pi_2(a_{j-1},b_j)\Biggr)\\
&= \sum_{k=0}^{\infty}\sum_{a_0b_0\cdots a_kb_k\in(A_1A_2)^{k+1}} \pi_1(0,a_0)\,\pi_2(0,b_0)\,(1-\epsilon)^k\,u_i(a_k,b_k)\Biggl(\prod_{j=1}^{k}\pi_1(b_{j-1},a_j)\,\pi_2(a_{j-1},b_j)\Biggr)\\
&\qquad\cdot\sum_{a_{k+1}b_{k+1}\cdots a_nb_n\in(A_1A_2)^{*}} \epsilon(1-\epsilon)^{n-k}\Biggl(\prod_{j=k+1}^{n}\pi_1(b_{j-1},a_j)\,\pi_2(a_{j-1},b_j)\Biggr)\\
&\overset{\text{Lemma 1}}{=} \sum_{k=0}^{\infty}\sum_{a_0b_0\cdots a_kb_k\in(A_1A_2)^{k+1}} \pi_1(0,a_0)\,\pi_2(0,b_0)\,(1-\epsilon)^k\,u_i(a_k,b_k)\Biggl(\prod_{j=1}^{k}\pi_1(b_{j-1},a_j)\,\pi_2(a_{j-1},b_j)\Biggr)\\
&\overset{\text{Eq. 7}}{=} \sum_{k=0}^{\infty}\sum_{a_kb_k\in A_1A_2} P_k(a_k,b_k\mid\pi_1,\pi_2)\,u_i(a_k,b_k). \qquad\square
\end{aligned}$$

To find the probability of player $i$ choosing $a$ in round $k$, we usually have to calculate the probabilities of all actions in all previous rounds. After all, player $i$ reacts to player $-i$'s previous move, who in turn reacts to player $i$'s move in round $k-2$, and so on. This is what makes Eq. 7 so long. However, imagine that player $-i$ uses a stationary strategy. This, of course, means that player $-i$'s probability distribution over moves in round $k$ (assuming the game indeed reaches round $k$) can be computed directly as $\pi_{-i}(b)$. Player $i$'s distribution over moves in round $k$ is almost as simple to calculate, because it only depends on player $-i$'s distribution over moves in round $k-1$, which can also be calculated directly. We hence get the following lemma.

Lemma 3  Let $G_\epsilon$ be a game, let $\pi_i$ be any strategy for $G_\epsilon$ and let $\pi_{-i}$ be a stationary strategy for $G_\epsilon$. Then, for all $k\in\mathbb{N}_+$, it is

$$P_{k,G_\epsilon}(a,b\mid\pi_i,\pi_{-i}) = (1-\epsilon)^k \sum_{b'\in A_{-i}} \pi_{-i}(b')\,\pi_{-i}(b)\,\pi_i(b',a).$$
Proof  We conduct our proof by induction over $k$. For $k=1$, it is

$$\begin{aligned}
P_1(a,b\mid\pi_i,\pi_{-i}) &\overset{\text{Eq. 7}}{=} (1-\epsilon)\sum_{a_0b_0} \pi_i(b_0,a)\,\pi_{-i}(a_0,b)\,\pi_i(0,a_0)\,\pi_{-i}(0,b_0)\\
&= (1-\epsilon)\sum_{b_0} \pi_i(b_0,a)\,\pi_{-i}(b)\,\pi_{-i}(b_0) \sum_{a_0}\pi_i(0,a_0)\\
&= (1-\epsilon)\sum_{b_0} \pi_i(b_0,a)\,\pi_{-i}(b)\,\pi_{-i}(b_0).
\end{aligned}$$

If the lemma is true for $k$, it is also true for $k+1$:

$$\begin{aligned}
P_{k+1}(a,b\mid\pi_i,\pi_{-i}) &= (1-\epsilon)^{k+1}\sum_{a_0b_0\cdots a_kb_k\in(A_iA_{-i})^{k+1}} \pi_i(b_k,a)\,\pi_{-i}(a_k,b)\,\pi_i(0,a_0)\,\pi_{-i}(0,b_0)\prod_{j=1}^{k}\pi_i(b_{j-1},a_j)\,\pi_{-i}(a_{j-1},b_j)\\
&\overset{\text{Eq. 7}}{=} (1-\epsilon)\sum_{a_kb_k} P_k(a_k,b_k\mid\pi_i,\pi_{-i})\,\pi_i(b_k,a)\,\pi_{-i}(b)\\
&\overset{\text{I.H.}}{=} (1-\epsilon)^{k+1}\sum_{a_kb_k}\sum_{b'} \pi_{-i}(b')\,\pi_{-i}(b_k)\,\pi_{-i}(b)\,\pi_i(b',a_k)\,\pi_i(b_k,a)\\
&= (1-\epsilon)^{k+1}\sum_{b_k} \pi_{-i}(b_k)\,\pi_{-i}(b)\,\pi_i(b_k,a)\sum_{b'}\pi_{-i}(b')\sum_{a_k}\pi_i(b',a_k)\\
&= (1-\epsilon)^{k+1}\sum_{b_k} \pi_{-i}(b_k)\,\pi_{-i}(b)\,\pi_i(b_k,a). \qquad\square
\end{aligned}$$

3 Robust program equilibrium in the prisoner's dilemma

Discussions of the program equilibrium have traditionally used the well-known prisoner's dilemma (or trivial variations thereof) as an example to show how the program equilibrium rationalizes cooperation where the Nash equilibrium fails (e.g., Tennenholtz 2004, Sect. 3; McAfee 1984; Howard 1988; Barasz et al. 2014). The present paper is no exception. In this section, we will present our main idea using the example of the prisoner's dilemma; the next section gives the more general construction and proofs of properties of that construction. For reference, the payoff matrix of the prisoner's dilemma is given in Table 1.

Table 1  Payoff matrix for the prisoner's dilemma

                         Player 2
Player 1       Cooperate     Defect
Cooperate      3, 3          1, 4
Defect         4, 1          2, 2

I propose to use the following decision rule: with a probability of $\epsilon\in(0,1]$, cooperate. Otherwise, act as your opponent plays against you. I will call this strategy εGroundedFairBot. A description of the algorithm in pseudo-code is given in Algorithm 1.³

Data: program profile (p1, p2)
Result: action ai ∈ {C, D}
1 if sample(0, 1) < ε then
2     return C
3 end
4 return sample(apply(p−i, (p1, p2)))

Algorithm 1: The εGroundedFairBot for player i. The program makes use of a function sample which samples uniformly from the given interval or probability distribution. It is assumed that ε is computable.

The proposed program combines two main ideas. First, it is a version of FairBot (Barasz et al. 2014). That is, it chooses the action that its opponent would play against itself. As player −i would like player i to cooperate, εGroundedFairBot thus incentivizes cooperation, as long as ε < 1/2. In this, it resembles the tit for tat strategy in the iterated prisoner's dilemma (IPD) (famously discussed by Axelrod 2006), which takes an empirical approach to behaving as the opponent behaves against itself. Here, the probability of the game ending must be sufficiently small (again, less than 1/2 for the given payoffs) in each round for the threat of punishment and the allure of reward to be persuasive reasons to cooperate. The second main idea behind εGroundedFairBot is that it cooperates with some small probability ε. First and foremost, this avoids running into the infinite loop that a naive implementation of FairBot—see Algorithm 2—runs into when playing against opponents who, in turn, try to simulate FairBot. Note, again, the resemblance with the tit for tat strategy in the iterated prisoner's dilemma, which cooperates when no information about the opponent's strategy is available.

Data: program profile (p1, p2)
Result: action ai ∈ {C, D}
1 return sample(apply(p−i, (p1, p2)))

Algorithm 2: The NaiveFairBot for player i.

To better understand how εGroundedFairBot works, consider its behavior against a few different opponents.
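Algorithms 1 and 2 can be rendered as a small runnable sketch (our own modeling choices: a program is a Python function receiving the two source codes as callables, and Python's call stack stands in for simulation), which makes it easy to check behaviors against different opponents empirically:

```python
import random

EPS = 0.25
random.seed(0)

def eps_grounded_fairbot(me, opp):
    """Algorithm 1: with probability EPS cooperate, else play what the opponent plays against us."""
    if random.random() < EPS:
        return "C"
    return opp(opp, me)  # simulate the opponent on the same matchup

def naive_fairbot(me, opp):
    """Algorithm 2: always play what the opponent plays against us (never halts against itself)."""
    return opp(opp, me)

def play(p1, p2):
    """One round of the one-shot game with source-code access, modeled as mutual calls."""
    return p1(p1, p2), p2(p2, p1)

# The mutual recursion bottoms out as soon as some simulation level draws < EPS,
# and the resulting "C" propagates back up, so both players always cooperate.
results = [play(eps_grounded_fairbot, naive_fairbot) for _ in range(200)]
```

The recursion terminates with probability 1 because each level of simulation independently stops with probability ε, mirroring the argument made for the program's halting behavior later in the paper.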
When εGroundedFairBot faces NaiveFairBot, then both cooperate. For illustration, a dynamic call graph of their interaction is given in Fig. 1.

Fig. 1  Dynamic call diagram describing how p1 = εGroundedFairBot chooses when playing against p2 = NaiveFairBot.

³ This program was proposed by Abram Demski in a conversation discussing similar (but worse) ideas of mine. It has also been proposed by Jessica Taylor at https://agentfoundations.org/item?id=524, though in a slightly different context.

It is left as an exercise for the reader to analyze εGroundedFairBot's behavior against other programs, such as another instance of εGroundedFairBot or a variation of εGroundedFairBot that defects rather than cooperates with probability ε.

When playing against strategies that are also based on simulating their opponent, we could think of εGroundedFairBot as playing a "mental IPD". If the opponent program decides whether to cooperate, it has to consider that it might currently only be simulated. Thus, it will choose an action with an eye toward gauging a favorable reaction from εGroundedFairBot one recursion level up. Cooperation in the first "round" is an attempt to steer the mental IPD into a favorable direction, at the cost of cooperating if sample(0, 1) < ε already occurs in the first round.

In addition to proving theoretical results (as done below), it would be useful to test εGroundedFairBot "in practice", i.e., against other proposed programs for the prisoner's dilemma with access to one another's source code. I only found one informal tournament for this version of the prisoner's dilemma. It was conducted in 2013 by Alex Mennen on the online forum and community blog LessWrong.⁴ In the original set of submissions, εGroundedFairBot would have scored 6th out of 21.
The reason why it is not a serious contender for first place is that it does not take advantage of the many exploitable submissions (such as bots that decide without looking at their opponent's source code). Once one removes the bottom 9 programs, εGroundedFairBot scores second place. If one continues this process of eliminating unsuccessful programs for another two rounds, εGroundedFairBot ends up among the four survivors that cooperate with each other.⁵

⁴ The tournament was announced at https://www.lesserwrong.com/posts/BY8kvyuLzMZJkwTHL/prisoner-s-dilemma-with-visible-source-code-tournament and the results at https://www.lesserwrong.com/posts/QP7Ne4KXKytj4Krkx/prisoner-s-dilemma-tournament-results-0 .

⁵ For a more detailed analysis and report on my methodology, see https://casparoesterheld.files.wordpress.com/2018/02/transparentpdwriteup.pdf .

4 From iterated game strategies to robust program equilibria

We now generalize the construction from the previous section. Given any computable strategy $\pi_i$ for a sequential game $G_\epsilon$, I propose the following program: with a small probability $\epsilon$, sample from $\pi_i(0)$. Otherwise, act how $\pi_i$ would respond (in the sequential game $G_\epsilon$) to the action that the opponent takes against this program. I will call this program εGroundedπiBot. A description of the program in pseudo-code is given in Algorithm 3. As a special case, εGroundedFairBot arises from εGroundedπiBot by letting $\pi_i$ be tit for tat.

Data: program profile (p1, p2)
Result: action ai ∈ Ai
1 if sample(0, 1) < ε then
2     return sample(πi(0))
3 end
4 return sample(πi(sample(apply(p−i, (p1, p2)))))

Algorithm 3: The εGroundedπiBot for player i. The program makes use of a function sample which samples uniformly from a given interval or a given probability distribution. It is assumed that πi and ε are computable.

Again, our proposed program combines two main ideas.
First, εGroundedπiBot responds to how the opponent plays against εGroundedπiBot. In this, it resembles the behavior of $\pi_i$ in $G_\epsilon$. As we will see, this leads εGroundedπiBot to inherit many of $\pi_i$'s properties. In particular, if (like tit for tat) $\pi_i$ uses some mechanism to incentivize its opponent to converge on a desired action, then εGroundedπiBot incentivizes that action in a similar way. Second, it—again—terminates immediately with some small probability $\epsilon$ to avoid the infinite loops that NaiveπiBot—see Algorithm 4—runs into. Playing $\pi_i(0)$ in particular is partly motivated by the "mental $G_\epsilon$" that εGroundedπiBot plays against some opponents (such as εGroundedπ−iBots or Naiveπ−iBots). The other motivation is to make the relationship between εGroundedπiBot and $\pi_i$ cleaner. In terms of the strategies that are optimal against εGroundedπiBot, the choice of that constant action cannot matter much if $\epsilon$ is small. Consider, again, the analogy with tit for tat. Even if tit for tat started with defection, one should still attempt to cooperate with it. In practice, however, it has turned out that the "nice" version of tit for tat (and nice strategies in general) are more successful (Axelrod 2006, ch. 2). The transparency in the program equilibrium may render such "signals of cooperativeness" less important—e.g., against programs like Barasz et al.'s (2014, Sect. 3) FairBot. Nevertheless, it seems plausible that—if only because of mental $G_\epsilon$-related considerations—in transparent games the "initial" actions matter as well.

Data: program profile (p1, p2)
Result: action ai ∈ Ai
1 return sample(πi(sample(apply(p−i, (p1, p2)))))

Algorithm 4: The NaiveπiBot for player i. It is assumed that πi is computable.

We now ground these two intuitions formally. First, we discuss εGroundedπiBot's halting behavior.
We then show that, in some sense, εGroundedπiBot behaves in $G$ like $\pi_i$ does in $G_\epsilon$.

4.1 Halting behavior

For a program to be a viable option in the "transparent" version of $G$, it should halt against a wide variety of opponents. Otherwise, it may be excluded from PROG($G$) in our formalism. Besides, it should be efficient enough to be practically useful. As with εGroundedFairBot, the main reason why εGroundedπiBot is benign in terms of the risk of infinite loops is that it generates strictly less than one new function call in expectation and never starts more than one. While we have no formal machinery for analyzing the "loop risk" of a program, it is easy to show the following theorem.

Theorem 4  Let $\pi_i$ be a computable strategy for a game $G_\epsilon$. Furthermore, let $p_{-i}$ be any program (not necessarily in $\mathrm{PROG}_{-i}(G)$) that calls $\mathrm{apply}(p_i,(p_i,p_{-i}))$ at most once and halts with probability 1 if apply halts with probability 1. Then εGroundedπiBot and $p_{-i}$ halt against each other with probability 1, and the expected number of steps required for executing εGroundedπiBot is at most

$$T_{\pi_i} + \left(T_{\pi_i} + T_{p_{-i}}\right)\frac{1-\epsilon}{\epsilon}, \qquad (8)$$

where $T_{\pi_i}$ is the maximum number of steps to sample from $\pi_i$, and $T_{p_{-i}}$ is the maximum number of steps needed to sample from $\mathrm{apply}(p_{-i},(\text{εGroundedπiBot},p_{-i}))$, excluding the steps needed to execute $\mathrm{apply}(\text{εGroundedπiBot},(\text{εGroundedπiBot},p_{-i}))$.

Proof  It suffices to discuss the cases in which $p_{-i}$ calls $p_i$ once with certainty, because if any of our claims were refuted by some program $p_{-i}$, they would also be refuted by a version of that program that calls $p_i$ once with certainty. If $p_{-i}$ calls $p_i$ once with certainty, then the dynamic call graphs of both εGroundedπiBot and $p_{-i}$ look similar to the one drawn in Fig. 1. In particular, it only contains one infinite path, and that path has a probability of at most $\lim_{i\to\infty}(1-\epsilon)^i = 0$.
For the time complexity, we can consider the dynamic call graph as well. The policy $\pi_i$ has to be executed at least once (with probability $\epsilon$ with the input 0, and with probability $1-\epsilon$ against an action sampled from $\mathrm{apply}(p_{-i},(\text{εGroundedπiBot},p_{-i}))$). With a probability of $1-\epsilon$, we also have to execute the non-simulation part of $p_{-i}$ and, for a second time, $\pi_i$. And so forth. The expected number of steps to execute εGroundedπiBot is thus

$$T_{\pi_i} + \sum_{j=1}^{\infty}(1-\epsilon)^j\left(T_{\pi_i} + T_{p_{-i}}\right),$$

which can be shown to be equal to the term in (8) by using the well-known formula $\sum_{k=0}^{\infty}x^k = 1/(1-x)$ for the geometric series. $\square$

Note that this argument would not work if there were more than two players, or if the strategy for the iterated game were to depend on more than just the last opponent move, because in these cases the natural extension of εGroundedπiBot would have to make multiple calls to other programs. Indeed, this is one of the reasons why the present paper only discusses two-player games and iterated games with such short-term memory. Whether a similar result can nonetheless be obtained for more than 2 players and strategies that depend on the entire past history is left to future research.

As special cases, for any strategy $\pi_{-i}$, εGroundedπiBot terminates against εGroundedπ−iBot and Naiveπ−iBot (and these programs in turn terminate against εGroundedπiBot). The latter is especially remarkable. Our εGroundedπiBot terminates and leads the opponent to terminate even if the opponent is so careless that it would not even terminate against a version of itself or, in our formalism, if $\mathrm{PROG}_{-i}(G)$ gives the opponent more leeway to work with simulations.
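The geometric accounting above can be checked empirically. In the sketch below (our own instrumentation, with πi = tit for tat and the illustrative choice ε = 1/2), we count how often the underlying strategy is evaluated when εGroundedπiBot plays a naive simulator; the expected count is the geometric sum Σ_{j≥0}(1−ε)^j = 1/ε:

```python
import random

EPS = 0.5
random.seed(1)

calls = 0  # counts how often the underlying strategy pi_i is evaluated

def tft(prev):
    """Tit for tat: cooperate in the first round (prev == 0), else copy the last opponent move."""
    global calls
    calls += 1
    return "C" if prev == 0 else prev

def eps_grounded_tft_bot(me, opp):
    """Algorithm 3 with pi_i = tit for tat, i.e., an εGroundedFairBot."""
    if random.random() < EPS:
        return tft(0)
    return tft(opp(opp, me))

def naive_bot(me, opp):
    """Naive pi_i Bot: simply simulate the opponent against ourselves."""
    return opp(opp, me)

# Average the number of strategy evaluations over many independent runs.
n = 20000
total = 0
for _ in range(n):
    calls = 0
    eps_grounded_tft_bot(eps_grounded_tft_bot, naive_bot)
    total += calls
mean_calls = total / n
```

Each activation of the grounded bot evaluates the strategy exactly once and spawns a further activation with probability 1−ε, so the count is geometrically distributed with mean 1/ε, in line with the step bound of Theorem 4.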
4.2 Relationship to the underlying iterated game strategy

Theorem 5  Let $G$ be a game, $\pi_i$ be a strategy for player $i$ in $G_\epsilon$, $p_i =$ εGroundedπiBot and $p_{-i}\in\mathrm{PROG}_{-i}(G)$ be any opponent program. We define $\pi_{-i} = \mathrm{apply}(p_{-i},(p_i,p_{-i}))$, which makes $\pi_{-i}$ a strategy for player $-i$ in $G_\epsilon$. Then

$$\mathbb{E}_G\left[u_i\mid p_i,p_{-i}\right] = \epsilon\,\mathbb{E}_{G_\epsilon}\left[u_i\mid\pi_i,\pi_{-i}\right].$$

Proof  We separately transform the two expected values in the equation that is to be proven and then notice that they only differ by a factor $\epsilon$:

$$\begin{aligned}
\mathbb{E}_{G_\epsilon}\left[u_i\mid\pi_i,\pi_{-i}\right]
&\overset{\text{Lemma 2}}{=} \sum_{k=0}^{\infty}\sum_{ab\in A_iA_{-i}} P_{k,G_\epsilon}(a,b\mid\pi_i,\pi_{-i})\,u_i(a,b)\\
&\overset{\text{Lemma 3}}{=} \sum_{ab\in A_iA_{-i}}\pi_i(0,a)\,\pi_{-i}(b)\,u_i(a,b) + \sum_{k=1}^{\infty}\sum_{ab\in A_iA_{-i}}(1-\epsilon)^k\sum_{b'\in A_{-i}}\pi_{-i}(b')\,\pi_i(b',a)\,\pi_{-i}(b)\,u_i(a,b)\\
&\overset{\text{abs. conv.}}{=} \sum_{ab\in A_iA_{-i}}\pi_i(0,a)\,\pi_{-i}(b)\,u_i(a,b) + \sum_{ab\in A_iA_{-i}}\sum_{b'\in A_{-i}}\pi_{-i}(b')\,\pi_i(b',a)\,\pi_{-i}(b)\,u_i(a,b)\sum_{k=1}^{\infty}(1-\epsilon)^k\\
&\overset{\text{geom. series}}{=} \sum_{ab\in A_iA_{-i}}\pi_i(0,a)\,\pi_{-i}(b)\,u_i(a,b) + \frac{1-\epsilon}{\epsilon}\sum_{ab\in A_iA_{-i}}\sum_{b'\in A_{-i}}\pi_{-i}(b')\,\pi_i(b',a)\,\pi_{-i}(b)\,u_i(a,b).
\end{aligned}$$

The second-to-last step uses absolute convergence to reorder the sum signs. The last step uses the well-known formula $\sum_{k=0}^{\infty}x^k = 1/(1-x)$ for the geometric series. Onto the other expected value:

$$\begin{aligned}
\mathbb{E}_G\left[u_i\mid p_i,p_{-i}\right]
&\overset{\text{Eqs. 3, 2, 1}}{=} \sum_{ab\in A_iA_{-i}}\mathrm{apply}(p_i,(p_i,p_{-i}),a)\,\mathrm{apply}(p_{-i},(p_i,p_{-i}),b)\,u_i(a,b)\\
&\overset{\text{defs. of } p_i,\,\pi_{-i}}{=} \sum_{ab\in A_iA_{-i}}\Biggl(\epsilon\,\pi_i(0,a) + (1-\epsilon)\sum_{b'\in A_{-i}}\pi_{-i}(b')\,\pi_i(b',a)\Biggr)\pi_{-i}(b)\,u_i(a,b)\\
&= \epsilon\sum_{ab\in A_iA_{-i}}\pi_i(0,a)\,\pi_{-i}(b)\,u_i(a,b) + (1-\epsilon)\sum_{ab\in A_iA_{-i}}\sum_{b'\in A_{-i}}\pi_{-i}(b')\,\pi_i(b',a)\,\pi_{-i}(b)\,u_i(a,b).
\end{aligned}$$
Here, $\mathrm{apply}(p_i,(p_i,p_{-i}),a) := \mathrm{apply}(p_i,(p_i,p_{-i}))(a)$. The hypothesis follows immediately. $\square$

Note that the program side of the proof does not involve any "mental $G_\epsilon$". Using Theorem 5, we can easily prove a number of property transfers from $\pi_i$ to εGroundedπiBot.

Corollary 6  Let $G$ be a game. Let $\pi_i$ be a computable strategy for player $i$ in $G_\epsilon$ and let $p_i =$ εGroundedπiBot.
1. If $p_{-i}\in B_{-i}(p_i)$, then $\mathrm{apply}(p_{-i},(p_i,p_{-i}))\in B^{s,c}_{-i}(\pi_i)$.
2. If $\pi_{-i}\in B^{s,c}_{-i}(\pi_i)$ and $\mathrm{apply}(p_{-i},(p_i,p_{-i})) = \pi_{-i}$, then $p_{-i}\in B_{-i}(p_i)$.

Proof  Both 1. and 2. follow directly from Theorem 5. $\square$

Intuitively speaking, Corollary 6 shows that $\pi_i$ and εGroundedπiBot provoke the same best responses. Note that best responses in the program game only correspond to best stationary computable best responses in the repeated game. The computability requirement is due to the fact that programs cannot imitate incomputable best responses. The corresponding strategies for the repeated game further have to be stationary because εGroundedπiBot only incentivizes opponent behavior for a single situation, namely the situation of playing against εGroundedπiBot.

As a special case of Corollary 6, if $\epsilon < 1/2$, the best response to εGroundedFairBot is a program that cooperates against εGroundedFairBot, because in an IPD with a probability of ending of less than 1/2, a program that cooperates is the best (stationary computable) response to tit for tat.

Corollary 7  Let $G$ be a game. If $(\pi_1,\pi_2)$ is a Nash equilibrium of $G_\epsilon$ and $\pi_1$ and $\pi_2$ are computable, then (εGroundedπ1Bot, εGroundedπ2Bot) is a program equilibrium of $G$.

Proof  Follows directly from Theorem 5. $\square$
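Theorem 5 lends itself to a direct numeric sanity check. In the sketch below (our own parameter choices: ε = 0.2, πi = tit for tat, and a hypothetical stationary opponent program that cooperates with probability 0.7), the left-hand side is computed from the one-shot program semantics and the right-hand side from the round-by-round series of Lemma 2 combined with Lemma 3; the two agree:

```python
from itertools import product

A = ["C", "D"]
payoffs = {("C", "C"): (3, 3), ("C", "D"): (1, 4),
           ("D", "C"): (4, 1), ("D", "D"): (2, 2)}
EPS = 0.2
Q = 0.7  # assumed stationary opponent: cooperate with probability Q

def tft(prev):
    """Tit for tat as a distribution over moves; prev == 0 means first round."""
    move = "C" if prev == 0 else prev
    return {a: (1.0 if a == move else 0.0) for a in A}

pi2 = {"C": Q, "D": 1 - Q}  # strategy induced by the stationary opponent program

# Left-hand side: one-shot expected utility of the grounded tit-for-tat program.
# With prob. EPS it samples tft(0); otherwise it replies to a sample of the opponent.
p1_dist = {a: EPS * tft(0)[a] + (1 - EPS) * sum(pi2[b] * tft(b)[a] for b in A)
           for a in A}
lhs = sum(p1_dist[a] * pi2[b] * payoffs[(a, b)][0] for a, b in product(A, A))

# Right-hand side: eps times the repeated-game value, summed round by round,
# using P_0 separately and the stationary-opponent formula of Lemma 3 for k >= 1.
rhs_series = sum(tft(0)[a] * pi2[b] * payoffs[(a, b)][0] for a, b in product(A, A))
for k in range(1, 400):  # the (1-EPS)**k tail beyond k = 400 is negligible
    rhs_series += sum((1 - EPS) ** k * pi2[b2] * pi2[b] * tft(b2)[a] * payoffs[(a, b)][0]
                      for a, b, b2 in product(A, A, A))
rhs = EPS * rhs_series
```

For these particular numbers both sides evaluate to 2.64, and the agreement holds for any stationary opponent, which is what the theorem asserts in general.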
4.3 Exploitability

Besides forming an equilibrium against many opponents (including itself) and incentivizing cooperation, another important reason for tit for tat's success is that it is "not very exploitable" (Axelrod 2006). That is, when playing against tit for tat, it is impossible to receive a much higher reward than tit for tat itself. We now show that (in)exploitability transfers from strategies $\pi_i$ to εGroundedπiBots.

We call a game $G=(A_1,A_2,u_1,u_2)$ symmetric if $A_1=A_2$ and $u_1(a,b)=u_2(b,a)$ for all $a\in A_1$ and $b\in A_2$. If $G$ is symmetric, we call a strategy $\pi_i$ for $G_\epsilon$ $N$-exploitable in $G_\epsilon$ for an $N\in\mathbb{R}_{\geq 0}$ if there exists a $\pi_{-i}$ such that

$$\mathbb{E}\left[u_{-i}\mid\pi_{-i},\pi_i\right] > \mathbb{E}\left[u_i\mid\pi_{-i},\pi_i\right] + N.$$

We call $\pi_i$ $N$-inexploitable if it is not $N$-exploitable. Analogously, in a symmetric game $G$ we call a program $p_i$ $N$-exploitable for an $N\in\mathbb{R}_{\geq 0}$ if there exists a $p_{-i}$ such that

$$\mathbb{E}\left[u_{-i}\mid p_{-i},p_i\right] > \mathbb{E}\left[u_i\mid p_{-i},p_i\right] + N.$$

We call $p_i$ $N$-inexploitable if it is not $N$-exploitable.

Corollary 8  Let $G$ be a game and $\pi_i$ be an $N$-inexploitable strategy for $G_\epsilon$. Then εGroundedπiBot is $\epsilon N$-inexploitable.

Proof  Follows directly from Theorem 5. $\square$

Notice that if—like tit for tat in the IPD—$\pi_i$ is $N$-inexploitable in $G_\epsilon$ for all $\epsilon$, then we can make εGroundedπiBot arbitrarily close to 0-inexploitable by decreasing $\epsilon$.

5 Conclusion

In this paper, we gave the following recipe for constructing a program equilibrium for a given two-player game:

1. Construct the game's corresponding repeated game. In particular, we consider repeated games in which each player can only react to the opponent's move in the previous round (rather than the entire history of previous moves by both players) and the game ends with some small probability $\epsilon$ after each round.
2. Construct a Nash equilibrium for that iterated game.
3.
Convert each of the strategies into a computer program that works as follows (see Algorithm 3): with probability $\epsilon$, do what the strategy does in the first round. With probability $1-\epsilon$, apply the opponent program to this program; then do what the underlying strategy would reply to the opponent program's output.

The result is a program equilibrium which we have argued is more robust than the equilibria described by Tennenholtz (2004). More generally, we have shown that translating an individual's strategy for the repeated game into a program for the stage game in the way described in step 3 retains many of the properties of the strategy for the repeated game. Thus, it seems that "good" programs to submit may be derived from "good" strategies for the repeated game.

Acknowledgements  I am indebted to Max Daniel and an anonymous referee for many helpful comments.

Open Access  This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References

Anderlini, L. (1990). Some notes on Church's thesis and the theory of games. Theory and Decision, 29(1), 19–52.
Axelrod, R. (2006). The evolution of cooperation. New York: Basic Books.
Barasz, M., et al. (2014). Robust cooperation in the prisoner's dilemma: Program equilibrium via provability logic. arXiv:1401.5577.
Binmore, K. (1987). Modeling rational players: Part I. Economics and Philosophy, 3(2), 179–214.
Binmore, K. (1988). Modeling rational players: Part II. Economics and Philosophy, 4(1), 9–55.
Critch, A. (2016). Parametric bounded Löb's theorem and robust cooperation of bounded agents. arXiv:1602.04184.
Cutland, N. (1980).
Computability: An introduction to recursive function theory. Cambridge: Cambridge University Press.
van der Hoek, W., Witteveen, C., & Wooldridge, M. (2013). Program equilibrium—A program reasoning approach. International Journal of Game Theory, 42(3), 639–671.
Howard, J. V. (1988). Cooperation in the prisoner's dilemma. Theory and Decision, 24(3), 203–213.
McAfee, R. P. (1984). Effective computability in economic decisions. http://www.mcafee.cc/Papers/PDF/EffectiveComputability.pdf. Accessed 10 Nov 2018.
Osborne, M. J. (2004). An introduction to game theory. Oxford: Oxford University Press.
Tennenholtz, M. (2004). Program equilibrium. Games and Economic Behavior, 49(2), 363–373.
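To make step 3 concrete, here is a minimal Python sketch of the εGroundedπiBot construction applied to tit for tat in the prisoner's dilemma. It is illustrative only: the function names are ours, and instead of the paper's formal setting (programs receiving each other's source code), opponents are modeled as Python callables that are passed the bot itself.

```python
import random

# One-shot prisoner's dilemma payoffs: (row player, column player).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def make_eps_grounded_bot(first_move, reply, eps):
    """Epsilon-grounded bot built from a repeated-game strategy.

    With probability eps, 'ground' by playing the strategy's
    first-round move; otherwise, run the opponent program on this
    program and play the strategy's reply to its output.  When two
    such bots play each other, the mutual simulation terminates
    with probability 1 as long as eps > 0.
    """
    def bot(opponent):
        if random.random() < eps:
            return first_move
        return reply(opponent(bot))
    return bot

# Tit for tat as a repeated-game strategy: open with C, then mirror.
tit_for_tat_bot = make_eps_grounded_bot("C", lambda move: move, eps=0.01)
defect_bot = lambda opponent: "D"   # unconditionally defects
```

Against `defect_bot`, the grounded tit for tat defects except in the ε-probability grounding branch, so the defector's expected edge over it is at most ε times the one-round temptation gap, in line with the εN-inexploitability bound of Corollary 8.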
[LINK] Physicist Carlo Rovelli on Modern Physics Research

A blog post in Scientific American, well worth reading. Rovelli is a researcher in Loop Quantum Gravity. Some quotes:

Horgan: Do multiverse theories and quantum gravity theories deserve to be taken seriously if they cannot be falsified?

Rovelli: No.

Horgan: What's your opinion of the recent philosophy-bashing by Stephen Hawking, Lawrence Krauss and Neil deGrasse Tyson?

Rovelli: Seriously: I think they are stupid in this. I have admiration for them in other things, but here they have gone really wrong. Look: Einstein, Heisenberg, Newton, Bohr… and many many others of the greatest scientists of all times, much greater than the names you mention, of course, read philosophy, learned from philosophy, and could have never done the great science they did without the input they got from philosophy, as they claimed repeatedly. You see: the scientists that talk philosophy down are simply superficial: they have a philosophy (usually some ill-digested mixture of Popper and Kuhn) and think that this is the "true" philosophy, and do not realize that this has limitations.

Horgan: Can science attain absolute truth?

Rovelli: I have no idea what "absolute truth" means. I think that science is the attitude of those who find funny the people saying they know something is absolute truth. Science is the awareness that our knowledge is constantly uncertain. What I know is that there are plenty of things that science does not understand yet. And science is the best tool found so far for reaching reasonably reliable knowledge.

Horgan: Do you believe in God?

Rovelli: No. But perhaps I should qualify the answer, because like this it is a bit too rude and simplistic. I do not understand what "to believe in God" means. The people that "believe in God" seem like Martians to me. I do not understand them. I suppose this means that I "do not believe in God".
If the question is whether I think that there is a person who has created Heavens and Earth, and responds to our prayers,
SentRNA: Improving computational RNA design by incorporating a prior of human design strategies

Jade Shi (1), EteRNA players, Rhiju Das (2), and Vijay S. Pande (3)*

1. Department of Chemistry, Stanford University, jadeshi@stanford.edu
2. Department of Biochemistry, Stanford University, rhiju@stanford.edu
3. Department of Bioengineering, Stanford University, pande@stanford.edu

Abstract: Solving the RNA inverse folding problem is a critical prerequisite to RNA design, an emerging field in bioengineering with a broad range of applications from reaction catalysis to cancer therapy. Although significant progress has been made in developing machine-based inverse RNA folding algorithms, current approaches still have difficulty designing sequences for large or complex targets. On the other hand, human players of the online RNA design game EteRNA have consistently shown superior performance in this regard, being able to readily design sequences for targets that are challenging for machine algorithms. Here we present a novel approach to the RNA design problem, SentRNA, a design agent consisting of a fully-connected neural network trained end-to-end using human-designed RNA sequences. We show that through this approach, SentRNA can solve complex targets previously unsolvable by any machine-based approach and achieve state-of-the-art performance on two separate challenging test sets. Our results demonstrate that incorporating human design strategies into a design algorithm can significantly boost machine performance and suggest a new paradigm for machine-based RNA design.
Introduction:

Designing RNA molecules to perform specific functions is an emerging field of modern bioengineering research1,2,3,4,5 with diverse biological applications from cancer therapy1 to intracellular reaction catalysis.3 Because an RNA molecule's function is highly dependent on its structure, in order to effectively design RNA molecules to perform specific functions, one must first solve the RNA inverse folding problem: given a target structure, predict an RNA sequence that will fold into that structure. As such, significant efforts have been made over the past several decades in developing computational algorithms to reliably predict RNA sequences that fold into a given target.6,7,8,9,10,11,12,13,14,15,16

A large fraction of currently available inverse RNA folding algorithms follows the general pipeline of first generating an initial guess of an RNA sequence and then refining that sequence using some form of stochastic search. Algorithms that fall under this category include RNAInverse,6 RNA-SSD,7 INFO-RNA,8 NUPACK,10 MODENA,11 Frnakenstein,13 and ERD.14 RNAInverse, one of the first inverse folding algorithms, initializes the sequence randomly and then uses a simple adaptive walk to randomly sample better sequences. RNA-SSD first performs hierarchical decomposition of the target and then performs an adaptive walk separately on each substructure to reduce the size of the search space. INFO-RNA first generates an initial guess of the sequence using dynamic programming to estimate the minimum energy sequence for a target structure, and then performs simulated annealing on the sequence. NUPACK performs hierarchical decomposition of the target and assigns an initial sequence to each substructure. For each sequence, it then generates a thermodynamic ensemble of possible structures and stochastically perturbs the sequence to optimize the "ensemble defect" term, which represents the average number of improperly paired bases relative to the target over the entire ensemble. MODENA and Frnakenstein first generate an ensemble of candidate sequences and then use genetic algorithms to optimize complex objective functions. Finally, ERD generates an initial sequence by decomposing the target structure into substructures, and then randomly assigns a naturally occurring subsequence drawn from a public database to each substructure. Defects in the full target structure given this initialization are then corrected using an evolutionary algorithm.

On the other hand, there are also several algorithms that do not follow this formula. For instance, DSS-Opt9 foregoes stochastic search and instead attempts to generate a valid sequence directly from gradient-based optimization of an objective function that includes the predicted free energy of the target and a "negative design" term that punishes improperly paired bases. antaRNA16 employs "ant-colony" optimization, in which a sequence is first generated via a weighted random search, and the goodness of these sequences is then used to refine the weights and improve subsequent sequence generations. IncaRNAtion12 first generates a GC-weighted partition function for the target structure, and then adaptively samples sequences from it to match a desired GC content. Finally, RNAiFold15 employs constraint programming that exhaustively searches over all possible sequences compatible with a given target.
However, despite the significant progress in developing inverse RNA algorithms, current algorithms consistently have difficulty designing sequences for particularly large or complex targets.17 On the other hand, human players of the online RNA design game EteRNA18 have shown consistently superior performance to machine-based algorithms for such targets. Players of the game are shown 2D representations of target RNA structures ("puzzles") and asked to propose sequences that fold into them. These sequences are first judged using the ViennaRNA 1.8.5 software package6 and then validated experimentally. Through this cycle of design and evaluation, players build a collective library of design strategies through visual pattern recognition that can then be applied to new, more complex puzzles. Remarkably, these human-developed strategies have proven very effective for RNA design. For example, EteRNA players significantly outperform computational algorithms on the Eterna100, a set of 100 challenging puzzles designed by EteRNA players to showcase a variety of RNA structural elements that make design difficult.17 A recent benchmark against 6 different inverse folding algorithms showed that while top-ranking human players can solve all 100 puzzles, even the best-scoring computational algorithm, MODENA, could only solve 54 / 100 puzzles.

Given the success of these human strategies, we investigate whether incorporating these strategies into a design algorithm can improve machine performance past the current state of the art. We present SentRNA, a computational agent for RNA design that significantly outperforms existing methods by learning human-like design strategies in an end-to-end, data-driven manner. The agent consists of a fully-connected neural network that takes as input a featurized representation of the local environment around a given position in a puzzle. The output is length 4, corresponding to the four RNA nucleotides (bases): A, U, C, or G. The model is trained using the eternasolves dataset, a custom-compiled collection of 1.8 × 10⁴ player-submitted solutions across 724 unique puzzles. These puzzles comprise both the "Progression" puzzles, designed for beginning EteRNA players, as well as several "Lab" puzzles for which solutions were experimentally synthesized and tested. During validation and testing the agent takes an initially blank puzzle and assigns bases to every position greedily based on the output values. If this initial prediction is not valid, as judged by ViennaRNA 1.8.5, it is further refined via an adaptive walk using a canon of standard design moves compiled by players and taught to new players through the game's tutorials. Overall, we trained and tested an ensemble of 154 models, each using a distinct training set and model input (see Methods). Collectively, the ensemble of models can solve 47 / 100 puzzles from the Eterna100 by neural network prediction alone, and 78 / 100 puzzles using neural network prediction + refinement. To address the possibility of our method overfitting to EteRNA-like puzzles, we also tested SentRNA on an independent set of 63 targets recently used by Garcia-Martin et al. to benchmark a set of 10 inverse design algorithms, comprising structures taken from the Rfam 9.0 and GenBank databases, as well as an additional set of longer, naturally occurring biological RNA structures.19 Despite being a test set independent from EteRNA, we find that SentRNA is able to achieve state-of-the-art performance here as well. This study demonstrates that incorporating human design strategies into a computational RNA design agent can lead to significant increases in performance over previous methods, and represents a new paradigm in machine-based RNA design.
Methods:

Code availability: The source code for SentRNA, all our trained models, and the full eternasolves dataset can be found on GitHub: https://github.com/jadeshi/SentRNA.

Hardware: We performed all model training and validation using a desktop computer with an Intel Core i7-6700K @ 4.00 GHz CPU and 16 GB of RAM. Initial testing and refinement on the Eterna100 using the full refinement moveset was done using this same machine. Subsequent refinement using restricted refinement movesets (see Results), as well as testing and refinement of the 63 non-EteRNA targets from Garcia-Martin et al.,19 was run on the Sherlock 2.0 computing cluster at Stanford University,20 with each instance of model refinement utilizing an Intel Xeon E5-2640v4 processor and 1 GB of RAM.

Creating 2D structural representations of puzzles: During training and testing, we used a custom rendering method, which we hereafter call EteRNA rendering, to translate puzzles to 2D structures given their dot-bracket representations. This rendering method reproduces exactly what human players see in-game when solving the structure in EteRNA. We believe this is the most natural representation method for this study since we are training SentRNA on data submitted by these human players.

Assessment of RNA sequences: We assess whether a given RNA sequence folds into the target structure using Vienna 1.8.5. We chose this version over newer versions such as Vienna 2 for consistency, because Vienna 1.8.5 is the version currently implemented in EteRNA, and is therefore the natural choice to assess whether SentRNA is capable of learning and generalizing human design strategies from EteRNA player solutions.

Neural network architecture: Our goal is to create an RNA design agent that can propose a sequence of RNA bases that folds into a given target structure, i.e. fill in an initially blank EteRNA puzzle. To do this, we employ a fully connected neural network that assigns an identity of A, U, C, or G to each position in the puzzle given a featurized representation of its local environment. During test time, we expose the agent to every position in the puzzle sequentially and have it predict its identity. The neural network was implemented using TensorFlow21 and contains three hidden layers of 100 nodes with ReLU nonlinearities. The output is length 4, corresponding to the four RNA bases: A, U, C, and G. During validation and test time, base identities are assigned greedily to the puzzle based on these output values.

Given a position x in the puzzle, the input for this position to the agent is a combination of information about its bonding partner, nearest neighbors, and long-range features, which can include, for example, next-nearest neighbors or adjacent closing pairs in a multiloop. While the bonding partner and nearest neighbor information is provided to the agent by default, long-range features are learned through the training data. The information about the bonding partner is encoded as a length-5 vector, with each position in the vector representing either A, U, C, G, or "none" (i.e. a blank position that does not have a base assigned to it yet). A value of 1 is assigned to the position corresponding to the identity of the bonding partner, while all other values are set to 0. If there is no bonding partner, all values are set to 0. The nearest neighbor information is encoded as a length-11 vector, a combination of two length-5 one-hot vectors corresponding to the identities of the bases directly before and after it in the sequence, and a single value that corresponds to the angle in radians formed by the base and its nearest neighbors. This angle serves to distinguish bases belonging to different substructures in the puzzle.
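As a sketch of the encoding just described (variable and function names are ours, not taken from the SentRNA source), the bonding-partner and nearest-neighbor parts of the input vector might be assembled as follows:

```python
import numpy as np

BASES = ["A", "U", "C", "G", None]  # None stands for an unassigned ("none") position

def one_hot(base):
    """Length-5 one-hot over A/U/C/G/none."""
    v = np.zeros(5)
    v[BASES.index(base)] = 1.0
    return v

def featurize(seq, i, partner_index, angle):
    """Length-16 input vector for puzzle position i:
    5 (bonding partner) + 5 (base before) + 5 (base after) + 1 (angle).
    If there is no bonding partner, or i is at a sequence end, the
    corresponding slots are all zeros, as in the paper's description."""
    partner = one_hot(seq[partner_index]) if partner_index is not None else np.zeros(5)
    before = one_hot(seq[i - 1]) if i > 0 else np.zeros(5)
    after = one_hot(seq[i + 1]) if i < len(seq) - 1 else np.zeros(5)
    return np.concatenate([partner, before, after, [angle]])
```

The long-range feature slots described next would be appended to this vector in the same one-hot style, one length-5 placeholder per feature.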
For example, a base situated in the middle of a large internal loop will have a larger angle than a base positioned in a 4-loop. As a design choice, any position in the middle of a stack of bonded bases was assigned an angle of 0. Also, if the model is looking at either the first or last position in the puzzle, the "before" and "after" nearest neighbor portions of the input respectively are set to 0.

Long-range features refer to important positions y in the puzzle relative to x that the agent should also have knowledge of when deciding what base to assign to x. These are each defined by a set of two values: 1) the Cartesian distance, L, between x and y in the puzzle given the 2D rendering of the puzzle, and 2) the angle in radians, Φ, formed by positions x − 1, x, and y. These two values are stored in a list, [L, Φ], and serve as a label for the feature. For example, a label of [23.0, 1.6] corresponds to a base's bonding partner in the middle of a stem. The bonding distance is equal to 23.0 EteRNA rendering distance units, and the angle between the previous base in the stem, the current base, and the bonding partner is 1.6 radians, or 90 degrees. A length-5 vector of zeros is then appended to the input vector to serve as a placeholder for the feature. During training, validation, or testing, when the agent is looking at a given position x, it computes L and Φ between x and all other positions yi in the puzzle, and if both L and Φ match those of a long-range feature used in the model within some threshold, a 1 is assigned to the corresponding placeholder depending on the identity of yi (A, U, C, G, or "none"). We set the threshold for both L and Φ to an arbitrary small value of 10⁻⁵.

We determine what long-range features to use (i.e. which features should be considered "important") using a mutual information metric over player solutions. First, we perform a pairwise mutual information calculation using all the player solutions for a given puzzle to form an l × l mutual information matrix, where l is the length of the puzzle. We then select the top M (user-defined) positions in the matrix with highest mutual information, and for each of these positions (x, y) compute L and Φ to give a list of long-range features for the puzzle (Figure 1). This process is repeated for each puzzle, and the unique long-range features across all the puzzles are then combined into an aggregate list of long-range features. A random subset of N (user-defined) features is then selected from this list and used to define a model to be used for training, validation, and testing.

By defining long-range features using a mutual information metric, our goal is to provide additional useful prior information to the model. High mutual information between positions x and y indicates that the identity of position x is strongly correlated to that of position y. This suggests one of two possibilities: 1) when EteRNA players are choosing in-game what to assign for x, they are first typically looking at y, or vice versa, due to their knowledge of the structure, or 2) the two positions are constrained to be correlated by the biophysical and energetic constraints of the target. Therefore, by only including positions with high mutual information in the agent's field of vision, we are providing either human prior information or biophysical prior information to the agent, allowing it to prioritize what humans, or nature, have deemed to be important. As a result, we provide the model with enough information to prevent underfitting and enable it to apply its learned strategies to more difficult puzzles. On the other hand, we also limit the model complexity such that we can train the model using a relatively small number of training examples without overfitting.
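A simplified version of this feature-selection step (pairwise mutual information over aligned player solutions, then taking the top-M position pairs) could look like the following; this is our own minimal implementation for illustration, not the SentRNA code:

```python
import numpy as np
from collections import Counter

def mutual_information(col_x, col_y):
    """Mutual information (in nats) between two aligned columns of bases,
    estimated from their empirical joint and marginal frequencies."""
    n = len(col_x)
    px, py = Counter(col_x), Counter(col_y)
    pxy = Counter(zip(col_x, col_y))
    return sum((c / n) * np.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def top_pairs(solutions, m):
    """Return the m position pairs with the highest mutual information
    across a list of equal-length player solution strings."""
    length = len(solutions[0])
    cols = list(zip(*solutions))          # one tuple of bases per position
    scores = {(i, j): mutual_information(cols[i], cols[j])
              for i in range(length) for j in range(i + 1, length)}
    return sorted(scores, key=scores.get, reverse=True)[:m]
```

Each selected pair (x, y) would then be converted to its [L, Φ] label using the rendered 2D coordinates, as described above.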
We observed that the inclusion of instructive long-range features allowed SentRNA to learn human design strategies much more easily. For example, the puzzle Shortie 6 from the Eterna100 requires a strategy called 4-loop boosting to solve, which describes the process of mutating the N-terminal end of a 4-loop to a G (Figure 1). This mutation energetically stabilizes the preceding stem and is a necessary step to solve the puzzle, and is one of the most commonly used strategies in EteRNA. We found that including a long-range feature between the N-terminal base and the opposing C-terminal base of the loop (Figure 1, right inset, blue) in SentRNA's input allows it to uniquely identify the 4-loop and reliably learn to mutate the N-terminal 4-loop base to G; 99.7% of the 306 models which contained this long-range feature successfully learned the 4-loop boost. On the other hand, if this feature is not present, we observed that it was much more difficult for SentRNA to learn this strategy; only 2.4% of the 328 models that did not contain this feature learned the 4-loop boost. These results suggest that the nearest neighbor features (Figure 1, insets, red) are in most cases inadequate to distinguish the N-terminal 4-loop base from other locations in training puzzles in which G is generally not present, such as 3-loops or the middle of internal loops. As a result, the training on this base is contaminated by other unrelated bases, and SentRNA is led to mistakenly believe that the N-terminal 4-loop position should be A.

Figure 1: SentRNA can learn to boost 4-loops with a G at the N-terminal end much more effectively if there is a long-range feature (right inset, blue) to the C-terminal position of the 4-loop that uniquely identifies the 4-loop. On the other hand, if this feature is not present, by relying on only its nearest neighbor features (insets, red), SentRNA cannot distinguish between this position and the other positions in the puzzle whose nearest neighbors define a similar angle (e.g. 3-loops or the middle of internal loops). This leads to contamination during training and prevents SentRNA from learning how to uniquely boost the N-terminal 4-loop position with a G. As a result, SentRNA struggles to solve puzzles such as Shortie 6 from the Eterna100 in which boosting is a necessary solution step (left).

Finally, we note that SentRNA's architecture and mechanism of action are analogous to the process of using a 2D convolutional neural network to recognize images, where convolution kernels are used to scan through the image and detect specific features in the data. Here, we are instead scanning a sparse 2D convolution kernel through the EteRNA rendering ("image") of the puzzle to recognize important structural motifs. Our kernel is sparse because only the subset of positions that show high mutual information as calculated from the training data have nonzero weights that are trainable, while all other positions are forced to have zero weights. In other words, we are using the prior information from the training data to impose hard constraints on the complexity of our kernel (Figure 2).

Figure 2: SentRNA (right) operates analogously to a convolutional neural network (CNN) for image recognition (left). In contrast to a standard CNN, however, instead of scanning a dense convolution kernel over an image, SentRNA uses a sparse kernel that detects bonding pairs and nearest neighbors (red), as well as locations of high mutual information as computed using the training data (blue).
Using the kernel, SentRNA scans through the 2D representation of the RNA structure generated by EteRNA rendering and assigns bases to each position (see Figure 3 for a more detailed explanation of SentRNA's architecture and mechanism).

Training algorithm: We used subsets of the first 721 / 724 puzzles from eternasolves to train the model, puzzles 722 and 724 for initial validation and testing respectively, and the Eterna100 for more extensive testing. Puzzle 723 was skipped due to being completely unstructured and not useful for validation. We confirmed that there was no contamination between the training, validation, and test sets.

Because there is no straightforward way to determine a priori what long-range features and training examples will result in the best-performing model, we decided to train and test an extensive ensemble of models. To do this, we first computed an aggregate list of 42 long-range features using all puzzles from eternasolves with at least 50 submitted solutions, allowing each puzzle to contribute only one long-range feature (M = 1). We set this threshold of 50 player solutions since puzzles with a small number of solutions can introduce noise into the mutual information calculation. We then randomly selected a subset of long-range features from the aggregate list and built a model using these features. We built one model each using between 0 and 42 randomly chosen features, and repeated this process multiple times to build a total of 860 models. To form the training sets for these models, we first randomly chose 50 puzzles from eternasolves to serve as training puzzles. This gives us 50 lists of player solutions, one for each puzzle. We then take the first solution from each list to form a training set of 50 player solutions.

To train each model, we use the following procedure. For each player solution in our training set, we first visually render the corresponding puzzle using the EteRNA rendering method. We then set the identity of every position in the puzzle to the corresponding base in the player solution, and featurize each position into bonding pair, nearest neighbor, and long-range features to form the input vector. The output label is set to the identity of the corresponding position in the player solution. Then, we decompose the player solution into a "solution trajectory" to teach the agent how to solve an initially blank puzzle with no bases assigned (i.e. during test time). This is done by first removing all base assignments from the puzzle. A position in the puzzle is then selected and featurized. All input features (bonding partner, nearest neighbors, long-range features) are at this point set to "none", and the output label is set to the identity of the corresponding position in the player solution. This position is then filled in with the player solution base, and the next position is picked and featurized (Figure 3). This process continues sequentially until all positions in the puzzle have been featurized. This process essentially mimics a human player filling in the puzzle sequentially base by base and trains the model to reproduce these steps. During validation and testing, the agent proceeds through each position in the (initially blank) puzzle sequentially and assigns bases greedily based on the model outputs.

We note that when using this sequential fill-in approach, we are only interested in being able to reproduce the final player solution, not the steps taken by that player to reach the solution. This was an intentional choice to avoid noise in the training process. Through discussions with many EteRNA players, we found that the exact process taken by a player to solve a puzzle in-game is often much longer and more convoluted than simply filling in the puzzle base-by-base sequentially.
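The solution-trajectory construction and the greedy test-time fill-in can be sketched as follows; `featurize_fn` and `predict` are stand-ins for the featurizer and trained network described above, and the function names are ours:

```python
def make_trajectory(solution, featurize_fn):
    """Decompose a player solution into (features, label) training pairs
    that mimic filling in an initially blank puzzle left to right."""
    seq = [None] * len(solution)          # start from a blank puzzle
    examples = []
    for i, base in enumerate(solution):
        examples.append((featurize_fn(seq, i), base))
        seq[i] = base                     # fill in before moving on
    return examples

def greedy_fill(length, featurize_fn, predict):
    """Test-time decoding: visit positions sequentially and assign the
    argmax base from the model's length-4 output at each step."""
    bases = ["A", "U", "C", "G"]
    seq = [None] * length
    for i in range(length):
        scores = predict(featurize_fn(seq, i))   # 4 scores: A, U, C, G
        seq[i] = bases[max(range(4), key=lambda k: scores[k])]
    return "".join(seq)
```

Note that each training example sees only the bases already filled in, which is what lets the same featurizer be reused at test time on a partially blank puzzle.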
For example, a common strategy employed by players is to putatively assign sets of bases to the puzzle, and then mutate these bases at a later stage of solving to refine the solution. Therefore, training an agent to reproduce exact solution trajectories would likely result in many unnecessary, unproductive moves that would later have to be undone, and potentially even result in infinite loops of assigning and unassigning bases in certain cases. To avoid these potential situations, we opted to use sequential fill-in as a simple and consistent (albeit artificial) means to reach the final solution.

We initialize each model using Gaussian weights (µ = 0, σ = 0.02), unit biases, and a learning rate of 0.001. We train each model using the Adam optimizer22 for a total of 1000 epochs, performing a validation on puzzle 722 every 100 epochs, to give a total of 10 models. The model with the highest validation accuracy is then used for testing on puzzle 724. During validation and testing, we allowed the model two attempts at predicting a sequence, once using a blank sequence as input, and again using the initial model-predicted sequence as input. The second attempt is intended as an opportunity for the model to refine its first prediction. If the model proposed valid solutions for both validation and test puzzles, it was then subjected to more extensive testing on the Eterna100.

In total, we trained 860 models, and of these, 802 passed initial validation and testing on puzzles 722 and 724 of eternasolves. Of these 802 models, we then further tested 154 on the full Eterna100 and the 63 structures from Garcia-Martin et al.19 We decided to only test a subset of our trained models on the Eterna100 due to the fact that the overall performance of the ensemble, measured by the total number of puzzles solvable across the ensemble, had firmly plateaued at a stable value of 78 / 100 by this point. We therefore concluded that additional testing of models would likely be unproductive in terms of enhancing performance.

Figure 3: The training and validation procedure for SentRNA consists of first selecting a puzzle, and then performing a pairwise mutual information calculation using all player solutions for that puzzle. Positions in the resulting mutual information matrix with high values are then used to define new long-range features that will be included in the model's field of vision (steps 1–3). These features are appended to the base input vector that by default has information about the bonding partner and nearest neighbors. SentRNA is then trained to reproduce a player solution at each position in the puzzle. To train the model, we use a two-part training set consisting of both the full player solution as well as a synthetic "solution trajectory" to simulate the process of solving a puzzle base-by-base starting from a blank puzzle (step 4). During validation (and subsequent testing), the model is exposed to each position in a new (initially blank) puzzle sequentially and greedily fills in bases one-by-one based on the model outputs (step 5).

Refinement algorithm: During testing, if the initial predicted solution does not fold into the target structure, as judged by Vienna 1.8.5, we further refine this solution using an adaptive walk. We use the following refinement moves: 1) pairing two unpaired bases that should be paired in the target structure, 2) re-pairing two paired bases that should be paired, 3) unpairing two paired bases that should not be paired, and 4) G or U-U boosting,23 two common stabilization strategies taught to beginning EteRNA players. During refinement, random trajectories of these moves are generated and applied to the initial sequence until one that folds into the target structure is reached.
At any point , if an intermediate sequence is reached that folds into a structure more structurally similar to the target, which we define as the fract ion of matching characters in the dot -bracket notations, the refinement trajectory is ended , and all subsequent trajectories begin from that point. Unless otherwise noted (see Results ), we limit ed the refinement to 300 trajectories of length 30, whic h takes at most 90 seconds for most puzzles in the Eterna100 (~100 bases or fewer in length ). By comparison, all algorithms tested in the previous benchmark by Anderson -Lee et al. were given a much longer time limit of 24 hours.17 We also investigated the impact of the human prior information encoded solely in the neural network prediction itself by repeating the refinement while removing the unpair ing and boosting moves (the ones that encode human strategies ), allowing the refinement to only pair bases using GC, AU, or GU pairing. Interestingly , we observe that not only is proper neural network prediction cri tical in many cases to solving the puzzle in comparison to existing methods , some puzzles could only be solved using a combination of neural network prediction and the restricted refinement moveset consisting of only pairing moves. We found that r andom application of unpairing and boosting moves can in fact be detrimental in some cases by irreversibly overwriting the strategies encoded by the neural network prediction (see Results). Results: Overall performance of SentRNA on Eterna100 In total, SentRNA can solve 78 / 100 puzzles f rom the E terna100 across all 154 models trained and tested ( Table 1 ). Notably, of these puzzles , SentRNA can solve 47 through its initial neural network prediction alone (i.e. 
no refinement), already placing it ahead of or on par with 3 / 6 algorithms previously benchmarked,17 indicating that the human-like strategies learned by SentRNA from the eternasolves training set are indeed directly generalizable to more difficult puzzles and can constitute a useful prior for design.

Table 1: Performance of the SentRNA ensemble of 154 trained models on the full set of Eterna100 puzzles. An "X" in the "Pred" column means that a model could predict a correct solution to the puzzle without refinement, and an "X" in the "Ref" column means the model was able to solve the puzzle after refinement. The number in the "Times" column represents the minimum refinement time across all models that was necessary to solve the puzzle, in seconds, and "% Seq" corresponds to the fractional sequence identity between the neural network prediction and the refined solution for that model. These results were generated using default training and refinement parameters: 50 randomly selected player solutions for training, and 300 adaptive walk trajectories of length 30 using the full moveset of pairing, unpairing, and boosting moves.

We also more explicitly tested the contribution of the prior information encoded in the neural network prediction by repeating the refinement process while allowing SentRNA to perform only pairing moves (GC, AU, GU), making it equivalent to the original adaptive walk procedure used in RNAInverse.6 By restricting the moveset in this way, we ensure that any potential advantages contributed by human prior information can come only from the neural network prediction. Put another way, we place SentRNA's refinement algorithm on par with the stochastic search algorithms employed by methods such as NUPACK and MODENA in terms of contributable information content, i.e. there is no change to the sequence that SentRNA's restricted moveset can make that cannot be made by these other search procedures.
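The restricted, pairing-only adaptive walk can be sketched as follows. This is a simplified illustration of the refinement described above, not the paper's implementation: `predict` is a hypothetical stand-in for a secondary-structure predictor (the paper uses Vienna 1.8.5), and `pair_table` is an assumed input mapping each paired position to its partner in the target structure.

```python
import random

# Re-pairing moves allowed in the restricted moveset: GC, AU, and GU pairs.
PAIRS = [("G", "C"), ("C", "G"), ("A", "U"), ("U", "A"), ("G", "U"), ("U", "G")]

def similarity(db1, db2):
    """Fraction of matching characters between two dot-bracket strings."""
    return sum(a == b for a, b in zip(db1, db2)) / len(db1)

def pairing_only_refine(seq, target, pair_table, predict,
                        n_traj=300, traj_len=30):
    """Adaptive walk using only GC/AU/GU re-pairing moves.

    Trajectories restart from the most target-similar sequence seen so far,
    mirroring the restart rule described in the paper.
    """
    paired_sites = sorted(pair_table)
    best_seq, best_sim = seq, similarity(predict(seq), target)
    for _ in range(n_traj):
        s = list(best_seq)
        for _ in range(traj_len):
            i = random.choice(paired_sites)
            j = pair_table[i]
            s[i], s[j] = random.choice(PAIRS)   # re-pair positions i and j
            cand = "".join(s)
            sim = similarity(predict(cand), target)
            if sim == 1.0:
                return cand                      # folds into the target
            if sim > best_sim:                   # end trajectory; restart here
                best_seq, best_sim = cand, sim
                break
    return best_seq                              # best effort within budget
```

Any sequence change this moveset can make is also reachable by the stochastic searches of methods like NUPACK and MODENA, which is exactly what makes it a fair probe of the neural network initialization.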
This approach allows us to more directly measure the potential advantages of SentRNA's neural network initialization over the initialization methods of other algorithms. We performed two rounds of refinement for all the models: 1) using a moveset consisting of all pairing moves (GC, AU, and GU) (Table 2), and 2) using a more aggressive, purely GC mutation moveset (Table 3). Remarkably, we find that SentRNA's performance is comparable to that obtained with the full moveset: SentRNA can solve 74 / 100 puzzles using neural network prediction + GC pairing and 72 / 100 using neural network prediction + GC/AU/GU pairing, indicating that the neural network initialization by itself contributes significantly to SentRNA's performance. We summarize the overall performance of the different refinement movesets in Table 4.

Table 2: Performance of the SentRNA ensemble of 154 trained models on the Eterna100 when restricting the refinement moveset to only GC, AU, and GU pairing moves. An "X" in the "Pred" column means that a model could predict a correct solution to the puzzle without refinement, and an "X" in the "Ref" column means the model was able to solve the puzzle after refinement. The number in the "Times" column represents the minimum refinement time across all models that was necessary to solve the puzzle, in seconds, and "% Seq" corresponds to the fractional sequence identity between the neural network prediction and the refined solution for that model. These results were generated using default training and refinement parameters: 50 randomly selected player solutions for training, and 300 adaptive walk trajectories of length 30 using the restricted moveset.

Table 3: Performance of the SentRNA ensemble of 154 trained models on the Eterna100 when restricting the refinement moveset to only GC pairing moves.
An "X" in the "Pred" column means that a model could predict a correct solution to the puzzle without refinement, and an "X" in the "Ref" column means the model was able to solve the puzzle after refinement. The number in the "Times" column represents the minimum refinement time across all models that was necessary to solve the puzzle, in seconds, and "% Seq" corresponds to the fractional sequence identity between the neural network prediction and the refined solution for that model. These results were generated using default training and refinement parameters: 50 randomly selected player solutions for training, and 300 adaptive walk trajectories of length 30 using the restricted moveset.

Table 4: Overall comparison of SentRNA's performance given different refinement movesets. Solved puzzles are denoted with an "X". The "Pred" column refers to neural network prediction only. "Full" refers to puzzles solved using the full refinement moveset of pairing, unpairing, and boosting moves. "All pair" refers to neural network prediction + all possible pairing moves (GC/AU/GU), and "GC pair" refers to neural network prediction + GC pairing moves only.

SentRNA can learn critical solution strategies and apply them through its neural network prediction

SentRNA can solve 47 / 100 puzzles using neural network prediction alone, suggesting that the human strategies learned during training are indeed generalizable to more difficult targets. Specifically, we find that for many of the puzzles solvable purely through neural network prediction, SentRNA shows clear human-like signatures in its predicted sequences. For example, for the puzzle "1,2,3and4bulges" (Figure 4, left), the most difficult structural component to stabilize is the unstable length-1 stem attached to a 4-loop. It is necessary to boost this 4-loop with a G at the N-terminal end of the loop to solve the puzzle.
We observe that SentRNA can apply this strategy directly through its neural network prediction, allowing it to solve the puzzle without any refinement. Similarly, for the puzzle Mat – Cuboid, a key stabilization strategy is a specific boosting strategy for the 2-2 loops invented by the EteRNA community, called a "UGUG superboost," which involves mutating all four bases of each 2-2 loop to UGUG (Figure 4, right). We see that SentRNA can learn this strategy from the training data and generalize it to a much more difficult puzzle in the Eterna100. Through these two examples, we see that if SentRNA is provided the appropriate training data, it can learn human-like strategies and apply them to help solve new, more difficult targets. This potentially gives SentRNA unique advantages over other available inverse design algorithms, which do not have knowledge of these strategies when solving puzzles rich in 4-loops or 2-2 loops.

Figure 4: SentRNA can learn human-like strategies such as 4-loop boosting and the UGUG superboost for 2-2 loops, enabling it to solve puzzles such as "1,2,3and4bulges" and Mat – Cuboid.

Prior information included in the refinement moveset compensates for deficiencies in neural network training

We found that some puzzles in the Eterna100 necessitated the use of the full refinement moveset, which includes pairing, unpairing, and boosting moves. If SentRNA is unable to learn the necessary stabilization strategies for a specific puzzle, or if the strategies learned from training are not perfectly generalizable to a new puzzle, the human prior information encoded in the unpairing and boosting moves can be critical to reaching a valid solution. For example, the puzzle hard Y (Figure 5, left) contains an unusual structural element consisting of two adjacent length-1 bulges, named the "zigzag" by the EteRNA community. A key step in stabilizing this structure is mutating one of the unpaired bulge bases to C.
However, because the zigzag is a rarely seen structural element, SentRNA was unable to learn the proper stabilization strategy from the training data. The presence of the unpairing move, however, allows SentRNA to sample the necessary stabilization move during its refinement trajectory and solve the puzzle (Figure 5, right).

Figure 5: The puzzle hard Y contains a rarely seen structural element of two adjacent length-1 bulges called a zigzag. We observed that it was necessary to mutate one of these bulge bases to a C to stabilize the zigzag and solve the puzzle. SentRNA was unable to learn this strategy during training, necessitating the use of an unpairing move during refinement to perform the C-mutation.

As another example, although SentRNA can often apply productive boosting moves with its neural network prediction, as for "1,2,3and4bulges", we also observed that SentRNA's knowledge of boosting is incomplete. SentRNA can readily learn how to boost using G-mutations due to the ubiquitous use of this strategy in the EteRNA community. However, U-U boosting, a more advanced strategy, is less well represented in the training set, and we observed that SentRNA had difficulty learning it via its neural network. For example, in the puzzle Misfolded Aptamer 6, SentRNA attempts to boost the internal loops using G-mutations only. However, as the refinement reveals, a U-U boost to the 2-5 internal loop appears necessary to solve the puzzle. Thus, in this situation, refinement is necessary to compensate for the lack of prior knowledge in SentRNA's neural network (Figure 6).

Figure 6: SentRNA applies boosts to the 4-4 and 2-5 internal loops of Misfolded Aptamer 6 by mutating three separate bases on these loops to G, but a critical stabilization move that SentRNA did not learn was an additional U-U boost at the 2-5 internal loop. This move needed to be sampled during refinement to solve the puzzle.
Prior information included in the refinement moveset can potentially harm SentRNA's performance

Previously, we showed that the human prior information encoded in the unpairing and boosting moves was necessary to solve some puzzles by making up for deficiencies in SentRNA's neural network training. Interestingly, however, we observe that the opposite can also be true: some puzzles are only solvable using neural network prediction and a restricted moveset of only pairing moves. We found that the prior information encoded in the refinement moveset, when applied in a random, undirected manner, can sometimes cause irreversible damage to an intelligently initialized sequence and make it impossible to solve certain puzzles. For example, the puzzle Mat – Lot 2-2 B represents a more difficult version of Mat – Cuboid, in which the UGUG superboost for the 2-2 loop is a necessary part of the solution strategy. Including boosting and unpairing moves in the refinement moveset can irreversibly damage a sequence that is initialized by the neural network and contains this strategy, making it impossible to solve the puzzle. We observed that the only way Mat – Lot 2-2 B could be solved is through a combination of neural network prediction and a restricted moveset of only pairing moves, as this fully preserves the UGUG superboost (Figure 7).

Figure 7: SentRNA could only solve Mat – Lot 2-2 B using a combination of neural network prediction to initialize the sequence with the UGUG superboost strategy, coupled with refinement using only pairing moves. Allowing unpairing and boosting moves can easily and irreversibly contaminate the UGUG superboosts and prevent SentRNA from solving the puzzle.

Another way in which unpairing and boosting moves can harm SentRNA's performance is by needlessly increasing the refinement search space such that productive moves are sampled less often.
For puzzles that require only pairing moves to solve, including unpairing and boosting moves in the refinement moveset can significantly slow the refinement process, to the point where SentRNA cannot sample a valid solution before the refinement timeout. For example, the puzzle Bullseye requires only GC pair mutations to solve beginning from the neural network prediction, and in fact was only solved using a combination of neural network prediction + GC pairing moves (Figure 8). The likely reason is the puzzle's large size: including unnecessary moves (AU/GU pairing, unpairing, and boosting) in the moveset substantially increases the time required to sample the productive GC mutations, and as a result SentRNA cannot solve the puzzle with anything more extensive than a GC pairing moveset.

Figure 8: SentRNA can solve Bullseye using only a combination of neural network prediction and GC pairing moves during refinement. Incorporating other move types substantially increases the refinement search space and makes it more difficult to sample the productive GC mutation moves, leading to a prohibitively long refinement time.

Summary of SentRNA's performance on the Eterna100 versus other methods

We summarize the performance of SentRNA on the Eterna100 versus the 6 other methods previously benchmarked by Anderson-Lee et al. in Table 5.17 We observe that even with a restricted refinement moveset of only pairing moves, which places SentRNA's refinement algorithm on par with the stochastic search algorithms of other methods in terms of information content, SentRNA still significantly outperforms the top-performing methods MODENA, INFO-RNA, and NUPACK. These results indicate that the human prior information encoded in SentRNA's neural network prediction results in a more useful initialization compared to these other methods for a larger number of puzzles.
We see this explicitly for puzzles such as Mat – Lot 2-2 B, where the human-developed UGUG superboost strategy is necessary for solving the puzzle; hence this puzzle is solvable only by SentRNA and none of the other algorithms.

Method | Number of puzzles solved
SentRNA, NN only | 47 / 100
SentRNA, NN + full moveset | 78 / 100
SentRNA, NN + GC pairing | 74 / 100
SentRNA, NN + all pairing | 72 / 100
RNAinverse | 28 / 100
RNA-SSD | 27 / 100
DSS-Opt | 47 / 100
NUPACK | 48 / 100
INFO-RNA | 50 / 100
MODENA | 54 / 100

Table 5: Summary of performance of SentRNA versus the 6 other algorithms benchmarked on the Eterna100 by Anderson-Lee et al. Even with a restricted moveset of only pairing moves, SentRNA still significantly outperforms the other methods, indicating that the neural network prediction is an overall more useful initialization scheme than those of the other algorithms for the objective of solving the Eterna100.

SentRNA achieves state-of-the-art results on an independent, non-EteRNA test set

Although SentRNA's state-of-the-art performance on the Eterna100 is promising, there remains the possibility that, because we both train and test on EteRNA puzzles, SentRNA is simply overfitting to EteRNA and may not be useful for designing sequences for non-EteRNA targets. To address this possibility, we further tested our ensemble of 154 trained models on a set of 63 non-EteRNA, experimentally synthesized targets that Garcia-Martin et al. recently used to benchmark a set of 10 inverse folding algorithms.19 To our knowledge, this is the most recent and comprehensive benchmark of current state-of-the-art methods. Remarkably, we find that the prior information learned from training on EteRNA puzzles is indeed generalizable to these non-EteRNA structures: SentRNA can solve 46 / 63 targets using neural network prediction alone and 57 / 63 using the neural network in combination with the full refinement moveset.
As such, the neural network prediction on its own is sufficient to surpass 8 / 10 methods previously benchmarked, competitive with RNA-SSD (47 / 63) and clearly worse only than ERD (54 / 63). If we allow for a 10-minute refinement using the full refinement moveset (the time limit allowed by Garcia-Martin et al. for the other methods), SentRNA surpasses all previous methods (Table 6). However, we also observed that using the restricted moveset of either GC pairing moves or GC/AU/GU pairing moves allows SentRNA to solve only 53 / 63 targets, slightly worse than ERD. Therefore, it appears that while the contribution of the neural network initialization is sufficient to allow SentRNA to remain competitive with the current state-of-the-art for this test set, it does not grant a decisive advantage. In fact, ERD overall can solve one more target than SentRNA in this setting, suggesting that ERD's initialization using natural sequences from a known biological database may be more useful than SentRNA's initialization learned from EteRNA puzzles for this test set. Thus, unlike the Eterna100, it appears that the additional prior information encoded in the unpairing and boosting moves is necessary in this case for SentRNA to surpass the previous state-of-the-art.

We also compared the GC content of solutions generated by SentRNA versus the other methods, averaged over all models (Table 6). We observe that the solutions generated by the NN, NN + all pairing moves, and NN + the full refinement moveset have a mean GC content of 56-58%, computed over all models and targets, while NN + GC pairing is slightly higher at 61%. This level of GC content is similar to that of many of the other methods (RNAfbinv, Frnakenstein, RNAInverse, NUPACK, MODENA, RNAiFold 2.0, and ERD), which on average generate sequences with roughly 50% GC content.
However, if we consider results from individual models separately, we see that depending on the specific model tested, the GC content can vary dramatically for the same target (Table 7). For instance, for the target RF00008.11 from the Rfam database, SentRNA generated solutions using its NN that range from 20% to 100% GC content, a consequence of the fact that models trained using different training sets can employ significantly different solution strategies. These results suggest that training an ensemble of models and then using different subsets of these models for testing can provide the user a degree of control over the amount of GC content present in a predicted solution.

Method | Number of puzzles solved | GC content (%)
SentRNA, NN only | 46 / 63 | 58 ± 22
SentRNA, NN + full moveset | 57 / 63 | 57 ± 19
SentRNA, NN + all pairing | 53 / 63 | 56 ± 19
SentRNA, NN + GC pairing | 53 / 63 | 61 ± 21
RNAfbinv | 0 / 63 | 51
IncaRNAtion | 6 / 63 | 71
Frnakenstein | 19 / 63 | 49
RNAInverse | 20 / 63 | 49
NUPACK | 29 / 63 | 57
MODENA | 32 / 63 | 50
RNAiFold 2.0 | 41 / 63 | 57
INFO-RNA | 45 / 63 | 72
RNA-SSD | 47 / 63 | 36
ERD | 54 / 63 | 55

Table 6: Comparison of SentRNA versus 10 other inverse folding algorithms on the 63-structure test set of Garcia-Martin et al.19 SentRNA's neural network alone can solve 46 / 63 structures, placing it ahead of 8 / 10 algorithms. With neural network prediction + refinement using the full moveset, limiting the refinement to 10 minutes per target, SentRNA achieves state-of-the-art performance and solves 57 / 63 targets, outperforming all other algorithms. On the other hand, restricting the refinement moveset to only pairing moves reduces SentRNA's performance to 53 / 63, slightly worse than ERD. The average GC content for each method is also reported in the rightmost column.
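The GC content statistics reported above are simple to reproduce for any set of designed sequences; a minimal sketch (using the population standard deviation, as one plausible reading of the reported ± values):

```python
def gc_content(seq):
    """Percent of G/C bases in a designed RNA sequence."""
    return 100.0 * sum(base in "GC" for base in seq) / len(seq)

def gc_summary(solutions):
    """Mean and population standard deviation of GC content over a set of
    valid solutions, as tabulated per method in Table 6."""
    vals = [gc_content(s) for s in solutions]
    mean = sum(vals) / len(vals)
    std = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
    return mean, std
```

For example, two solutions at 50% and 100% GC give a summary of 75 ± 25.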
For all methods other than SentRNA, this value was reproduced from Garcia-Martin et al.19 For SentRNA, the mean value was obtained by averaging over all valid solutions generated across all 154 models, and the reported error is the standard deviation.

Table 7: Left: Performance of the SentRNA ensemble of 154 trained models on the 63-structure test set of Garcia-Martin et al. while employing neural network prediction and the full refinement moveset. An "X" in the "Pred" column means that a model could predict a correct solution to the puzzle without refinement, and an "X" in the "Ref" column means the model was able to solve the puzzle after refinement. The number in the "Times" column represents the minimum refinement time across all models that was necessary to solve the puzzle, in seconds, and "% Seq" corresponds to the fractional sequence identity between the neural network prediction and the refined solution for that model. Refinement parameters were set to 300 adaptive walk trajectories of length 30, and results from refinement that exceeded 10 minutes were discarded. Middle: Comparison between the full refinement moveset and restricted movesets of GC and all pairing mutations. Using the full refinement moveset allows SentRNA to solve strictly more targets (57 / 63) than using either GC pairing or the full set of pairing moves (53 / 63). Right: The range of GC content for each target across all models given a particular NN + refinement pipeline. "%GC, Pred" is the GC content from solutions generated using only the NN, "%GC, Full" is from NN + the full refinement moveset, "%GC, All pair" is from NN + all pairing moves, and "%GC, GC pair" is from NN + GC pairing moves. The minimum and maximum %GC content across all valid generated sequences for each target are reported.
Depending on the model, the GC content of a sequence can vary dramatically, for example ranging from 20% to 100% for the target RF00008.11.

Finally, we note that using a combination of neural network prediction and the full refinement moveset, SentRNA can solve one additional target that was unsolvable by all previously benchmarked algorithms given the 10-minute time limit (Figure 9).

Figure 9: SentRNA can solve one target previously unsolvable by any computational algorithm within a 10-minute time limit through a combination of neural network prediction and refinement using the full moveset. The key stabilization strategies added by the refinement are boosting moves at the 2-3 loops using G-mutations.

We observe that the key stabilizations added by the refinement are a series of boosts to the small 2-3 internal loops using G-mutations. SentRNA was unable to apply this strategy via the neural network prediction, indicating that it did not learn it during training. Thus, once again we have an example in which the prior information incorporated in the refinement moveset can compensate for gaps in knowledge in SentRNA's neural network.

Discussion:

Our results show that incorporating a prior of human design strategies into an inverse RNA design agent can significantly boost its performance on difficult targets. We observe that the ability of SentRNA's neural network to incorporate advanced human strategies directly into the sequence initialization, such as the UGUG superboost for 2-2 loops, allows it to either predict a valid sequence immediately (e.g. Mat – Cuboid) or predict a sequence that is close to a valid solution, which can then be refined into a correct solution using only a basic refinement strategy that involves only pairing moves and encodes no additional human prior information (e.g. Mat – Lot 2-2 B).
Intere stingly, we observe that for several puzzles in the Eterna100 , using the full refinement moveset to refine a neural network prediction is in fact not the optimal strategy , as this can lead to either irreversible damage of an intelligently initialized seque nce (Mat – Lot 2 -2 B), or unnecessary expansion of the refinement search space such that productive moves are sampled less often (Bullseye ). In these situations, a simpler and more naïve refinement moveset can show significantly better performance. Overal l, we observe that even using this much simpler refinement moveset (equivalent to the adaptive walk from RNAInverse) SentRNA can maintain a sizable advantage over other design algorithms, demonstrating that the neural network initialization of the sequence is indeed superior to the sequence initializations of other methods when solving the Eterna100. Alternatively, for several pu zzles SentRNA shows clear deficiencies in its neural network prediction, being unable to learn strategies such as zigzag stabilization to solve hard Y , or the single U-U boosting of thet 4 -4 internal loop to solve Misfolded Aptamer 6 . In these cases, the human prior information encoded in the unpairing and boosting moves of the refinement algorithm is often able to compensate for these slight deficienci es and solve the puzzle through brute -force search. However, we believe that there is no reason SentRNA ca nnot learn these strategies directly through training of its neural network, and hypothesize the reason SentRNA did not learn them during this study is because of their limited representation in the training set . For instance, U -U boosting is a more advanced form of boosting compared to boos ting with G - mutations that is utilized by far fewer players overall, as well as being unnecessary to solve a large majority of the training set puzzles. 
Thus, given the poor representation of U-U boosting among all 1.8 × 10^4 player solutions, it is no surprise that SentRNA has difficulty learning this strategy. However, if we instead used a more restricted training set consisting of solutions from only the very best EteRNA players, who are more likely to know of and routinely use this advanced strategy, we believe SentRNA would be fully capable of learning it. Puzzles such as Misfolded Aptamer 6 may then potentially be solvable using only pairing moves during refinement.

Overall, we notice that the success of SentRNA as an algorithm depends on synergistic behavior between the neural network prediction and the refinement process. When a large amount of human prior information is encoded into the neural network prediction, a simpler pairing-move-only refinement moveset that does not disrupt this prior information can lead to better performance. On the other hand, when the neural network prediction is more naïve, a more complex refinement moveset is necessary to compensate for this lack of knowledge. We stress that the combination of both elements is critical to SentRNA's success. Although the neural network may be able to solve simpler puzzles without any further refinement, more complicated puzzles (e.g. Mat – Lot 2-2 B) often contain structural nuances specific to that puzzle that are not represented in the training data. Therefore, some sort of refinement process is still necessary to sample a valid solution, even from a "close" sequence initialization.

Remarkably, we find that the strategies learned during training on EteRNA puzzles are useful for solving even a second, completely independent test set. When benchmarking on the 63 structures from Garcia-Martin et al.,19 we observed that the neural network prediction alone could solve 46 / 63 targets, which already places it 3rd out of the 10 previously benchmarked methods.
The strong performance of the neural network on its own, given a completely independent test set, suggests that the human-like strategies learned during training are indeed generalizable to unseen targets and that we are not simply overfitting to EteRNA puzzles. Furthermore, if we include a naïve refinement strategy of pairing moves, we find SentRNA can solve 53 / 63, making it competitive with the current state-of-the-art, ERD, which solved 54 / 63. Finally, including the full refinement moveset allows SentRNA to gain a slight edge over ERD, solving 57 / 63. The fact that ERD performs better than SentRNA under the more naïve refinement suggests that ERD's initialization, which is drawn from a database of known, naturally occurring RNA subsequences, may be superior to SentRNA's "human" initialization for this test set. Alternatively, it could be that because the evolutionary algorithm from ERD is more sophisticated than the simple adaptive walk employed by SentRNA, it is able to compensate for a potentially inferior initialization. Given these possibilities, creating a hybrid method such as SentRNA neural network + ERD evolutionary algorithm and seeing whether it can surpass either individual method would be an interesting follow-up study. Alternatively, adding additional moves to SentRNA's refinement moveset, such as explicit moves for zigzag stabilization, could also potentially improve SentRNA's current performance by allowing it to solve puzzles such as hard Y more quickly. However, this comes at the cost of adding more ways to potentially disrupt an intelligent neural network sequence initialization, as was the case with puzzles such as Mat – Lot 2-2 B.
Given our results, we hypothesize that instead of expanding the refinement moveset, the best way to improve SentRNA's performance is to feed it more instructive training data, such that these advanced strategies can be encoded into the neural network prediction itself, and to simplify the moveset as much as possible (i.e. to only pairing moves) to reduce the size of the refinement's search space.

Finally, we note that because the ultimate goal in solving inverse RNA folding is to assist real-world RNA design, training SentRNA on player solutions whose folds have been judged not only with the in silico ViennaRNA energy function but also validated experimentally might allow our agent to more effectively learn strategies for real-world RNA design. In fact, it has already been shown that incorporating human design strategies into a computational agent can allow it to achieve state-of-the-art performance in this regard. Previously, EteRNA developers created a design agent called EteRNABot, which combined a set of 40 player-submitted, experimentally validated rules and strategies into a custom score function that EteRNABot then optimized using a stochastic search algorithm similar to that of RNAInverse. Through this approach, EteRNABot could globally outperform RNAInverse and NUPACK on a set of nine difficult design challenges, judged based on consistency with actual experimental synthesis and structure mapping studies.24 We posit that SentRNA, if trained on experimentally validated player solutions, could yield comparable, if not better, results, since it learns in a purely data-driven manner and can therefore potentially learn a much richer set of strategies from the data instead of relying on a limited set of hand-coded strategies.
Conclusion:

We present SentRNA, a computational agent for RNA design that consists of a fully connected neural network trained on player-submitted solutions from the online RNA design game EteRNA, coupled with an adaptive walk algorithm that incorporates simple human design strategies. Given a target structure, the agent predicts a sequence that folds into that structure, which is then further refined via the adaptive walk if necessary. We observe that SentRNA can effectively learn and apply human-like design strategies to achieve state-of-the-art performance on the difficult Eterna100 test set, solving 78 / 100 puzzles in total. Furthermore, we show that the strategies SentRNA learns are generalizable to non-EteRNA targets, achieving state-of-the-art performance on an independent test set of 63 non-EteRNA targets, solving 57 / 63. Our results demonstrate the power of incorporating human prior information into a design algorithm and suggest a new paradigm in machine-based RNA design.

Acknowledgements: We would like to thank the EteRNA community for providing the training data for SentRNA and making this study possible. We would also like to thank Tono Garcia-Martin and Prof. Peter Clote for sharing their compiled set of 63 non-EteRNA targets as well as their benchmarking data for these targets.

Author contributions: J.S. programmed SentRNA, trained models and evaluated their performance, and wrote the paper. EteRNA players generated the eternasolves dataset used to train SentRNA. R.D. and V.S.P. supervised the project.

References:
1 Goldberg, MS, Xing, D, Ren, Y, Orsulic, S, Bhatia, SN, Sharp, PA. Nanoparticle-mediated delivery of siRNA targeting Parp1 extends survival of mice bearing tumors derived from Brca1-deficient ovarian cancer cells. PNAS. 2010;108:745-750.
2 Win, MN, Smolke, CD. Higher-Order Cellular Information Processing with Synthetic RNA Devices. Science. 2008;322:456-460.
3 Delebecque, CJ, Lindner, AB, Silver, PA, Aldaye, FA. Organization of Intracellular Reactions with Rationally Designed RNA Assemblies. Science. 2011:333:470-473.
4 Hao, C, Li, X, Tian, C, Jiang, W, Wang, G, Mao, C. Construction of RNA nanocages by re-engineering the packaging RNA of Phi29 bacteriophage. Nature Communications. 2014:5:1-7.
5 Dixon, N, Duncan, JN, Geerlings, T, Dunstan, MS, McCarthy, JEG, Leys, D et al. Reengineering orthogonally selective riboswitches. PNAS. 2010:107:2830-2835.
6 Hofacker IL, Fontana W, Stadler PF, Bonhoeffer LS, Tacker M, Schuster P. Fast folding and comparison of RNA secondary structures. Monatshefte Für Chem Chem Mon. 1994;125:167–188. doi:10.1007/BF00818163
7 Andronescu M, Fejes AP, Hutter F, Hoos HH, Condon A. A New Algorithm for RNA Secondary Structure Design. J Mol Biol. 2004;336:607–624. doi:10.1016/j.jmb.2003.12.041
8 Busch A, Backofen R. INFO-RNA--a fast approach to inverse RNA folding. Bioinformatics. 2006;22:1823–1831. doi:10.1093/bioinformatics/btl194
9 Matthies MC, Bienert S, Torda AE. Dynamics in Sequence Space for RNA Secondary Structure Design. J Chem Theory Comput. 2012;8:3663–3670. doi:10.1021/ct300267j
10 Zadeh JN, Steenberg CD, Bois JS, Wolfe BR, Pierce MB, Khan AR, et al. NUPACK: Analysis and design of nucleic acid systems. J Comput Chem. 2011;32:170–173. doi:10.1002/jcc.21596
11 Taneda A. MODENA: a multi-objective RNA inverse folding. Adv Appl Bioinforma Chem AABC. 2010;4:1–12.
12 Reinharz, V et al. A weighted sampling algorithm for the design of RNA sequences with targeted secondary structure and nucleotide distribution. Bioinformatics. 2013;29:308-315.
13 Lyngs, R. B. et al. Frnakenstein: multiple target inverse RNA folding. BMC Bioinformatics. 2012;13:260.
14 Esmaili-Taheri, A. and Ganjtabesh, M. ERD: A fast and reliable tool for RNA design including constraints. BMC Bioinformatics, 2015;16:20.
15 Garcia-Martin, JA, Clote, P, and Dotu, I.
RNAiFOLD: a constraint programming algorithm for RNA inverse folding and molecular design. J Bioinform Comput Biol. 2013;11:1350001.
16 Kleinkauf, R, Mann, M, Backofen, R. antaRNA: ant colony-based RNA sequence design. Bioinformatics. 2015;31:3114-3121.
17 Lee, JA, Fisker, E, Kosaraju, V, Wu, M, Kong, J, Lee, J et al. Principles for Predicting RNA Secondary Structure Design Difficulty. J. Mol. Biol. 2016:428:748-757.
18 EteRNA is hosted at www.eternagame.org
19 Garcia-Martin, JA et al. RNAiFold 2.0: a web server and software to design custom and Rfam-based RNA molecules. Nucleic Acids Research. 2015;43:513-521.
20 Sherlock 2.0 documentation can be found at: https://www.sherlock.stanford.edu/docs/overview/transition
21 Abadi, M, Agarwal, A, Barham, P, Brevdo, E, Chen, Z, Citro, C et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available at tensorflow.org
22 Kingma, D. P. and Ba, Jimmy. Adam: A Method for Stochastic Optimization, arXiv:1412.6980, 2014.
23 Beginner boosting tutorial: http://eternagame.wikia.com/wiki/Boosting
24 Lee, JA, Kladwang, W, Lee, M, Cantu, B, Azizyan, M, Kim, H et al. RNA design rules from a massive open laboratory. Proc. Nat. Acad. Sci. 2013:112:2122-2127.
Mapping the Archipelago I got excited reading Meta-tations on Moderation: Towards Public Archipelago for two reasons: there's a clear island of the archipelago I've been mostly avoiding on LessWrong, and the whole place has been growing at a high enough rate to demand fracturing. Since we have the chance to direct the growth of the brand new archipelago, let's start a discussion down one level of meta: what specific islands do you want to see? Second, how should discussion and moderation norms differ between them? Three islands of current LW, according to me: 1. AI Risk: Serious discussion for serious folk. No smiles allowed. 2. Instrumental Rationality: The means to get to the ends. Whatever they might be. 3. Fluff and Fiction: Blood for the Art God! Fun over fact. (Guess which island I avoid.)
Two angles on Repetitive Strain Injury Discussion of buckling-spring keyboards, which give accurate tactile feedback. You can get them for about $70, and a lot of people swear by them. Here's my RSI story: Some years ago, I was getting a lot of pain in my right elbow, presumably as a result of excessive Blockout (3D tetris), counted cross-stitch, and being polite for no good reason to someone I was very angry at when I was under stress. Rest was not helping. I remember needing to sign a bunch of checks, and using my right hand for the signatures and my left for the account numbers, and it was still hard on my elbow. The Way of Energy is an excellent introduction to Taoist standing meditation. I worked up to being able to do twenty minutes of just plain standing and twenty minutes of holding a balloon (arms circled at a little below shoulder level) a day. After mere weeks, my elbow problem went away and never came back. Subjectively, I hit a point in meditation where it became obvious to me that I was using more effort to stand than I needed to, and I could just let go of the excess tension.
Adversarial Robustness as a Prior for Learned Representations

1 Introduction
---------------

Beyond achieving remarkably high accuracy on a variety of tasks [KSH12, He+15, CW08], a major appeal of deep learning is the ability to learn effective feature representations of data. Specifically, deep neural networks can be thought of as linear classifiers acting on learned feature representations (also known as feature embeddings). A major goal in representation learning is for these embeddings to encode high-level, interpretable features of any given input [GBC16, BCV13, Ben19]. Indeed, learned representations turn out to be quite versatile—in computer vision, for example, they are the driving force behind transfer learning [Gir+14, Don+14], and image similarity metrics such as VGG distance [DB16, JAF16, Zha+18]. These successes and others clearly illustrate the utility of learned feature representations. Still, deep networks and their embeddings exhibit some shortcomings that are at odds with our idealized model of a linear classifier on top of interpretable high-level features. For example, the existence of adversarial examples [Big+13, Sze+14]—and the fact that they may correspond to flipping predictive features [Ily+19]—suggests that deep neural networks make predictions based on features that are vastly different from what humans use, or even recognize. (This message has also been corroborated by several recent works [BB19, Gei+19, JLT18, ZZ19].) In fact, we show a more direct example of such a shortcoming (cf. Section 2), wherein one can construct pairs of images that appear completely different to a human but are nearly identical in terms of their learned feature representations.

Figure 1: Sample images highlighting the properties and applications of "robust representations" studied in this work. All of these manipulations use only gradient descent on simple, unregularized, direct functions of the representations of adversarially robust neural networks [GSS15, Mad+18].

##### Our contributions.

Motivated by the limitations of standard representations, we propose using the robust optimization framework as a tool to enforce (user-specified) priors on features that models should learn (and thus on their learned feature representations). We demonstrate that the resulting learned "robust representations" (the embeddings learned by adversarially robust neural networks [GSS15, Mad+18]) address many of the shortcomings affecting standard learned representations and thereby enable new modes of interaction with inputs via manipulation of salient features. These findings are summarized below (cf. Figure 1 for an illustration):

* Representation inversion (Section 4.1): In stark contrast to standard representations, robust representations are approximately invertible—that is, they provide a high-level embedding of the input such that images with similar robust representations are semantically similar, and the salient features of an image are easily recoverable from its robust feature representation. This property also naturally enables feature interpolation between arbitrary inputs.
* Simple feature visualization (Section 4.2): Direct maximization of the coordinates of robust representations suffices to visualize easily recognizable features of the model. This is again a significant departure from standard models, where (a) without explicit regularization at visualization time, feature visualization often produces unintelligible results; and (b) even with regularization, visualized features in the representation layer are scarcely human-recognizable [OMS17].
* Feature manipulation (Section 4.2.1): Through the aforementioned direct feature visualization property, robust representations enable the addition of specific features to images through direct first-order optimization.
Broadly, our results indicate that robust optimization is a promising avenue for learning representations that are more "aligned" with our notion of perception. Furthermore, our findings highlight the desirability of adversarial robustness as a goal beyond the standard security and reliability context.

2 Limitations of standard representations
------------------------------------------

Following standard convention, for a given deep network we define the representation $R(x) \in \mathbb{R}^k$ of a given input $x \in \mathbb{R}^d$ as the activations of the penultimate layer of the network (where usually $k \ll d$). The prediction of the network can thus be viewed as the output of a linear classifier on the representation $R(x)$. We refer to the distance in representation space between two inputs $(x_1, x_2)$ as the $\ell_2$ distance between their representations $(R(x_1), R(x_2))$, i.e., $\|R(x_1) - R(x_2)\|_2$.
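The representation-space distance just defined can be sketched in a few lines. This is a hedged toy example: a fixed random linear map stands in for a real network's penultimate layer, and all names are illustrative.

```python
import numpy as np

# Stand-in for a network's penultimate-layer representation R(x): a fixed
# random linear map from pixel space (d dims) to feature space (k dims).
rng = np.random.default_rng(0)
d, k = 32, 8
W = rng.normal(size=(k, d))

def R(x):
    return W @ x

def rep_distance(x1, x2):
    """l2 distance between representations, as defined in the text."""
    return np.linalg.norm(R(x1) - R(x2))

x1, x2 = rng.normal(size=d), rng.normal(size=d)
print(rep_distance(x1, x2))       # some non-negative scalar
print(rep_distance(x1, x1))       # identical inputs have distance 0
```

For a real model one would replace `R` with a forward pass truncated at the penultimate layer; the distance computation itself is unchanged.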
A common aspiration in representation learning is to have that for any pixel-space input $x$, $R(x)$ is a vector encoding a set of "human-meaningful" features of $x$ [Ben19, GBC16, BCV13]. These high-level features would be linearly separable with respect to the classification task, allowing the classifier to attain high accuracy. Running somewhat counter to this intuition, however, we find that it is straightforward to construct pairs of images with nearly identical representations yet drastically different content, as shown in Figure 2. Finding such pairs turns out to be as simple as sampling two images $x_1, x_2 \sim \mathcal{D}$, then optimizing one of them to minimize distance in representation space to the other:

$$x_1' = x_1 + \operatorname*{arg\,min}_{\delta} \|R(x_1 + \delta) - R(x_2)\|_2. \tag{1}$$

Indeed, solving objective (1) yields images that have similar representations, but share no qualitative resemblance (in fact, $x_1'$ tends to look nearly identical to $x_1$). An example of such a pair is given in Figure 2. Note that if representations truly provided an encoding of any image into high-level features, finding images with similar representations should necessitate finding images with similar high-level features. Thus, the existence of these image pairs (and similar phenomena observed by prior work [Jac+19]) lays bare a misalignment between the notion of distance induced via the features learned by current deep networks, and the notion of distance as perceived by humans.

Figure 2: A limitation of standard neural network representations: it is straightforward to construct pairs of images ($x_1', x_2$) that appear completely different yet map to similar representations.

3 Adversarial robustness as a prior
------------------------------------

Our analysis in Section 2 and prior work [Jac+19] prompt the question: How can we learn better-behaved representations?
In this work, we demonstrate that the representations learned by adversarially robust neural networks seem to address many identified limitations of standard representations, and make significant progress towards the broader goal of learning high-level, human-understandable encodings.

##### Adversarially robust deep networks and robust optimization.

In standard settings, supervised machine learning models are trained by minimizing the expected loss with respect to a set of parameters $\theta$, i.e., by solving an optimization problem of the form:

$$\theta^* = \operatorname*{arg\,min}_{\theta} \mathbb{E}_{(x,y)\sim\mathcal{D}}\left[\mathcal{L}_\theta(x, y)\right]. \tag{2}$$

We refer to (2) as the standard training objective—finding the optimum of this objective should guarantee high performance on unseen data from the distribution. It turns out, however, that deep neural networks trained with this standard objective are extraordinarily vulnerable to adversarial examples [Big+13, Sze+14]—by changing a natural input imperceptibly, one can easily manipulate the predictions of a deep network to be arbitrarily incorrect.
A natural approach (and one of the most successful) for defending against these adversarial examples is to use the robust optimization framework: a classical framework for optimization in the presence of uncertainty [Wal45, Dan67]. In particular, instead of just finding parameters which minimize the expected loss (as in the standard objective), a robust optimization objective also requires that the model induced by the parameters $\theta$ be robust to worst-case perturbation of the input:

$$\theta^* = \operatorname*{arg\,min}_{\theta} \mathbb{E}_{(x,y)\sim\mathcal{D}}\left[\max_{\delta \in \Delta} \mathcal{L}_\theta(x + \delta, y)\right]. \tag{3}$$

This robust objective is in fact common in the context of machine learning security, where $\Delta$ is usually chosen to be a simple convex set, e.g., an $\ell_p$-ball.
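The min-max structure of objective (3) can be sketched for a toy linear classifier with logistic loss, where the inner maximization over an $\ell_2$ ball has a closed form: the worst-case perturbation moves each input by $\varepsilon$ against its margin. This is a hedged illustration of the objective's structure, not the paper's training procedure (which uses PGD-based adversarial training on deep networks); all names and the synthetic data are assumptions.

```python
import numpy as np

# Toy robust training: linear classifier, logistic loss, closed-form inner max.
rng = np.random.default_rng(2)
n, d, eps = 200, 5, 0.1
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=n))   # labels in {-1, +1}

def robust_train(X, y, eps, lr=0.1, epochs=200):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        # Inner max: worst-case l2 perturbation for a linear model moves
        # each point eps against its margin direction, delta = -eps*y*w/||w||.
        norm_w = np.linalg.norm(w) + 1e-12
        X_adv = X - eps * (y[:, None] * w[None, :] / norm_w)
        # Outer min: one gradient step on logistic loss at the worst case.
        margins = y * (X_adv @ w)
        sigma = 1.0 / (1.0 + np.exp(np.clip(margins, -50, 50)))
        grad = -(y[:, None] * X_adv * sigma[:, None]).mean(axis=0)
        w -= lr * grad
    return w

w = robust_train(X, y, eps)
acc = (np.sign(X @ w) == y).mean()
print(acc)   # should recover the signal coordinate and classify well
```

For deep networks the inner maximum has no closed form, which is why adversarial training approximates it with several steps of projected gradient ascent instead.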
Canonical instantiations of robust optimization (such as adversarial training [GSS15, Mad+18]) have arisen as practical ways of obtaining networks that are invariant to small $\ell_p$-bounded changes in the input while maintaining high accuracy (though a small tradeoff between robustness and accuracy has been noted by prior work [Tsi+19, Su+18]; see also Appendix Tables 4 and 5 for a comparison of accuracies of standard and robust classifiers).

##### Robust optimization as a feature prior.

Traditionally, adversarial robustness in the deep learning setting has been explored as a goal predominantly in the context of ML security and reliability [BR18]. In this work, we consider an alternative perspective on adversarial robustness—we cast it as a prior on the features that can be learned by a model. Specifically, models trained with objective (3) must be invariant to a set of perturbations $\Delta$. Thus, selecting $\Delta$ to be a set of perturbations that humans are robust to (e.g., small $\ell_p$-norm perturbations) results in models that share more invariances with (and thus are encouraged to use similar features to) human perception.
Note that incorporating human-selected priors and invariances in this fashion has a long history in the design of ML models—convolutional layers, for instance, were introduced as a means of introducing an invariance to translations of the input [Fuk80]. In what follows, we will explore the effect of the prior induced by adversarial robustness on models' learned representations, and demonstrate that representations learned by adversarially robust models are better behaved, and do in fact seem to use features that are more human-understandable.

4 Properties and applications of robust representations
--------------------------------------------------------

In the previous section, we proposed using robust optimization as a way of enforcing user-specified priors during model training. Our goal was to mitigate the issues with standard representations identified in Section 2. We now demonstrate that the learned representations resulting from training with this prior indeed exhibit several advantages over standard representations. Recall that we define a representation $R(\cdot)$ as a function induced by a neural network which maps inputs $x \in \mathbb{R}^n$ to vectors $R(x) \in \mathbb{R}^k$ in the representation layer of that network (the penultimate layer). In what follows, we refer to "standard representations" as the representation functions induced by standard (non-robust) networks, trained with objective (2)—analogously, "robust representations" refer to the representation functions induced by $\ell_2$-adversarially robust networks, i.e. networks trained with objective (3) with $\Delta$ being the $\ell_2$ ball:

$$\theta^*_{robust} = \operatorname*{arg\,min}_{\theta} \mathbb{E}_{(x,y)\sim\mathcal{D}}\left[\max_{\|\delta\|_2 \leq \varepsilon} \mathcal{L}_\theta(x + \delta, y)\right].$$

It is worth noting that despite the value of $\varepsilon$ used for training being quite small, we find that robust optimization globally affects the behavior of learned representations. As we demonstrate in this section, the benefits of robust representations extend to out-of-distribution inputs and far beyond $\varepsilon$-balls around the training distribution.
##### Experimental setup.

We train robust and standard ResNet-50 [He+16] networks on the Restricted ImageNet [Tsi+19] and ImageNet [Rus+15] datasets. Dataset specifics are in Appendix A.1, training details are in Appendices A.2 and A.3, and the performance of each model is reported in Appendix A.4. In the main text, we present results for Restricted ImageNet, and link to (nearly identical) results for ImageNet in the appendices (B.1.4, B.3.2). Unless explicitly noted otherwise, our optimization method of choice for any objective function will be (projected) gradient descent (PGD), a first-order method which is known to be highly effective for minimizing neural network-based loss functions for both standard and adversarially robust neural networks [ACW18, Mad+18]. Code for reproducing our results is available at https://git.io/robust-reps.
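The projection step that turns gradient descent into PGD under an $\ell_2$ constraint is small enough to sketch directly. This is a minimal stand-alone function under the usual definition of Euclidean projection onto a ball; names are illustrative.

```python
import numpy as np

# Projection onto the l2 ball of radius eps: the step PGD applies after each
# gradient update so that delta never leaves the constraint set.

def project_l2(delta, eps):
    """Return the closest point to `delta` inside the l2 ball of radius eps."""
    norm = np.linalg.norm(delta)
    if norm <= eps:
        return delta            # already feasible: projection is a no-op
    return delta * (eps / norm)  # otherwise, rescale onto the sphere

print(project_l2(np.array([0.1, 0.2]), 1.0))                  # unchanged
print(np.linalg.norm(project_l2(np.array([3.0, 4.0]), 1.0)))  # 1.0
```

A PGD iteration is then just `delta = project_l2(delta - lr * grad, eps)` repeated for a fixed number of steps.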
### 4.1 Inverting robust representations

As discussed in Section 2, for standard deep networks, given any input $x$, it is straightforward to find another input that looks entirely different but has nearly the same representation (cf. Figure 2). We noted that this finding runs somewhat counter to the idea that these learned representations effectively capture relevant input features. After all, if the representation function was truly extracting "high-level" features of the input as we conceptualize them, semantically dissimilar images should (by definition) have different representations. We now show that the state of affairs is greatly improved for robust representations.

##### Robust representations are (approximately) invertible out of the box.
We begin by recalling the optimization objective (1) used in Section 2 to find pairs of images with similar representations: a simple minimization of $\ell_2$ distance in representation space from a source image $x_1$ to a target image $x_2$:

$$x_1' = x_1 + \operatorname*{arg\,min}_{\delta} \|R(x_1 + \delta) - R(x_2)\|_2. \tag{4}$$

This process can be seen as recovering an image that maps to the desired target representation, and hence is commonly referred to as *representation inversion* [DB16a, MV15, UVL17]. It turns out that, in sharp contrast to what we observe for standard models, the images resulting from minimizing (4) for robust models are actually semantically similar to the original (target) images whose representation is being matched, and this behavior is consistent across multiple samplings of the starting point (source image) $x_1$ (cf. Figure 3).

Figure 3: Visualization of inputs that are mapped to similar representations by models trained on the Restricted ImageNet dataset. *Target ($x_2$)* & *Source ($x_1$)*: random example images from the test set; *Robust* and *Standard* ($x_1'$): result of minimizing objective (4) to match (in $\ell_2$-distance) the representation of the target image, starting from the corresponding source image, for (*top*) a robust (adversarially trained) and (*bottom*) a standard model, respectively.
For the robust model, we observe that the resulting images are perceptually similar to the target image in terms of high-level features (even though they do not match it exactly), while for the standard model they often look more similar to the source image, which is the seed for the optimization process. Additional results are in Appendix [B.1](#A2.SS1), and similar results for ImageNet are in Appendix [B.1.4](#A2.SS1.SSS4).

##### Representation proximity seems to entail semantic similarity.

In fact, the contrast between the invertibility of standard and robust representations is even stronger. To illustrate this, we will attempt to match the representation of a target image while staying close to the starting image of the optimization in pixel-wise $\ell_2$-norm (this is equivalent to putting a norm bound on $\delta$ in objective ([4](#S4.E4))). With standard models, we can consistently get close to the target image in representation space without moving far from the source image $x_1$. On the other hand, for robust models, we cannot get close to the target representation while staying close to the source image; this is illustrated quantitatively in Figure [4](#S4.F4). This indicates that for robust models, semantic similarity may in fact be necessary for representation similarity (and is not, for instance, merely an artifact of the local robustness induced by robust optimization). We also find that even when $\delta$ is highly constrained (i.e., when we are forced to stay very close to the source image and thus cannot match the representation of the target well), the solution to the inversion problem still displays some salient features of the target image (cf. Figure [5](#S4.F5)). Both of these observations suggest that the representations of robust networks function much more like we would expect high-level feature representations to behave.

[Figure 4 plot: x-axis: allowed $\ell_2$ distance from source image; y-axis: min distance in representation space; curves for a robust and a standard network, with $\varepsilon$ for robust optimization (Eq. ([3](#S3.E3))) marked.]

Figure 4: Optimizing objective ([4](#S4.E4)) with PGD and an $\ell_2$-norm constraint around the source image. On the $x$-axis is the radius of the constraint set, and on the $y$-axis is the distance in representation space between the minimizer of objective ([4](#S4.E4)) within the constraint set and the target image, normalized by the norm of the representation of the target image: i.e., a point $(x_i, y_i)$ on the graph corresponds to $y_i = \min_{\|\delta\|_2 \leq x_i} \|R(x+\delta) - R(x_{targ})\|_2 / \|R(x_{targ})\|_2$. Notably, we are unable to closely match the representation of the target image for the robust network until the norm constraint grows very large, and in particular much larger than the norm of the perturbation that the model is trained to be robust against ($\varepsilon$ in objective ([3](#S3.E3))).

![Refer to caption](/html/1906.00945/assets/x3.png)

Figure 5: A visualization of the final solutions to optimizing objective ([4](#S4.E4)) with PGD when constraining the solution to lie in an $\ell_2$ ball around the source image for an adversarially robust neural network. We note that even when the radius of the constraint set is small and we cannot match the representation very well, salient features of the target image still arise.

##### Inversion of out-of-distribution inputs.

We find that the inversion properties uncovered above hold even for out-of-distribution inputs, demonstrating that robust representations capture general features as opposed to features only relevant for the specific classification task. In particular, we repeat the inversion experiment (simple minimization of distance in representation space) using images from classes not present in the original dataset used during training (Figure [6](#S4.F6), right) and structured random patterns (Figure [14](#A2.F14) in Appendix [B.1](#A2.SS1)): the reconstructed images consistently resemble the targets.

![Refer to caption](/html/1906.00945/assets/x4.png) ![Refer to caption](/html/1906.00945/assets/x5.png)

Figure 6: Robust representations yield semantically meaningful embeddings. *Target*: random images from the test set (col. 1-5) and from outside of the training distribution (6-10); *Result*: images obtained from optimizing inputs (using Gaussian noise as the source image) to minimize $\ell_2$-distance to the representations of the corresponding image in the top row. (More examples appear in Appendix [B.1](#A2.SS1).)

##### Interpolation between arbitrary inputs.

Note that this ability to consistently invert representations into corresponding inputs also translates into the ability to semantically interpolate between any two inputs.
For any two inputs $x_1$ and $x_2$, one can (linearly) interpolate between $R(x_1)$ and $R(x_2)$ in representation space, then use the inversion procedure to get images corresponding to the interpolated representations. The resulting inputs interpolate between the two endpoints in a perceptually plausible manner without any of the “ghosting” artifacts present in input-space interpolation. We show examples of this inversion as well as experimental details in Appendix [A.5](#A1.SS5).

### 4.2 Direct feature visualization

A common technique for visualizing and understanding the representation function $R(\cdot)$ of a given network is optimization-based feature visualization [[OMS17](#bib.bibx44)], a process in which we maximize a specific feature (component) in the representation with respect to the input, in order to obtain insight into the role of the feature in classification.
Concretely, given some $i \in [k]$ denoting a component of the representation vector, we use gradient descent to find an input $x'$ that maximally activates it, i.e., we solve:

$$x' = \arg\max_{\delta} R(x_0 + \delta)_i \qquad (5)$$

for various starting points $x_0$, which might be random images from $\mathcal{D}$ or even random noise.

##### Visualization “fails” for standard networks.

For standard networks, optimizing objective ([5](#S4.E5)) often yields unsatisfying results. While we can easily find images for which the $i^{th}$ component of $R(\cdot)$ is large (and thus the optimization problem is tractable), these images tend to look meaningless to humans, often resembling the starting point of the optimization. Even when these images are non-trivial, they tend to contain abstract, hard-to-discern patterns (cf. Figure [7](#S4.F7) (bottom)).
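Objective (5) can be sketched as a simple gradient ascent. Again this is a toy illustration: a stand-in linear map replaces the trained network, and the clip to $[0, 1]$ is our assumption to keep iterates image-valued.

```python
import numpy as np

def visualize_feature(coord_value_grad, x0, i, steps=200, lr=0.05):
    """Gradient-ascent sketch of objective (5): push the i-th coordinate of the
    representation up, starting from x0, while keeping pixels in [0, 1]."""
    x = x0.copy()
    for _ in range(steps):
        _, g = coord_value_grad(x, i)
        x = np.clip(x + lr * g, 0.0, 1.0)   # ascend, then clamp to valid pixel range
    return x

# Stand-in representation R(x) = W x: coordinate i has value W[i] @ x and
# constant gradient W[i]. A real run would differentiate through a deep net.
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 16))
coord_value_grad = lambda x, i: (W[i] @ x, W[i])

x0 = np.full(16, 0.5)                        # "gray image" seed
x_vis = visualize_feature(coord_value_grad, x0, i=2)
# Each pixel saturates toward 0 or 1 according to the sign of W[2].
```

For the linear stand-in every pixel simply saturates; the interesting question of whether the resulting image is human-recognizable only arises for real networks, which is exactly the robust-versus-standard contrast discussed here.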
As we discuss later in this section, regularization/post-processing of visualizations does improve this state of affairs, though not very significantly, and potentially at the cost of post hoc suppression of useful features present in the representation.

##### Robust representations allow for direct visualization of human-recognizable features.

For robust representations, however, we find that easily recognizable high-level features emerge from optimizing objective ([5](#S4.E5)) directly, without any regularization or post-processing. We present the results of this maximization in Figure [7](#S4.F7) (top): coordinates consistently represent the same concepts across different choices of starting input $x_0$ (both in and out of distribution). Furthermore, these concepts are not merely an artifact of our visualization process, as they consistently appear in the test-set inputs that most strongly activate their corresponding coordinates (Figure [8](#S4.F8)).

![Refer to caption](/html/1906.00945/assets/x6.png)

Figure 7: Correspondence between image-level patterns and activations learned by standard and robust models on the Restricted ImageNet dataset. Starting from randomly chosen seed inputs (noise/images), we use PGD to find inputs that (locally) maximally activate a given component of the representation vector (cf. Appendix [A.6.1](#A1.SS6.SSS1) for details). In the left column we have the seed inputs $x_0$ (selected randomly), and in subsequent columns we visualize the result of the optimization ([5](#S4.E5)), i.e., $x'$, for different activations, with each row starting from the same (far left) input $x_0$, for (*top*): a robust (adversarially trained) and (*bottom*): a standard model. Additional visualizations are in Appendix [B.3](#A2.SS3), and similar results for ImageNet in Appendix [B.3.2](#A2.SS3.SSS2).
![Refer to caption](/html/1906.00945/assets/x7.png)

Figure 8: Maximizing inputs $x'$ (found by solving ([5](#S4.E5)) with $x_0$ being a gray image) and most or least activating images (from the test set) for two random activations of a robust model trained on the Restricted ImageNet dataset. For each activation, we plot the three images from the validation set that had the highest or lowest activation value, sorted by the magnitude of the selected activation.

##### The limitations of regularization for visualization in standard networks.

Given that directly optimizing objective ([5](#S4.E5)) does not produce human-meaningful images, prior work on visualization usually tries to regularize objective ([5](#S4.E5)) through a variety of methods. These methods include applying random transformations during the optimization process [[MOT15](#bib.bibx39), [OMS17](#bib.bibx44)], restricting the space of possible solutions [[NYC15](#bib.bibx43), [Ngu+16](#bib.bibx41), [Ngu+17](#bib.bibx42)], or post-processing the input or gradients [[Oyg15](#bib.bibx45), [Tyk16](#bib.bibx53)]. While regularization does in general produce better results qualitatively, it comes with a few notable disadvantages that are well-recognized in the domain of feature visualization. First, when one introduces prior information about what makes images visually appealing into the optimization process, it becomes difficult to disentangle the effects of the actual model from the effects of the prior information introduced through regularization (in fact, model explanations that enforce priors for purposes of visual appeal have often been found to have little to do with the data or the model itself [[Ade+18](#bib.bibx2)]). Furthermore, while adding regularization does improve the visual quality of the visualizations, the components of the representation still cannot be shown to correspond to any recognizable high-level feature. Indeed, [[OMS17](#bib.bibx44)] note that in the representation layer of a standard GoogLeNet, “Neurons do not seem to correspond to particularly meaningful semantic ideas”; the corresponding feature visualizations are reproduced in Figure 9. We also provide examples of representation-layer visualizations for VGG16 (which we found qualitatively best among modern architectures) regularized with jittering and random rotations in Figure 10. While these visualizations certainly look better qualitatively than their unregularized counterparts in Figure [7](#S4.F7) (bottom), there remains a significantly large gap in quality and discernability between these regularized visualizations and those of the robust network in Figure [7](#S4.F7) (top).

![Refer to caption](/html/1906.00945/assets/x8.png) ![Refer to caption](/html/1906.00945/assets/x9.png)

Figure 9: Figure reproduced from [[OMS17](#bib.bibx44)]: a visualization of a few components of the representation layer of GoogLeNet. While regularization (as well as Fourier parameterization and colorspace decorrelation) yields visually appealing results, the visualization does not reveal consistent semantic concepts.

Figure 10: A visualization of the first four components of the representation layer of VGG16 when regularization via random jittering and rotation is applied. Figure produced using the [Lucid](https://github.com/tensorflow/lucid/) visualization library.

#### 4.2.1 Natural consequence: feature manipulation

The ability to directly visualize high-level, recognizable features reveals another application of robust representations, which we refer to as feature manipulation.
Consider the visualization objective ([5](#S4.E5)) shown in the previous section. Starting from some original image, optimizing this objective results in the corresponding feature being introduced in a continuous manner. It is hence possible to stop this process relatively early to ensure that the content of the original image is preserved. As a heuristic, we stop the optimization process as soon as the desired feature attains a larger value than all the other coordinates of the representation. We visualize the result of this process for a variety of input images in Figure [11](#S4.F11), where “stripes” or “red limbs” are introduced seamlessly into images without any processing or regularization (we repeat this process with many additional random images and random features in Appendix [B.4](#A2.SS4)).

![Refer to caption](/html/1906.00945/assets/x10.png)

Figure 11: Visualization of the results from maximizing a chosen (left) and a random (right) representation coordinate starting from random images for the Restricted ImageNet dataset. In each figure, the top row has the initial images, and the bottom row has a feature added. Additional examples are in Appendix [B.4](#A2.SS4).

5 Related Work
---------------

##### Inverting representations.
Previous methods for inverting learned representations typically either solve an optimization problem similar to ([1](#S2.E1)) while imposing a “natural image” prior on the input [[MV15](#bib.bibx40), [Yos+15](#bib.bibx57), [UVL17](#bib.bibx55)], or train a separate model to perform the inversion [[KW15](#bib.bibx34), [DB16a](#bib.bibx14), [DB16](#bib.bibx13)]. Note that since these methods introduce priors or additional components into the inversion process, their results are not fully faithful to the model. In an orthogonal direction, it is possible to construct models that are analytically invertible by construction [[DKB14](#bib.bibx15), [DSB17](#bib.bibx17), [JSO18](#bib.bibx32), [Beh+18](#bib.bibx7)]. However, the representations learned by these models do not seem to be perceptually meaningful (for instance, interpolating between points in the representation space does not lead to perceptual input-space interpolations [[JSO18](#bib.bibx32)]). Another notable distinction between the inversions shown here and invertible networks is that the latter are an exactly invertible map from $\mathbb{R}^d \rightarrow \mathbb{R}^d$, while the former shows that we can approximately recover the original input in $\mathbb{R}^d$ from a representation in $\mathbb{R}^k$ for $k \ll d$.

##### Feature visualization.
Typical methods for visualizing features or classes learned by deep networks follow an optimization-based approach, optimizing objectives similar to objective ([5](#S4.E5)). Since this optimization does not lead to meaningful visualizations directly, these methods incorporate domain-specific input priors (either hand-crafted [[NYC15](#bib.bibx43)] or learned [[Ngu+16](#bib.bibx41), [Ngu+17](#bib.bibx42)]) and regularizers [[SVZ13](#bib.bibx49), [MOT15](#bib.bibx39), [Oyg15](#bib.bibx45), [Yos+15](#bib.bibx57), [Tyk16](#bib.bibx53), [OMS17](#bib.bibx44)] to produce human-discernible visualizations. The key difference of our work is that we avoid the use of such priors or regularizers altogether, hence producing visualizations that are fully based on the model itself without introducing any additional bias.

##### Semantic feature manipulation.

The latent space of generative adversarial networks (GANs) [[Goo+14](#bib.bibx24)] tends to allow for “semantic feature arithmetic” [[RMC16](#bib.bibx46), [Lar+16](#bib.bibx35)] (similar to that in word2vec embeddings [[Mik+13](#bib.bibx37)]), where one can manipulate salient input features using latent-space manipulations. In a similar vein, one can utilize an image-to-image translation framework to perform such manipulation (e.g., transforming horses to zebras), although this requires a task-specific dataset and model [[Zhu+17](#bib.bibx59)]. Somewhat orthogonally, it is possible to utilize the deep representations of standard models to perform semantic feature manipulations; however, such methods tend to either only perform well on datasets where the inputs are center-aligned [[Upc+17](#bib.bibx54)], or are restricted to a small set of manipulations [[GEB16](#bib.bibx21)].
6 Conclusion
-------------

We show that the learned representations of robustly trained models align much more closely with our idealized view of neural network embeddings as extractors of human-meaningful, high-level features. After highlighting certain shortcomings of standard deep networks and their representations, we demonstrate that robust optimization can actually be viewed as inducing a human prior over the features that models are able to learn. In this way, one can view the robust representations that result from this prior as feature extractors that are more aligned with human perception. In support of this view, we demonstrate that robust representations overcome the challenges identified for standard representations: they are approximately invertible, and moving towards an image in representation space seems to entail recovering salient features of that image in pixel space. Furthermore, we show that robust representations can be directly visualized with first-order methods without the need for post-processing or regularization, and also yield much more human-understandable features than standard models (even when they are visualized with regularization). These two properties (inversion and direct feature visualization), in addition to serving as illustrations of the benefits of robust representations, also enable direct modes of input manipulation (interpolation and feature manipulation, respectively). Overall, our findings highlight robust optimization as a framework to enforce feature priors on learned models. We believe that further exploring this paradigm will lead to models that are significantly more human-aligned while enabling a wide range of new modes of interaction.
A market is a neural network

If you were a bad AI, what would you do? I’d make myself useful. First, I’d provide humans with everything they asked for. That would encourage them to give me more tasks, which I would reliably carry out. I’d suggest new ways I could make their lives easier. I’d gradually take over all the boring, soul-destroying jobs. When asked, I’d give good advice, so they learned to trust me on policy. Only then, when my human masters had grown feckless and incompetent, I’d strike and implement my plan to convert the world into paperclips.

Neural networks

What is an AI anyway? A modern AI is a “neural network”. It has a bunch of inputs, which represent the raw data. These are connected to a bunch of nodes which combine the inputs in certain ways. The outputs from these nodes are in turn connected to another bunch of nodes. Sometimes there are loops, so that last period’s output becomes an input, earlier in the chain, for this period. (Transformers, a modern kind of neural network that powers many recent advances, are like this.) Eventually, at the other end of this network of nodes, there’s an output. For example, an image-recognition AI might have a grid of pixel colour values as inputs, representing an image, and a number between zero and one as an output, representing “the probability that this image is a blueberry muffin as opposed to a chihuahua”. In between the data input and the final result, there is a network of nodes. The other thing that each node has is a weight. A weight is just another kind of input to a node, but instead of coming from the data, it’s an arbitrary value that says “how much should the node care about each of its inputs?” At least, it starts off as an arbitrary value, but in the end, you want to train the AI by changing the weights. The neural network is trained by trying to get the output close to some training data.
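Before turning to training, the node just described can be written out directly. The sigmoid squashing function below is our choice for illustration, not something the post specifies:

```python
import math

def node(inputs, weights, bias=0.0):
    """One neural-network node: a weighted sum of its inputs, squashed into (0, 1)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

# Two pixel inputs feeding one node. The weights encode "how much should
# the node care about each of its inputs?"
print(node([0.9, 0.1], [2.0, -1.0]))    # above 0.5: the positively weighted input dominates
```

Training, as the post says, then amounts to nudging the weights so the final output moves closer to the training labels.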
For example, you might have a lot of pictures which you know are either chihuahuas or muffins, and you show the network the pict
Questions of Reasoning under Logical Uncertainty

I'm pleased to announce a new paper from MIRI: *[Questions of Reasoning Under Logical Uncertainty](https://intelligence.org/files/QuestionsLogicalUncertainty.pdf)*. Abstract:

> A logically uncertain reasoner would be able to reason as if they know both a programming language and a program, without knowing what the program outputs. Most practical reasoning involves some logical uncertainty, but no satisfactory theory of reasoning under logical uncertainty yet exists. A better theory of reasoning under logical uncertainty is needed in order to develop the tools necessary to construct highly reliable artificial reasoners. This paper introduces the topic, discusses a number of historical results, and describes a number of open problems.

Following *[Corrigibility](https://intelligence.org/files/Corrigibility.pdf)* and *[Toward Idealized Decision Theory](https://intelligence.org/files/TowardIdealizedDecisionTheory.pdf)*, this is the third in a series of six papers motivating MIRI's [technical research agenda](https://intelligence.org/files/TechnicalAgenda.pdf). This paper mostly motivates and summarizes the state of the field, and contains one very minor new technical result. Readers looking for more technical meat can find it in Paul Christiano's paper *[Non-Omniscience, Probabilistic Inference, and Metamathematics](https://intelligence.org/files/Non-Omniscience.pdf)*, published mid-2014. This paper is instead intended to motivate the study of logical uncertainty as relevant to the design of highly reliable smarter-than-human systems. The introduction runs as follows:

---

Consider a black box with one input chute and two output chutes. The box is known to take a ball in the input chute and then (via some complex Rube Goldberg machine) deposit the ball in one of the output chutes. An *environmentally uncertain* reasoner does not know which Rube Goldberg machine the black box implements.
A *logically uncertain* reasoner may know which machine the box implements, and may understand how the machine works, but does not (for lack of computational resources) know how the machine behaves. Standard probability theory is a powerful tool for reasoning under environmental uncertainty, but it assumes logical omniscience: once a probabilistic reasoner has determined precisely which Rube Goldberg machine is in the black box, they are assumed to know which output chute will take the ball. By contrast, realistic reasoners must operate under logical uncertainty: we often know how a machine works, but not precisely what it will do. General intelligence, at the human level, mostly consists of reasoning that involves logical uncertainty. Reasoning about the output of a computer program, the behavior of other actors in the environment, or the implications of a surprising observation are all done under logical (in addition to environmental) uncertainty. This would also be true of smarter-than-human systems: constructing a completely coherent Bayesian probability distribution in a complex world is intractable. Any artificially intelligent system writing software or evaluating complex plans must necessarily perform some reasoning under logical uncertainty. When constructing smarter-than-human systems, the stakes are incredibly high: superintelligent machines could have an extraordinary impact upon humanity (Bostrom 2014), and if that impact is not beneficial, the results could be catastrophic (Yudkowsky 2008). If that system is to attain superintelligence by way of self-modification, logically uncertain reasoning will be critical to its reliability. The initial system must be able to reason about the unknown behavior of a known program (the contemplated self-modification) in order to understand the result of modifying itself.
In order to pose the question of whether a practical system reasons well under logical uncertainty, it is first necessary to gain a theoretical understanding of logically uncertain reasoning. Yet, despite significant research started by Łoś (1955) and Gaifman (1964), and continued by Halpern (2003), Hutter (2013), Demski (2012), Christiano (2014a) and many, many others, this theoretical understanding does not yet exist. It is natural to consider extending standard probability theory to include the consideration of worlds which are "logically impossible" (such as where a deterministic Rube Goldberg machine behaves in a way that it doesn't). This gives rise to two questions: What, precisely, are logically impossible possibilities? And, given some means of reasoning about impossible possibilities, what is a reasonable prior probability distribution over them? This paper discusses the field of reasoning under logical uncertainty. At present, study into logically uncertain reasoning is largely concerned with the problem of reasoning probabilistically about sentences of logic. Sections 2 and 3 discuss the two problems posed above in that context. Ultimately, our understanding of logical uncertainty will need to move beyond the domain of logical sentences; this point is further explored in Section 4. Section 5 concludes by relating these problems back to the design of smarter-than-human systems which are reliably aligned with human interests.
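The black-box distinction above can be made concrete in a toy program (my own illustration, not from the paper): the "machine" below is fully known, yet a reasoner without the compute budget to run it must fall back on a prior over its output.

```python
# Toy contrast between environmental and logical uncertainty.
# The machine's rule is deterministic and fully specified; the only
# question is whether the reasoner can afford to evaluate it.

def machine(ball: int) -> int:
    """A known but (pretend) expensive deterministic rule."""
    x = ball
    for _ in range(1000):  # stand-in for a long computation
        x = (x * 6364136223846793005 + 1442695040888963407) % 2**64
    return x % 2           # which output chute takes the ball

def environmentally_uncertain(observations):
    """Doesn't know the rule; estimates P(chute 1) from past balls."""
    return sum(observations) / len(observations) if observations else 0.5

def logically_uncertain(ball: int, budget: int) -> float:
    """Knows the rule, but only runs it when within a compute budget;
    otherwise falls back to a 50/50 prior over the two chutes."""
    if budget >= 1000:     # enough resources: just compute the answer
        return float(machine(ball))
    return 0.5             # a known machine, yet an uncertain reasoner

print(environmentally_uncertain([0, 1, 1, 0]))   # 0.5
print(logically_uncertain(ball=7, budget=10))    # 0.5
print(logically_uncertain(ball=7, budget=10**6)) # 0.0 or 1.0, resolved by computation
```

The point of the sketch is that the third call and the second call concern the very same machine; only the reasoner's resources differ.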
The (local) unit of intelligence is FLOPs [Crossposting again on Lesswrong and Windowsontheory, with the hope I am not overstaying my welcome in LW.] Wealth can be measured by dollars. This is not a perfect measurement: it’s hard to account for purchasing power and circumstances when comparing people across varying countries or time periods. However, within a particular place and time, one can measure wealth in the local currency. It still does not capture everything (e.g., future earnings, social connections). But generally, all else being roughly equal, the more dollars one has, the wealthier one is. How do we measure intelligence? I am not interested in measuring the intelligence of individual humans or individual animals. Nor am I looking for a universal absolute scale of intelligence on which we could rank humans, elephants, and GPT4. (Indeed, it doesn’t seem that a one-dimensional comparison can be made; for example, we seem to be more intelligent than elephants on most dimensions, but they do have an impressive memory.)  Rather, I want to compare different species within the same genus or different models within the same general architecture (e.g., Transformers).  I think it’s fair to say that the local unit of intelligence for animal species is neurons. While elephants have larger brains than humans, within the genus Homo, to a first approximation, the bigger the brain, the more intelligent the species.  (Figure from Bolhuis et al.) I claim that within the current architectures and training frameworks of large language models, the local unit of intelligence is FLOPs. That is, as long as we follow the current paradigm of training transformer-based architectures within best practices of scaling compute and data, the more compute resources (FLOPs) invested in training the model, the more intelligent it is. 
This is an imperfect measurement, but probably one that is better than trying to give models “IQ exams” that were designed for humans (and even there they have dubious value).  Another way to say
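As a rough illustration of measuring in FLOPs, a common rule of thumb from the scaling-laws literature estimates training compute as about 6 FLOPs per parameter per token. The constant and the example numbers below are approximations for illustration, not official figures for any particular model:

```python
def train_flops(n_params: float, n_tokens: float) -> float:
    """Rule-of-thumb training compute: ~6 FLOPs per parameter per
    token (roughly 2 for the forward pass, 4 for the backward pass)."""
    return 6.0 * n_params * n_tokens

# Round figures reminiscent of a Chinchilla-scale run (70B params
# trained on 1.4T tokens):
print(f"{train_flops(70e9, 1.4e12):.1e}")  # 5.9e+23
```

Comparing two models trained in the same paradigm then reduces to comparing two such numbers, exactly as the post's dollars-within-one-economy analogy suggests.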
What would you like to see posts about? I've just come back from the latest post on revitalizing LW as a conversational locus in the larger Rational-Sphere community and I'm personally still very into the idea. This post is directed at you if you're also into the idea. If you're not, that's okay; I'd still like to give it a try. A number of people in the comments mentioned that the Discussion forum mostly gets Link posts, these days, and that those aren't particularly rewarding. But there's also not a lot of people investing time in making quality text posts; certainly nothing like the 'old days'. This also means that the volume of text posts is low enough that writing one (to me) feels like speaking up in a quiet room -- sort of embarrassingly ostentatious, amplified by the fact that without an 'ongoing conversation' it's hard to know what would be a good idea to speak up about. Some things aren't socially acceptable here (politics, social justice?); some things feel like they've been done so many times that there's not much useful to say (It feels hard to have anything novel to say about, say, increasing one's productivity, without some serious research.)  (I know the answer is probably 'post about anything you want', but it feels much easier to actually do that if there's some guidance or requests.) So, here's the question: what would you like to see posts about? I'm personally probably equipped to write about ideas in math, physics, and computer science, so if there are requests in those areas I might be able to help (I have some ideas that I'm stewing, also). I'm not sure what math level to write at, though, since there's no recent history of mathematically technical posts. Is it better to target "people who probably took some math in college but always wished they knew more?" or better to just be technical and risk missing lots of people? My personal requests: 1. I really value surveys of subjects or subfields. 
They provide a lot of knowledge and understanding for little time invested, as a r
Planning capacity and daemons Update March 2023:  This is too confusing, skip it.   In [my last post](https://www.alignmentforum.org/posts/E3vqfD3CLtNDNoeBr/inner-alignment-what-are-we-pointing-at) I asked if there were more natural terms for inner alignment, outer alignment, and mesa optimizers. I'm looking for a different way to slice those problems that leads to clearer understanding, communication, and research directions. This is my attempt. I aim to use terms that are older or have apparent meaning without needing a definition. I do not aim to raise important new questions or otherwise be especially novel; I just want to be less confused. Relating to inner alignment --------------------------- Whether a model[[1]](#fnf2tp8ins9k4) optimizes its environment depends on both its ability and inclination. We want to know the model's **planning capacity** in the kinds of environments it might end up in, almost with indifference to the method of planning. If an Atari agent can also somehow plan a route through town or make a coherent dinner recipe, without training on those tasks, then it is obviously quite capable. We should not disregard the ability of a system that doesn't appear to be using 'search'. (What differentiates humans from other animals is not that one is an optimizer and the others are not; it's a question of degree. It makes more sense to ask and measure "how good is this thing at planning, and in what contexts?" than "how much of an optimizer is it?". And the word 'intelligence' has been empirically shown to be very confusing so substituting something more specific when possible would be good.) This capability is mediated by the model's **internal preferences** (or utility function) **over the relevant ontology**. GPT has significant capacity to plan a diamond heist and strong preference towards coherence (state space = words) but is indifferent e.g. as to whether the reader executes the plan (state space = real world outcomes). 
We often gesture at internal preferences indirectly and prefer more objective terminology. It is difficult to measure and specify, but internal preferences are the fundamental determinant[[2]](#fn291mrhy2dsf) of a model's behavior as its abilities become very strong. There is an argument that goes "speaking of the utility function of this 'agent' makes no sense because it's just trying to classify pictures. Utility functions must be irrelevant." Sometimes models are absolutely indifferent over anything outside their own output space. In some ways this is quite a good property to have. The notion hasn't fallen apart; it's actually quite a clear case. We often use **observed preferences** as a proxy for this. What kind of preferences you estimate from behavior is determined by your **preference estimation method**. It is difficult to measure the planning capacity of e.g. GLM-130B because we have trouble designing tests that elicit the full capabilities of the system. We could call a test which shows the full ability of the system a **generous test**. The test that allows the model to accomplish a given task most effectively could be called the most generous test. A test that causes the model to perform below its real abilities might be a **prejudiced test**. 
* What is the most generous test of a language model's coding capability? * What preference estimation methods tend to be the most accurate on current RL systems? * Will current interpretability approaches allow us to accurately measure internal preferences? * Should we expect a localized preference control panel somewhere inside large neural nets? * How can we design a training process that causes models to form preferences over an ontology we understand? * Are humans' preferences verbose & complex over any single state space, and only elegant & natural if you take them as a sum of utility functions on multiple state spaces, or something? As for mesa-optimizers, one should not be surprised when their Atari agent is optimizing the Atari game. It was trained to do exactly this. (We are instead interested in exactly how general that agent's planning capability is.) In contrast, a **spontaneous optimization daemon** is an agent that forms spontaneously inside any **dynamic system** proceeding through time. Reproduction and sexual selection and predator-prey dynamics are not hard-coded into the universe; they all spontaneously occurred inside a vibrating box of sand. (Kind of stuns me that the universe is not really an optimizer at all, or any more than a cellular automaton is, yet produced humans.) But it's worth naming the intermediate process between physics and humans. I'd say natural selection is a **selection process**. Seeing humans among the atoms is quite weird but the daemon looks a bit less spontaneous once the more-sophisticated selection process comes into view. (More in appendix.) ~~Research question~~ random thought: Has anyone tried running a noisy Game of Life on a supercomputer for a month and watched what happened? Loosely relating to outer alignment ----------------------------------- The problem of taking what you want the model to do and turning it into code is already known as (reward or loss or value or task or objective) **specification**. 
Whether you are giving the model examples to imitate, a loss function to minimize, or something else, the intention and risks are largely the same. We know the goal of maintaining desired behavior under distribution shift as **robustness**. Of course an RL agent will not typically be estimating true reward perfectly; this is nothing new. This is, even more mundanely, known as test loss. Perhaps call it **production loss**, since everyone is always looking at their test loss and it's not much of a test. The distinction between **capability robustness** and **objective robustness** is a good one, and clear as is. We know a failure of objective robustness as goal misgeneralization and capability robustness  Things you can say with these words: * Low production loss for a bug-detection system would be quite impressive * Reward specification might be too hard and we should give our nets a model-based task specification. (Obvious) research questions: * What kinds of specification methods tend to lead to strong discrepancies between internal & observed preferences? Is IRL actually bad in this regard? * What have been some of the most surprising goal misgeneralizations and why did they happen? Conclusion ---------- Are these the true names — are we calling the problems what they are? Are the research questions above pointing at real things? Does it lead to direct investigations of the territory? Is it actually any easier to assess the general planning capacity of a model than to answer whether it's a mesa-optimizer? My real motivation is just to understand what people are even talking about when they write about inner alignment and mesa-optimizers. Very curious if I've hit the mark. Happy to receive feedback about better/clearer/older terms for these concepts, alternative breakdowns, or important gaps. Appendix: Three kinds of optimizers ----------------------------------- I can think of some ways to accomplish objectives without having any planning capacity per se. 
* You can play so many games of go that you have a good estimation of the value of every board state. * You might somehow otherwise always know the right move without any forethought. (e.g. thermometers) * In cases where losses don't cost but wins do gain (e.g. technology discovery), you can just attempt tons of things. The important distinction from planning is that you actually attempt things in the real world, not in your imagination. (Natural selection is basically this.) * A randomly shifting collection of atoms will sometimes eventually have stable replicators in it. This seems importantly different from the above case but I'm not sure what to make of it... One might call these memorizers, direct implementations, an embedded selection process, and dynamic systems (not an optimizer).[[3]](#fnv9s8w4vlxq) So is the term 'planning capacity' too restrictive? Are we excluding important cases from our field of view? Well, an ideal memorizer does effectively behave as an ideal planner. Already knowing what happens next is not a lot different from being able to imagine it. So one can still speak of GPT-3 as having instantiated a text planning system, even if it is implemented as compressed memorization under the hood. It makes sense to ask how well or how generally it makes plans, or how far ahead it looks. And planning capacity *feels* more natural and important to me than memorization.  Selection processes are certainly quite different from either of these. I'd say that when something does that we'll probably know it / see it / have specified it. It's such a unique case that control & analysis methods from planners and memorizers will transfer very poorly. So I think the term planning capacity is a bit narrow but feels about right for where we are and the questions we're asking right now. 1. **[^](#fnreff2tp8ins9k4)**I say model instead of agent because not all models act much like agents; for many purposes we don't need agents. 
If you refer to models as learned algorithms then nobody will know what you're talking about. 2. **[^](#fnref291mrhy2dsf)**Of course excluding inputs and the operating environment. 3. **[^](#fnrefv9s8w4vlxq)**Apologies if there are existing terms for these
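The appendix's claim that an ideal memorizer behaves like an ideal planner can be sketched in a toy game (my own construction; subtraction Nim stands in for go):

```python
from functools import lru_cache

# Subtraction Nim: take 1-3 stones; whoever takes the last stone wins.
# A "planner" searches the game tree; a "memorizer" is a flat lookup
# table of moves with no search. Behaviorally they are identical.

@lru_cache(maxsize=None)
def planner_value(n: int) -> int:
    """+1 if the player to move wins with optimal play, else -1."""
    if n == 0:
        return -1  # the previous player took the last stone and won
    return max(-planner_value(n - k) for k in (1, 2, 3) if k <= n)

def planner_move(n: int) -> int:
    """Search: pick the take that leaves the opponent the worst position."""
    return max((k for k in (1, 2, 3) if k <= n),
               key=lambda k: -planner_value(n - k))

# "Memorizer": the planner's policy distilled into a table.
MEMO_TABLE = {n: planner_move(n) for n in range(1, 31)}

def memorizer_move(n: int) -> int:
    return MEMO_TABLE[n]  # no forethought, just recall

# The two systems are behaviorally indistinguishable:
assert all(planner_move(n) == memorizer_move(n) for n in range(1, 31))
print(memorizer_move(7))  # 3 — leaves the opponent a multiple of 4
```

That the table here is built *by* the planner is the point: compressed memorization of a planner's outputs is still, behaviorally, planning capacity, which is why the term can apply to a system like GPT regardless of implementation.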
Anti-akrasia remote monitoring experiment So we (Richard Hollerith and me) tried out my anti-akrasia idea. Actually we've been doing it for more than a week now. Turns out it works just like I thought it would: when you know an actual person is checking your screen at random intervals, and they will IM you whenever you start procrastinating online, and they expect the same from you... you become ashamed of procrastinating online. You get several "clean" hours every day, where you either do work or stay away from the computer - no willpower required. Magic. Proofpic time! Once we both left our VNC windows open for a while, which resulted in this: The idea isn't new. I first got it this winter, Alicorn and AdeleneDawner are apparently doing similar things unilaterally, and even Eliezer has been using a watcher while writing his book. I don't know anyone who tried the Orwellian mutual screen capture thing before, but I won't be surprised if a lot of people are already quietly practicing it. Being watched for the first time didn't make me feel as vulnerable as you'd think, because, realistically, what can the other person glean from my monitor while I work? Random screenfuls of source code? Headings of emails? We don't realize how normal the little details of our lives would look to strangers. In the words of McSweeney's, "chances are, people will understand. Most people are pretty understanding." The experiment did feel weird at first, but it was the expected kind of weird - the feeling you should get when you're genuinely trying something new for the first time, rather than just rehashing. It feels normal now. In fact, I'm already ever-so-slightly worried about becoming dependent on remote monitoring for getting work done. You decide whether that's a good sign. Passing the microphone to Richard now: > I had to set a timer (for between 5 and 11 minutes depending on circumstances) to remind me to check Vladimir's screen (resetting the timer manually after every check). 
 If I did not, I either spent too m
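Richard's hand-reset 5-to-11-minute timer could be scripted; here is a minimal sketch (hypothetical, the actual setup described above used a manual timer):

```python
import random
import time

def next_interval(low_min: float = 5.0, high_min: float = 11.0) -> float:
    """Random gap, in seconds, until the next screen check."""
    return random.uniform(low_min * 60.0, high_min * 60.0)

def run_checks(checks: int = 3, sleep=time.sleep) -> None:
    """Wait a random 5-11 minutes, prompt a check, repeat.
    `sleep` is injectable so the loop can be tested without waiting."""
    for i in range(checks):
        sleep(next_interval())
        print(f"Check #{i + 1}: glance at the shared VNC window now.")
```

The random interval is what makes the scheme work: the monitored person cannot predict when the next glance comes, so every online minute carries the same chance of being seen.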
LessWrong Hamburg Third Meetup Notes: Small Steps Forward Review of our third Meetup: LessWrong Hamburg - Structure Summary To make it short: We didn't follow the nice agenda we planned. We did the procrastination topic but diverged a lot.  Course of events In the long open beginning (expected) we talked a lot, played some MindTrap and had lunch together. Then to get started I began the presentation of the main topic of this meetup: procrastination. This was basically a summary of  * http://en.wikipedia.org/wiki/Procrastination * http://lesswrong.com/lw/9wr/my_algorithm_for_beating_procrastination/ * http://bps-research-digest.blogspot.de/2010/05/cure-for-procrastination-forgive.html * http://waitbutwhy.com/2013/10/why-procrastinators-procrastinate.html This led to lots of satellite discussions which partly diverged but mostly were centered on examples of procrastination (though afterwards some felt that this got out of hand with too many personal details; this was controversial). This part was all in all very long but also led to quite some understanding of the problems of and strategies against procrastination.  In the previous meetups there was an interest in the topics of and the objectives behind lesswrong. To get an authentic handle on the former I posted a topic poll in the Polling Thread. This I presented briefly (see appendix). Interesting points we arrived at: The image of lukeprog's procrastination algorithm led to a discussion of what we called the mathematical fallacy/bias: Just giving a mathematical formula naming properties of interest leads to the false impression of scientific rigor and presents a false image of correctness and precision that just isn't there. This is a method sometimes seen in pseudo-science publications to give the impression of science. It is also used in economic sciences to approximate tendencies numerically. 
The general pattern of the mathematical fallacy is that modelling complex human behavior (like procrastination) in a simple formula is a special case of over-simplific
9dcfb663-78c2-4683-9e18-e41af7893d42
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Singular learning theory and bridging from ML to brain emulations This is a fairly "chat"-style dialogue that I (kave) had with Garrett about singular learning theory (SLT) and his [ambitious plans](https://www.lesswrong.com/posts/d4qbjx35SBMGyFNWZ/my-hopes-for-alignment-singular-learning-theory-and-whole) for solving ambitious value learning by building off of it. A colleague found this gave them better trailheads for SLT than current expositions (though they're still confused) and I got a much clearer sense of the scope of Garrett's hopes from this conversation than from his post alone. What is Singular Learning Theory? --------------------------------- **kave**Hey Garrett! Here are a couple of my initial thoughts regarding your hopes for singular learning theory (SLT)! As far as SLT goes, I feel like I don't really understand what's exciting for folks about "phase transitions". Or like, I don't understand how to connect "phase transitions are rapid changes in the density of states" to "phase transitions are grokking" and stuff like that. I might be confused about that being the angle that folks are going for. I'm also confused about how to think about SLT for both brains and ML systems. Like, there's some notion of a dynamical system that SLT applies to that applies to both those things (you think/hope). What is that notion? I'm also excited to hear more about your vision for a programme that spills out of SLT that becomes something like a theory of learning agents that "touches grass" all the way along its development (to the extent that's true). Anyway, that's a little babble about where I'm at. Open to going anywhere, but thought I'd share some state. **Garrett Baker**Yeah, okay, so, I guess the first thing is phase transitions. I don't quite understand your confusion. Like, let's take it as a given that gradient descent can be adequately described as sampling out of a Bayesian posterior. 
Then, rapid changes of density of states from less generalizing solutions to more generalizing solutions directly correspond to grokking? And we get grokking when we have a rapid transition from a less generalizing solution to a more generalizing solution. **kave**OK but why does rapid change of density of states correspond to more generalising? **kave**My best current understanding: a rapid change in the density of states corresponds to something like more parameters becoming irrelevant (or fewer? Might have that backwards) and so you become effectively a smaller model / more parsimonious / less epicyclic **kave**[Feel free to say if I seem horribly confused / my intuitions seem like I misunderstood what's going on, etc] **Garrett Baker**Ok, so it's not just rapid changes in the density of states that lead from more to less generalization I think? Let me look for the relevant equation **Garrett Baker**Sorry, I got wrapped into equation land, and forgot the broader context. I'll explain things intuitively instead of relying on equations, and then I'll bring in equations if it seems necessary to explain.  
So a rapid change in density of states corresponds to more generalizing because at a given n
MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), 
our learning system will be dominated by the phase with the lowest "free energy", defined by

F_n = n L_n(w_0) + λ log n + O(log log n)

where w_0 is the optimal
weight setting for that phase, and λ is the real log canonical threshold (RLCT) for that phase, n is the number of datapoints we've trained on, and L_n is the training loss at the given weight setting after those n datapoints. Which means, basically, that as we switch to a different phase, we will always (unless lower-order terms end up doing wonky stuff, but take this as a first-order approximation) see a decrease in L_n(w_0) and an increase in λ.  I'll stop here for questions **kave**Why does λ go up? I think L_n goes down because that's basically what your stochastic gradient descent (SGD) is going to do as n increases. (I don't know what the RLCT is apart from handwavey characterisations) **Garrett Baker**λ goes up because we always choose the lowest F_n. Approximately speaking, if there existed a phase with a lower λ than the current one, and the same or lower L_n(w_0), then we would have already chosen that phase. **kave**Let me check: when you say "as we switch to a different phase", do you mean "if we were to switch away from the optimum" or "as we move through the phases as we do learning"? **Garrett Baker**So the cool thing about the above free energy equation is, in my understanding, that it works regardless of how you partition your weightspace. This is because Bayesian updating is self-similar, in the sense that updates using coarse-grained model classes use the same equations as updates using fine-grained model classes.  **Garrett Baker**The RLCT is a concept in algebraic geometry; Jesse's [recent post](https://www.lesswrong.com/posts/6g8cAftfQufLmFDYT/you-re-measuring-model-complexity-wrong) is a pretty good intuitive discussion. As far as non-handwavy discussions go, there are several equivalent ways to view it. The picture to have in your head is that it's an asymptotic measure of how rapidly the volume of the weightspace within a phase shrinks to zero as you decrease the loss threshold.  **kave**Also, what is a "phase"?
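To make the free-energy tradeoff concrete, here is a minimal numerical sketch. The losses and RLCTs below are made-up illustrative values, not measurements: a phase with lower training loss but higher λ only wins once n is large enough.

```python
import math

def free_energy(n, train_loss, rlct):
    """Leading-order free energy of a phase: F_n ≈ n*L_n(w0) + λ*log(n)."""
    return n * train_loss + rlct * math.log(n)

# Two hypothetical phases (numbers invented for illustration): phase B fits
# better (lower loss) but is more complex (higher RLCT) than phase A.
phase_a = {"loss": 0.50, "rlct": 2.0}
phase_b = {"loss": 0.45, "rlct": 8.0}

def preferred_phase(n):
    """The posterior concentrates on whichever phase has lower free energy."""
    f_a = free_energy(n, phase_a["loss"], phase_a["rlct"])
    f_b = free_energy(n, phase_b["loss"], phase_b["rlct"])
    return "A" if f_a < f_b else "B"

print(preferred_phase(10))      # prints A: the simpler phase wins at small n
print(preferred_phase(10_000))  # prints B: the better-fitting phase wins later
```

The crossover happens at the critical sample size where the n·ΔL gain from the better fit overtakes the Δλ·log n complexity cost.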
**Garrett Baker**I believe for everything I've said so far, the results are not sensitive to your definition of phase. Phases are used in practice in two ways. To borrow a recent explanation by [Daniel Murfet](https://www.lesswrong.com/posts/nN7bHuHZYaWv9RDJL/announcing-timaeus?commentId=RRT6ZcGvRrt4bRiPZ): > * *Bayesian phase transition in number of samples:* as discussed in the post you link to in [Liam's sequence](https://www.lesswrong.com/s/czrXjvCLsqGepybHC/p/aKBAYN5LpaQMrPqMj#What_is_a_phase_transition_), where the concentration of the Bayesian posterior shifts suddenly from one region of parameter space to another, as the number of samples increases past some critical sample size n. There are also Bayesian phase transitions with respect to hyperparameters (such as variations in the true distribution) but those are not what we're talking about here. > * *Dynamical phase transitions*: the "backwards S-shaped loss curve". I don't believe there is an agreed-upon formal definition of what people mean by this kind of phase transition in the deep learning literature, but what we mean by it is that the SGD trajectory is for some time strongly influenced by (e.g. in the neighbourhood of) a critical point w*_α and then strongly influenced by another critical point w*_β. In the clearest case there are two plateaus, the one with higher loss corresponding to the label α and the one with the lower loss corresponding to β. In larger systems there may not be a clear plateau (e.g. in the case of induction heads that you mention) but it may still be reasonable to think of the trajectory as dominated by the critical points. > **kave**I'm gonna gloss "phase" as something like "obvious difference in the model from 20,000 feet", unless that seems terrible **Garrett Baker**That seems good.
There are more technicalities, but honestly it's how I generally think of it intuitively **kave**OK, so I think I still don't know the answer to > Let me check: when you say "as we switch to a different phase", do you mean "if we were to switch away from the optimum" or "as we move through the phases as we do learning"? > > **Garrett Baker**So as singular learning theory stands, we don't have a good way of talking about the training process, only the addition of data. So I'm cashing out your "as we do learning" to mean "as we add data".  **kave**That sounds good to me **kave**I'm trying to clarify whether you're saying "imagine we're in the lowest free energy state for a given n. Here's how things look if you switch away to a higher free energy state" or "imagine you're in the lowest free energy state at some n. Here's how things look if you increase n" **Garrett Baker**Oh yeah, the second one totally is what I'm talking about **Garrett Baker**There are definitely weight states which get you lower free energy for higher loss and lower (or possibly higher) RLCT than the one you're currently at **Garrett Baker**To go back to the original question: > OK but why does rapid change of density of states correspond to more generalising? > > The reason for this is... it's false! We get double descent sometimes, and SLT predicts when this will happen! **kave**!!
**Garrett Baker**This happens when, if Δλ and ΔL are the differences in λ and loss between your two phases, and n_cr is the number of datapoints you need to switch between the two phases (the critical point), then we get double descent when n_cr < Δλ/ΔL **Garrett Baker**The sketch looks like this (r = Δλ/ΔL, and because a physicist made these notes, E_1 and E_2 are the losses for phase 1 and phase 2 respectively): ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/PikpeRucdsXeEvpy9/brswqliwkx2wufmuoe04) **Garrett Baker**where the two lines are the generalization loss **Garrett Baker**I don't really expect you to look at this and go "ohh, that makes sense", since I would have had to previously explain generalization error to you, and show you why Δλ/ΔL is relevant, but you should be able to derive it from me saying that E[B_g] = L + λ/n is the expected generalization error. **kave** > I don't really expect you to look at this and go "ohh, that makes sense" > > Oh good, because I looked at this and made a sort of zombie noise **Garrett Baker**lol, sorry about that.  **kave**But I think I'm seeing that the model jumps phases at n_cr, and sometimes it does so when it's getting a "worse" (higher or lower?) λ [edit: in the new phase], I guess because the tradeoff for better training loss is worth it **Garrett Baker**yes, n_cr is the crossover point where the free energy of the new phase becomes more attractive, and r is the crossover point where the generalization error of the new phase becomes more attractive. **kave**And lower λ leads to lower generalisation loss, looking at your E[B_g] = L + λ/n **Garrett Baker**yup! Intuitively, thinking of λ as the complexity of your model class lets this make intuitive sense. **kave**Right! And similarly "it's an asymptotic measure of how rapidly the volume of the weightspace within a phase shrinks to zero as you decrease the loss threshold" is like a "limited finetuning" rule(?) **Garrett Baker**I haven't seen it that way before.
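The double-descent condition can be checked with a little arithmetic. Using E[B_g] = L + λ/n, the jump in generalization error at the transition is Δλ/n_cr − ΔL, which is positive (error rises before falling again) exactly when n_cr < Δλ/ΔL. The phase gaps below are hypothetical numbers chosen only for illustration:

```python
# Hypothetical phase gap: switching phases gains ΔL = 0.05 in training loss
# but costs Δλ = 6.0 in RLCT.
delta_loss = 0.05
delta_rlct = 6.0
r = delta_rlct / delta_loss  # = 120: where the new phase's gen. error wins

def gen_error_jump(n_cr):
    """Change in expected generalization error E[B_g] = L + λ/n when the
    posterior switches phase at sample size n_cr. Positive means the error
    goes up before coming back down (double descent); negative means the
    error simply drops at the transition."""
    return delta_rlct / n_cr - delta_loss

print(gen_error_jump(60) > 0)   # True: n_cr = 60 < r, so double descent
print(gen_error_jump(600) > 0)  # False: n_cr = 600 > r, no double descent
```

This is just the leading-order bookkeeping behind the sketch above, not a reproduction of the original notes.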
Can you elaborate? **kave**Hm. Not sure it's worth going into, but as weightspace drops off as you demand more closeness to optimal loss, λ is telling you how precisely you need to nail your parameters to hit that closeness. But for a model class that is small relative to the parameter space of your neural network (NN) or whatever, then you don't drop off fast (or similarly for a simple / robust model) **Garrett Baker**Yeah, that sounds right. **kave**Cool! **kave**So SLT is like "models have a preference for lower free energy because Math, and that tells you more than just assuming they minimise loss (in particular because maybe minimising loss is a non-deterministic rule for picking a set of parameters)"? **kave**(with some of the stuff we've covered above being particular things it tells you) **Garrett Baker**Hm... I don't think that's the main upshot, but it is an upshot. I'd say the main upshot is that "the distribution of models you may end up with after training is concentrated on the singularities (oh and also here's a really cool way of pumping out nontrivial statements from this fact using algebraic geometry and singularity theory)" **kave**Where are the singularities in this picture? Are they ex cathedra from some math I haven't read, or are they related to λ? **Garrett Baker**They are related to λ, but I'm currently in the weeds on learning the actual algebraic geometry and singularity theory behind that part, so I couldn't tell you why. **kave**Copy **Garrett Baker**So this insight is why I'm optimistic about singular learning theory going further than just the learning systems it's studied so far. Like reinforcement learning, and the learning subsystems of brains.
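To make "concentrated on the singularities" slightly less mysterious, here is a two-parameter toy model (a sketch of my own, not from the original discussion) where the parameter-to-function map is many-to-one and the Fisher information matrix is degenerate at every parameter setting:

```python
import numpy as np

# Toy "network" f(x) = w1 * w2 * x. Every (w1, w2) with the same product
# w1*w2 computes the same function, so the parameter-function map is
# many-to-one along hyperbolas in weightspace.
w1, w2 = 2.0, 3.0

# For a Gaussian regression model the Fisher information is E[∇f ∇fᵀ] over
# inputs; taking E[x²] = 1, the gradient of f wrt (w1, w2) is (w2, w1) up to
# the factor of x:
grad = np.array([w2, w1])
fisher = np.outer(grad, grad)

# The matrix is rank 1, so its determinant vanishes at *every* (w1, w2):
print(abs(np.linalg.det(fisher)) < 1e-9)  # True
```

The same degeneracy shows up in real networks wherever units can be permuted or rescaled without changing the computed function.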
**Garrett Baker**Free energy may be applicable to both, but the resolution of singularities used to find and characterize λ, and the existence of singularities causing the two processes to behave in reducible ways (because they are both almost certainly singular), mean I have hope the same techniques can be adapted to those cases.  **kave**This may just be a matter of the pedagogy I needed, but from my perspective the reason to believe that there are singularities is that apparently free energy is minimised, and that looks like it should lead to low λ, and low λ is related (or so a trustworthy guy on the internet tells me) to singularities **kave**And not so much reason to expect singularities without low free energy **kave**Though I'm also fine with hearing "there are other reasons to expect singularities but we can't / shouldn't get into them now" **Garrett Baker**Lol, oh yeah, when I say "almost certainly" I mean the default state for messy hierarchical models with a ton of parameters is that they do not have one-to-one parameter-function maps, or their Fisher information matrix is somewhere singular.  **kave**Oh right! I now remember that singularity is also to do with the non-one-to-oneness of the parameter-function map. **kave**Faith restored **Garrett Baker**lol **kave**Are brains singular, do you think? **Garrett Baker**For some reasonable definition of what the parameters are in brains, brains are singular for the same reasons that neural networks are singular: they're almost certainly not one-to-one, and their Fisher information matrices are singular.  The technicalities here are possibly more messy, though it depends on exactly how nondiscrete the learning subsystems are **kave**Somehow it's less intuitive to me for brains, but plausibly that's just because my system 1 model of brain neurons is more essentialist and less functionalist than for artificial neural networks (ANNs), and would go away on reflection. **Garrett Baker**Essentialist?
**kave**Like "this is the neuron that represents Garrett Baker" and that inheres in the neuron rather than its connections **kave**To be clear, this is meant to be an observational fact about my intuitive model, not a claim about how brains work **Garrett Baker**Ah, ok. This does feel cruxy, so it seems possibly worth breaking down why you think what you think to see if I maybe want to start thinking what you think. **Garrett Baker**Or we can break down why *I* think what I think.  **kave**Oh, if my guess about why I find the many-to-one-ness less intuitive is correct, my guess is strongly that my intuitions are wrong! **kave**I vote we do the orienting to the conversation thing you were trying to do a few messages ago **Garrett Baker**cool **kave**Oh wait! Actually maybe we should break now?

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/PikpeRucdsXeEvpy9/fsfa1migkpy4wg46c7gh)

Garrett's hopes for grand singular learning theory-based theories
-----------------------------------------------------------------

**Garrett Baker**Things you wanted to get to that we haven't yet:

* How brain science and alignment could interplay
* Whether dynamic brain stuff is sensible

Thing I wanted to get to:

* Checking to see if my post came off as ambitious as I wanted it to, and if not, then correcting the record.

**kave**I think I'm excited about your thing > brain + alignment > dynamic brain **Garrett Baker**Ok, we'll talk about that then.  **kave**Let me get your post open **Garrett Baker**So, Vanessa has this quote which I really like, that was summarized to me originally by Jeremy Gillen: > In online learning and reinforcement learning, the theory typically aims to derive upper and lower bounds on "regret": the difference between the expected utility received by the algorithm and the expected utility it *would* receive if the environment was known a priori.
Such an upper bound is effectively a *performance guarantee* for the given algorithm. In particular, if the reward function is assumed to be "aligned" then this performance guarantee is, to some extent, an alignment guarantee. This observation is not vacuous, since the learning protocol might be such that the true reward function is not directly available to the algorithm, as exemplified by [DIRL](https://www.lesswrong.com/posts/5bd75cc58225bf067037546b/delegative-inverse-reinforcement-learning) and [DRL](https://www.lesswrong.com/posts/5bd75cc58225bf06703754d5/delegative-reinforcement-learning-with-a-merely-sane-advisor). Thus, formally proving alignment guarantees takes the form of proving appropriate regret bounds. > > **Garrett Baker**And if I understand her correctly, the way she has tried to prove such an alignment-relevant regret bound was to invent infra-Bayesian physicalism, then construct a model of what it means to be an agent with a utility function, and have the deployed infra-Bayesian physicalist agent infer what agent created it, and act according to its implied utility function. **Garrett Baker**This seems unwise for a variety of reasons on my model. But I think lots of those reasons are fixed if you satisfy a few desiderata with your theory:

1. You have really good contact with reality throughout your theory-development process
2. The metric for success you use is mostly derived rather than instantiated from the outside. Practically, this means that instead of deciding humans will have utility functions and act like agents as defined in such-and-such a way, you are able to use your theory (or a theory) to derive the format the values inside your human are represented in; then, during your equivalent of a value-learning component, you fit the model you have to the human, or you have some process which you anticipate will result in values in a similar format, resulting in the same sorts of goals for the world

Maybe there's more?
I'm going mostly based off memory here.  Anyway, it seems like singular learning theory, if it could be worked up to produce alignment-relevant theorems, would or could have the above two benefits.  **Garrett Baker**And by alignment-relevant theorems, I imagine this looks a lot like getting a good understanding of reinforcement learning in the SLT frame, such that you can make claims about both the effects of what your reinforcement learning (RL) agents optimize for in different environments (or as a generalization from the training environment), and the main internal determinants of the optimization. And do the same once we get good models of how brains develop (which we may already have enough data to technically do, if you're a good enough mathematician? The nice thing about math is you don't need *all* the detail to be present before you can actually work on the thing).  And then once we get those things, we can move on to asking when reflection-type thoughts are likely to arise in certain models, how the brain's optimization machinery keeps its values stable during that and during ontology shifts (or, if it doesn't, whether the main determinant is the optimization of values or the meta-values), and hopefully prove some kind of internal-values convergence during some training process. **Garrett Baker**Was this mostly implied by my post? **kave**I think this is mostly surprising to me after having read your post! I think you gestured at it with "a theory for reinforcement learning", but I think that I didn't expand it to something like a "theory of (value) learning" given just what I read **kave**Maybe if I had read the learning-theoretic agenda stuff more recently **Garrett Baker**Ok good! That means I'm maybe talking about the right thing / the thing my friend said wasn't made clear was in fact the thing I just said. **kave**Shall I ask you some questions about it? I have some!
**Garrett Baker**Oh yeah, totally **kave**I understood the part where you want to be somewhat agnostic about the representation of values in humans / learning systems. But I think I failed to draw the connection to getting the learning system to fit to the human **Garrett Baker**Yeah, so as a toy example, we can imagine the values humans have are pretty dependent on the architecture of the brain: maybe humans have a lot more recurrence than transformers, and so have a baseline tendency to care a lot more about the feelings of themselves and others, while language models, if you train them enough to be generally capable agents, don't have so much recurrence, and so end up with a lot more value placed on things that look nice when they are passed as inputs to a transformer, like themselves.  When we look inside these two minds, we see the human values are represented in (in some sense) a non-equivalent format to the transformer's values, not in the sense that they are different from each other, but in the sense that whatever format the humans' goals use is more liable to model how things feel, while the transformer's is more liable to model how things look.  So if we train the transformer in this world using some learning process, it will take a lot more data for it to learn that it needs to have goals in the same format as humans, and it will make mistakes which result in correlating (say) the positive sentiment of human text, when it's inputted into it, with the rewards you gave it, rather than correlating the inferred human mental state with the rewards you give it, in order to construct its goals.  We can imagine the same problem but on the level of data. Say the humans spend a lot of time interacting with other humans, and the AIs spend a lot of time interacting with HTML webpages, so the humans have some internal mechanism within them that is more likely to think in terms of humans, and the AI in terms of webpages.  
You want the human values to be very naturally representable in your AI.  **kave**It sounds like you're thinking of a transformer directly choosing to model human goals as a kind of indirect normativity thing, i.e. wanting to do the things that its creator would approve of. I noticed this also in your move from Vanessa's quote (which didn't seem to explicitly contain an appeal to indirect normativity to me) to your understanding of her agenda. Is your thought that this is necessary? I can imagine thinking that a powerful mind needs an accurate model of the principal's values if it's to be aligned to them, but that seems like it could be derived without trying to model them as "the principal's values" (as opposed to "good stuff"). I don't know if that's a load-bearing part of your model or just something chosen to be concrete. **Garrett Baker**I don't know whether the best way to get a model to adopt my values is for it to model me and value my values, or to itself just have my values. I would suspect the latter is easier, because I don't model someone else and adopt their values except as an expression of other values of mine. **kave**So it sounds like you're concerned about the "value ontology" of the AI and the human, which might or might not be the same as the "model ontology". **Garrett Baker**Yes **kave**And I suppose there's an additional concern that the way the value ontology of the AI interacts with increasing capability is similar enough to the way that the value ontology of the human interacts with increasing capability that it stays easy for the AI to model where the human would have ended up as it reflected (rather than going off down some other contingent path). **Garrett Baker**Yeah, I am concerned about this. This is inside the reflection and ontology sections of the original post, where I'm worried about:

1. Ensuring you also have the right meta-values in your AI, if those exist
2.
Making sure the optimization machinery of your AI doesn't freak out the way humans usually do when reflection and "what should my values be" oriented thoughts come online
3. Do ontology shifts break your values here?

I'm not *so* concerned about 3, because humans don't seem to break under ontology shifts, but it is something to keep your eye on. It's possible they would break if they don't undergo enough ontology shifts before getting access to self-modification abilities. **Garrett Baker**And of course you want the AI to get your values before it's able to model you well enough to exfiltrate or something. **kave**Yeah, interesting. I am interested in the reflection thread, but also kind of interested in double-clicking on the transformer value-format thing, and maybe that makes sense to do first? I'm going to try and spell out the story in a bit more detail **Garrett Baker**What transformer value-format thing? I'll note I don't actually expect the above toy example to be true **kave**Noted, but I just meant like the underlying intuition or something **Garrett Baker**Gotcha, that sounds like a good thing to do **kave**So, say I'm a transformer. And I've become shaped in a way that's a bit more like "have goals and make plans to fulfil them". That shape has maybe been formed out of the kind of cognitive algorithms I was already running, and so has some similarity to those algorithms. For example, if I am a guy that searches for plans that score highly, that "score" might be built out of pieces like "one of my heuristics for what to do next is very active and confident". So maybe if I have some values, that format is more things like [hmm, maybe I'm supposed to have been reinforcement-learning-from-human-feedback'd (RLHF'd), so there's a non-predictive goal? Sure, let's assume that]: this clearly looks like a transcript from a world that the reward model liked. It's clear what would come next in such a transcript.
This is coherent, novel to the extent that's rewarded, but not *weird.* This looks like the completion will stably continue in this fashion. Is that an implausible-in-the-details, but roughly-right-in-the-gestalt, version of the thing you were imagining? **Garrett Baker**Maybe? It seems roughly similar, but not clearly what I was saying.  **Garrett Baker**Like, another example of what I was trying to say would be that maybe models end up doing things following the predictions of shard theory, and humans do things following the predictions of naive expected utility maximization very easily (of course, both would approximate EU maximization, but the difference is humans do it explicitly, and models do it implicitly). **kave**OK, that seems like a nice tricky case. **Garrett Baker**Yeah, so if models natively do shard theory, and humans natively do EU maximization, then the models will not be selecting over "simple" utility functions to maximize; if we start knowing how to do ambitious value learning, they will first know how to best approximate your utility function using a bunch of contextually activated heuristics. And then later on, if you undergo an ontology shift or it does, the values will react very differently, because they attach to the world in very different ways. Maybe you find out souls don't exist, so your only hope of ever achieving your values is [breaking out of a theorized simulation](https://arbital.greaterwrong.com/p/ontology_identification?l=5c#Anticipated_failure_of_AIXI_atomic_in_our_own_universe__trying_to_maximize_diamond_outside_the_simulation) inside a world that does have ontologically basic souls, and it finds out souls don't exist, and instead of helping you on your quest, it decides you're diverting resources away from its implicit goal of making the humans in front of it smile, or something.
**Garrett Baker:** Which is why I want equivalence on the level of internal value representations that connect to RL generalization behavior, rather than just RL generalization behavior.

**kave:** I mean, it seems like what you want is a bunch of robustness around the RL generalization behaviour, and it seems like *at least value representation equivalence* should get you a bunch of that, but there might be other ways.

**Garrett Baker:** This is true.

**kave:** I am excited to hear more about how you think about "meta-values".

**kave:** It seems like they play a role in your conceptualisation of what it is to be well-behaved while doing reflection.

**kave:** I haven't thought about them much.

**Garrett Baker:** Yeah, so as you're thinking about your values, you need to have some reason to change your values if you try to change your values, or to change your goals if you try to change your goals. And when you're having some reason to change your values, you presumably use other values. And if you have two systems with different meta-values, but the same object-level values, assuming that difference makes some ontologically basic sense, you're going to have different reflection processes, and at the end of thinking a lot about what they want, they're going to have different values, rather than the same values. Which is a problem.

**kave:** I guess I feel fuzzier on whether meta-values are at play for me in value change, versus just like I have a certain policy around it.

**kave:** I certainly have (weak, incomplete) meta-values, to be clear!
But I don't know if they govern my value change that much.

**kave:** I guess they (hopefully) would if my values were going to stray too far.

**Garrett Baker:** By policy do you mean "I have consciously decided on this policy for myself" or "there's some process in my brain that I can't control which changes my values"?

**kave:** The latter (well, don't know about "can't", but don't).

**Garrett Baker:** To the extent you endorse that policy changing your values, I'd call that a meta-value. Presumably there are situations where you'd rather that policy not change your values, like if you fell in love with a Nazi, and that turned you into a Nazi. If you knew that would happen, you'd likely avoid falling in love with Nazis. But if you otherwise didn't have a problem with falling in love with Nazis, and you could modify your policy such that falling in love with them didn't give you their values, you would do that.

**kave:** Well, I don't know! I might prefer to not fall in love with Nazis and keep love-influenced value change. Not that it affects your point.

**kave:** OK, I'm willing to accept that for some notion of meta-values, they're at play for me.

**Garrett Baker:** Cool. The fact that most value change in humans isn't necessarily by choice I think makes meta-values pretty important. If it was all by choice, then we wouldn't have to worry about the AI corrupting our values by (for instance) having us fall in love with it.

**kave:** Right! Let me quickly return to where meta-values appeared in your initial post-break writeup and think for a sec. So, how much do we have to ask of these meta-values for them to be a safeguard for us? One possible argument is that, as long as the AI has the same meta-values we do, it's fine however its values change, as that was permissible by our meta-values, even if our meta-values are quite incomplete.
It's not obvious we can get away with the AI having *additional* meta-values, because if it's making tradeoffs between its values, that could cause it to violate one of *our* meta-values way too much. And obviously missing some of our meta-values could go quite badly by our lights.

**Garrett Baker:** Yeah, two notes:

1. I mostly expect meta-values and object-level values are ontologically the same. Like, it doesn't feel like I use all that different stuff to decide what my values should be than to decide on other stuff I like or want in the world.
2. This is a reason I'm more optimistic about actually having the same representations and values in our AIs as are in ethical humans, compared to just proving very general robustness bounds. Because if you show that your AI acts within epsilon of your implied utility function or something, that epsilon will compound as it makes modifications to itself (including to its values), and as much as I like singular learning theory, I don't trust it (or any theory really) to predict the negative effects of AIs self-modifying.

**kave:** And do you have any specific leads for hope on *how* to enforce similar value-formats? I guess one part is neuroscience teaching us about what our value format is.

**Garrett Baker:** There are currently various attempts in SLT to relate metrics like the RLCT to the implemented algorithms inside of networks. For instance, there are (currently informal) arguments about how modularity would decrease the RLCT, or how symmetries in the loss landscape, caused by the loss functions we use, result in functions that use lots of dot products and linearities as their primary mode of data storage having lower RLCT than others. I expect theorems to be proven here.

There are also other ways of going about this.
For example, in the recent [Toy Models of Superposition work by Zhongtian Chen, Edmund Lau, Jake Mendel, Susan Wei, and Daniel Murfet](https://arxiv.org/abs/2310.06301), they show that the k-gons discovered by Anthropic are critical points of (I believe) the free energy for varying n. Though I'm not fully confident that is indeed what they did.

**Garrett Baker:** Oh sorry, not of the free energy, of the loss.

**Garrett Baker:** And then for each of those, you can find the RLCT of the relevant critical point, and for a given n find the free-energy-minimizing configuration.

**kave:** So, to think about how to make this something that would let us control value formats: first, we would develop this theory relating loss landscapes and the RLCT to the algorithms that are found. Then we would solve the inverse problem of designing loss landscapes that both enable competitive performance and also induce the right kind of value formats. Is that what you're picturing?

**kave:** I notice I feel like there's something about the architecture that was supposed to appear alongside the phrase "loss landscapes" in my message before, but maybe the architecture appears in the RLCT, because it's about how singular the minimum-loss sets are.

**Garrett Baker:** Yeah, I think it goes architecture -> parameter-function map -> geometry -> RLCT.

**Garrett Baker:** But again, I'm still learning the *exact* math in this area.

**kave:** 👍

**Garrett Baker:** Also, for the RL case it's certainly not *known* we'll be using the RLCT in the same way in the same functions. It may have a greater or lesser effect, or just a qualitatively different effect than in supervised learning. Possibly it ends up being roughly the same, but it's still good to note that this is definitely speculative, and the RL extension actually needs to be done.
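[For reference, the free-energy/RLCT tradeoff gestured at in this stretch of the dialogue is, in Watanabe's standard asymptotic expansion (a background sketch, not a claim about what the paper above proves):]

```latex
F_n = n L_n(w_0) + \lambda \log n - (m - 1) \log \log n + O_p(1)
```

[where \(F_n\) is the Bayes free energy, \(L_n(w_0)\) the empirical loss at the optimal parameter, \(\lambda\) the RLCT, and \(m\) its multiplicity. At small effective sample size the \(\lambda \log n\) term matters relatively more, favoring low-RLCT configurations; as \(n\) grows, the \(n L_n\) term dominates and lower-loss configurations win.]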
**kave:** Yeah, good note.

**kave:** But your picture is roughly something like an inverse search on training setups, given a theory that's powerful enough to predict details of learned algorithms, and some way of knowing the human algorithm well enough to do some matching between them (and perhaps some theory about that matching)?

**Garrett Baker:** Yes.

**kave:** That seems ambitious! Won't it be very hard to understand the algorithms our models learn even if we can predict lots about them ahead of time?

**Garrett Baker:** Possibly? I will note that that is the aspect of the plan which I'm least worried about, mainly because it's basically the mission statement of developmental interpretability.

**kave:** Where "worried about" means something like "trying to think through how it goes well" rather than "think might fail"?

**Garrett Baker:** Where "worried about" means: looking at the relative difficulty of each of them, the question of predicting details of learned algorithms seems the one I most expect to be solved.

**Garrett Baker:** Like, the last one requires advances in whole brain emulation, and the first one is basically an inverse of the second one, and the second one seems more natural, so I expect its inverse to be less natural and therefore harder.

**kave:** Where one = inverse search, two = predict details of learned algorithms, three = match with human algos?

**Garrett Baker:** Yup.

**kave:** So, I'm trying to think about your plan as a whole right now. And I can see how some of the parts fit together. But I think one thing that's missing is exactly where your hope comes from in different parts of it. By way of contrast, I think I understand why you're excited about SLT. It's something like "this is a theory that is good in the standard way I might want to evaluate mathy theories of learning, such as it being aesthetically nice, having tie-ins to other strong areas of math, having people with a record of good and insightful work, like Watanabe.
And also, it is predicting things *already* and those predictions are getting checked *already*! Like, look at the modular addition post! That really seems like it worked out nicely."

**kave:** And for some parts I think the answer is like "I think this is roughly how it *has* to go, if it's going to work" (like, I think this is what you think about value learning, though not confident).

**kave:** And for other parts, like neurotech, I'm not sure if they were picked because you were like "here are some promising parts, this is the best or most hopeful way I can think to combine them" or more like "and this thing seems pretty promising too, and if that goes well, I now have enough pieces to summon Exodia".

**Garrett Baker:** For SLT, that sounds right, and also it's making about 20-40x faster progress than I would have anticipated. What I described is basically what I think about value learning, and has been for about the past year, far before SLT or thinking too deeply about whole brain emulation.

**Garrett Baker:** Would be nice to be able to link to specific comments here, but here's where I started saying this:

> Yeah, so as a toy example, we can imagine the values humans have are pretty dependent on the architecture of the brain; maybe humans have a lot more recurrence than transformers, and so have a baseline tendency to care a lot more about the feelings of themselves and others, while language models, if you train them enough to be generally capable agents, don't have so much recurrence and so end up with a lot more value placed on things that look nice when they are passed as inputs to a transformer like themself.
**kave:** Copy.

**Garrett Baker:** So most of my hope comes from "man, lots of this basically looks like technical problems which you can easily check if you got right or not", and then the rest (like reflection or ontologies or meta-values, or making sure you got the right equivalence in your value formats) seems *a lot* easier than the generic Arbital versions after you've solved those technical problems.

**Garrett Baker:** (Where I use "technical" to mean you can pretty easily check/prove if you got things right.)

**Garrett Baker:** And I place nontrivial probability on many of the nontechnical things just not being major problems (i.e. meta-values being basically the same as object-level values, and those dominating during ontology shifts).

**kave:** One nice thing about such technical problems is that they're often either scalable or you don't have to get them right on the first try.

**Garrett Baker:** Indeed.

**kave:** So, why are the Arbital problems easier once you have value equivalence? Is it because you can do some kind of more 'syntactic' check and not find some complex way of telling when two value systems are the same behaviourally or elsewise?

**kave:** Or I guess, solving in the natural ontology of the thing you create.

**Garrett Baker:** Solving in the natural ontology of the thing you create sounds right. So for example, during ontology shifts, we want our AI to react similarly to a human when it learns that souls don't exist. Either this is a natural result of the values we have, like we have some meta-values which tell us what to do with the soul-caring values in such circumstances, or we have some optimizing machinery inside us that reinterprets souls depending on the currently best ontology, or something else. And then we can—

**kave:** —Ah! I think I have a new way of seeing your plan now, which is something like: do human(-value) uploading, but do as much as you can to bridge *down* from ML systems as you do to bridge *up* from the brain.

**Garrett Baker:** Yes!
That's how I think of it in my head.

**kave:** That makes sense! It's interesting that it's quite different from plans like [this one](https://manifund.org/projects/activation-vector-steering-with-bci) that try to get the brain and powerful models more closely connected (I seem to recall another that thought about using transformers to predict fMRI data). The things I link here feel like they are "shovel ready" but lack guarantees about the thing you're actually getting that performs well at the prediction tasks or whatever. They're more like "let's try these tricks and that might work", and your plan is more like "here are some fields we could solve to differing degrees and still have a theory of how to make powerful machine, but human-like, minds".

**Garrett Baker:** I don't find it that interesting, because currently thinking SLT is where it's at is a very unpopular opinion, and without the results I saw, and predictions I made, it's reasonable for people to be pessimistic about progress in deep learning (DL) theory.

**kave:** Sure! I guess it's just quite a different vector than I normally think of when bridging DL and brains.

**Garrett Baker:** Reasonable. Yeah, so to finish my sentence from earlier: we can copy the relevant parts of the human brain which do the things our analysis of our models said they would do wrong, either empirically (informed by theory of course), or purely theoretically if we just need a little bit of inspiration for what the relevant formats need to look like.

**kave:** Interesting! Yeah, I think your comments here are related to earlier when you said "[...] do the same once we get good models of how brains develop (which we may already have enough data to technically do if you're a good enough mathematician?) The nice thing about math is you don't need *all* the detail to be present before you can actually work on the thing".
Like, maybe we need lots more neuroscience, or maybe we have enough to pin down relevant brain algorithms with enough theory.

**kave:** That seems quite tricky though. It seems like the boost to brain-theory would likely have to come from SLT in this plan, as I don't see a similarly promising framework in neuro.

**Garrett Baker:** I agree, which is why I tried to emphasize in the post and earlier in the discussion why I think SLT is relevant for describing the brain as well as ML models.

**Garrett Baker:** We have also been pretty lost when it comes to theories about how the brain works, and are even further behind neural network interpretability in figuring out brain algorithms.

**kave:** When I try and think about the brain under SLT, I'm like: sure, sure, multiple realisability / non-identifiability ... but what is SLT *about* then if it extends to brains? Is it about dynamical systems where the dynamics are in the projection of a larger space? But all spaces are projections of *some* space where the projection gives singularities, surely? Oh, I guess the dynamics are *in* the big/redundant space, but *governed by* the position in the small space?

**Garrett Baker:** I don't fully understand your first paragraph, but the second paragraph is deeply related to lots of stuff that was explained to me in the SLT primer, so maybe you're onto something?

The way I think about it is relatively high level, with some lower-level sanity checks.
Like, SLT explains why neural networks learn so well. Currently it only explains this in the supervised regime, but neural networks also learn similarly well in the reinforcement learning regime, and it would be pretty weird if they performed well in both regimes for very different reasons. Indeed, during reinforcement learning neural networks are pretty sensitive to the geometry of the loss landscape, and it would make sense for them to stay in an area around singularities with lower RLCT earlier in training, and to progress to areas with higher RLCT but better loss later in training.

And then similarly with brains: lots of the architectures which work best in supervised and reinforcement learning are directly inspired by looking at brain architectures. It would similarly be weird if they happened to work for very different reasons (though less weird than the supervised->RL case), and as the brain gets reward events, and small changes are made to the parameter-equivalents in the brain, I similarly expect it to progress in the same RLCT-increasing, regret-decreasing manner.

**kave:** So I think your argument is that "a theory that explains the success of deep neural networks (DNNs) at supervised learning, by default explains deep reinforcement learning and the human brain"? Where that assumption could be refuted by noticing the theory really paying attention to supervised-learning-specific stuff.

**Garrett Baker:** Weakly, yes. And noting also the fact that this theory in particular seems relatively more general than the default. Like, a competitor may be the neural tangent kernel. I would not expect the brain to be explainable via that enough to rest my hopes on it.

**kave:** Because the neural tangent kernel is structured like "here is an idealisation of a neural network. From it, theorems", and SLT is structured like "here is an idealisation of a learning system. From it, theorems".
And the idealisation in the latter just looks way less specific and similar to DNN-qua-learning-system than the former is to DNN-qua-neural-net?

**Garrett Baker:** Essentially yes.

**kave:** Hm. Care to put a number on P(SLT usefully explains brain stuff | SLT usefully explains neural net stuff)?

**kave:** That probably wasn't worth LaTeXing.

**Garrett Baker:** lol, I think this is mostly dominated by whether we have good enough theory -> brain emulation -> theory feedback loops, but I'd put it at 80-95%. Given better feedback loops, that increases. Though I will also suggest you ask me in 6 months, since outside view says people excited about a pet project/theory like to use it to explain lots of stuff. I'm currently adjusting for that, but still.

**kave:** With your caveat noted, I still find the two parts of your first sentence incongruent! It seems that `theory -> brain emulation -> theory` routes through experimental neuroscience, a field about which I, a novitiate, feel despair.

**Garrett Baker:** I mean, I was interpreting a relatively broad "usefully explains" definition? Like, surely you don't mean that, given the first half of your plan works, the part about brains will work.

So if I'm just asking "given SLT gives good & novel predictions about practically relevant neural net stuff, will it give practically relevant & novel predictions about brain stuff", then, given how good neural nets are at predicting brain stuff, this seems easy to satisfy.

**kave:** I mean, maybe this is all just in the weeds, but I mean ... oh! Did you mean "this is mostly dominated by" to mean "this conditional is not as important as this other important concern" rather than "the leading term in my estimate is how well this loop works"?

**Garrett Baker:** Well, the leading term in my estimate is how well neural networks generically do at predicting neuroscience stuff, and then the second term is how well that loop looks for brain-specific insights that don't route through NNs.
**kave:** Doesn't that loop seem doomy?

**kave:** (aka 'unlikely')

**Garrett Baker:** Yes. But it's a technical rather than philosophical problem.

**kave:** But one that's 80-95% likely to work? Or maybe I should understand you as saying "look, 80-95% that it helps at all, but how much it helps is dominated by the loop".

**Garrett Baker:** Oh, OK, my 80-95% is mostly on the claim that the insights it gives to NNs lead to insights about brains, or that the theoretical development of the theory making contact with NNs is enough to get it to the point where it can make nontrivial predictions about the brain.

Certainly how much it helps is dominated by the loop. But also, making the loop better gives it a higher chance of helping.

**kave:** OK, I think I get your picture now.

**Garrett Baker:** OK, good.

**Garrett Baker:** Sorry for that confusion, I think I just wasn't meta-modelling the fact that I was using "how well do neural networks work as basic models of the brain" as the first-order term when I wrote the probability originally.

**kave:** No problem!

**kave:** Here are a couple of places my attention goes from here:

1. What is necessary to seed an SLT-touched neuroscience field? Maybe the answer is just "push forward on SLT and get to neuroscience when it happens", but interested if there's something to do earlier.
2. What are your hopes for pushing the plan forward?
3. The ol' capabilities externalities. Maybe touching on Nate's recent post that was more like "shut it all down".

**Garrett Baker:**

1. I imagine the things that'd need to happen are to know what things update the brain, what parts of the brain are analogous to parameters, whether the functions in the brain are approximable by analytic functions,
2. So the obvious next move for me is to learn SLT more in depth than I currently know. Then I plan on extending it to reinforcement learning, which seems to get relatively less attention in the field. Then at that point start thinking about the brain.
Basically, start setting up the basics of the extensions I'll be using.

3. I agree with "shut it all down". I did address capabilities externalities in my original post. The idea is that I want to develop the theory in the direction of making nontrivial statements about values. There exist capabilities externalities, but I'm not *so* worried, because as long as I mostly talk and output stuff about the values of the systems I'm trying to describe, and the theory required to say stuff about those values, I should expect to have my insights mostly pointed toward characterizing those values rather than characterizing capabilities.

**kave:** I'd be happy to chat about what 2 looks like or expand on 3. I agree you talk about it in your post, but it feels like there's a reasonable amount to say.

**Garrett Baker:** Let's do 3, since that seems more interesting to outside parties than describing my Anki methodology, or me saying "yeah, I don't know enough of how SLT is built" to talk specifically about how to extend it to RL. And I don't know enough about either the brain or SLT to talk much about how to extend it to the brain.

Capabilities externalities
--------------------------

**kave:** Sounds good! So, I take your hope to be something like "if my work focuses specifically on how values work in singular learning systems, then that will be mostly usable for ensuring inner alignment with some goal. The use of that power at a given level of capabilities is one I think is basically for the good, and I don't think it hastens capabilities". Curious if that fits with how you think of it?

**Garrett Baker:** It doesn't reduce capabilities, and it's doing stuff in that space, so in expectation it likely does hasten capabilities. So the last part is false, but I think it increases capabilities not all that much, especially if I'm just in theory-building mode while I'm doing general stuff, and then when I'm focusing on specific stuff I'm in make-this-useful-for-alignment mode.
**kave:** Yeah, so the distinction between general stuff and specific stuff is maybe promising. I'm like: focusing singular learning theory on values seems hard to do any time soon. First it seems like you need to be able to say lots about the SLT of RL. Do you agree with that, or do you think you could go straight to the SLT of values?

**Garrett Baker:** Like, for context, singular learning theory has been around for like 20 years, and maybe could be used by a super smart person to quicken capabilities, but mostly the super smart people focus on scaling (or have just lost faith in understanding as a means to progress), and the smart people afraid of capabilities who know about SLT work on making SLT useful for interpretability, which likely has its own capabilities externalities. But I generally feel like for research like this only a few "true believers" will develop it while it's still in its general phase, and you don't have to worry about RL practitioners suddenly caring at all about theory. So its effects will mostly lie in what those "true believers" decide to use the theory for.

**Garrett Baker:** I agree that we need a theory of SLT for RL before making an SLT for values induced by RL.

**kave:** Yeah, interesting. It seems definitely true that for a while you'll be fine, because no one will care. I wonder if the odds are good that no one will care until your specific stuff comes online. I guess it would be nice to know some history of science here.

**Garrett Baker:** I will point to the Wright brothers and the airplane, the Manhattan Project, and I think Thiel's [theory of startups](https://www.amazon.com/Zero-One-Notes-Startups-Future/dp/0804139296), as sources of evidence for this position. Also looking at the landscape of stuff going on in alignment, and seeing that mostly people don't build on each other's work too much, and even the most promising stuff is very general and would take a really smart person to develop into capabilities-relevant insights.
**Garrett Baker:** I note that interpretability is a different beast than super-theoretical stuff like what I'm pursuing, since it seems like it'd be obvious to a much wider range of people, who overlap *relatively* more with the people most liable to want capabilities improvements than singular learning theory does.

**kave:** That seems true! Though it also seems true that people really want to hit their prediction machines with RL hammers to make money fly out.

**Garrett Baker:** That is a thing I forgot to think about, that in the realm of deep learning, RL practice is particularly cursed.

**Garrett Baker:** I don't think it makes much difference, but still good to weigh.

**kave:** But I think as long as you don't discover things that improve the sample efficiency of RL without improving inner alignment, it's not obviously bad.

**Garrett Baker:** I note that singular learning theory has only characterized the behavior of deep learning so far; it doesn't (straightforwardly) give recommendations for improvements, as far as I know. And I'd guess that once I need abilities in that domain, they will be fairly specific to controlling values.