Dataset columns: id (string, length 36), source (15 classes), formatted_source (13 classes), text (string, 2 to 7.55M characters).
cb6e5454-4072-4ebb-9c1d-eb1a086cd839
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Two-year update on my personal AI timelines I worked on my [draft report on biological anchors for forecasting AI timelines](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines) mainly between ~May 2019 (three months after the release of GPT-2) and ~Jul 2020 (a month after the release of GPT-3), and posted it on LessWrong in Sep 2020 after an internal review process. At the time, my bottom line estimates from the bio anchors modeling exercise [were](https://docs.google.com/document/d/1cCJjzZaJ7ATbq8N2fvhmsDOUWdm7t3uSSXv6bD0E_GM/edit#heading=h.jhjg6byruuun):[[1]](#fn-LnaAQkuHYCr3b3oQ7-1) * Roughly ~15% probability of [transformative AI](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/) by 2036[[2]](#fn-LnaAQkuHYCr3b3oQ7-2) (16 years from posting the report; 14 years from now). * A median of ~2050 for transformative AI (30 years from posting, 28 years from now). These were roughly close to my all-things-considered probabilities at the time, as [other salient analytical frames on timelines](https://www.cold-takes.com/where-ai-forecasting-stands-today/) didn’t do much to push back on this view. (Though my subjective probabilities bounced around quite a lot around these values and if you’d asked me on different days and with different framings I’d have given meaningfully different numbers.) It’s been about two years since the bulk of the work on that report was completed, during which I’ve mainly been thinking about AI. In that time it feels like very short timelines have become a lot more common and salient on LessWrong and in at least some parts of the ML community. **My personal timelines have also gotten considerably shorter over this period.** I now expect something roughly like this: * ~15% probability by 2030 (a decrease of ~6 years from 2036). * ~35% probability by 2036 (a ~3x likelihood ratio[[3]](#fn-LnaAQkuHYCr3b3oQ7-3) vs 15%). + This implies that each year in the 6 year period from 2030 to 2036 has an average of over 3% probability of TAI occurring in that particular year (smaller earlier and larger later). * A median of ~2040 (a decrease of ~10 years from 2050). + This implies that each year in the 4 year period from 2036 to 2040 has an average of almost 4% probability of TAI. * ~60% probability by 2050 (a ~1.5x likelihood ratio vs 50%). As a result, my timelines have also concentrated more around a somewhat narrower band of years. Previously, my probability increased from 10% to 60%[[4]](#fn-LnaAQkuHYCr3b3oQ7-4) over the course of the ~32 years between ~2032 and ~2064; now this happens over the ~24 years between ~2026 and ~2050. I expect these numbers to be pretty volatile too, and (as I did when writing bio anchors) I find it pretty fraught and stressful to decide on how to weigh various perspectives and considerations. I wouldn’t be surprised by significant movements. In this post, I’ll discuss: * Some [updates toward shorter timelines](#Updates_that_push_toward_shorter_timelines) (I’d largely characterize these as updates made from thinking about things more and talking about them with people rather than updates from events in the world, though both play a role). * Some [updates toward longer timelines](#Updates_that_push_toward_longer_timelines) (which aren’t large enough to overcome the updates toward shorter timelines, but claw the size of the update back a bit). * Some [claims associated with short timelines that I still don’t buy](#Claims_associated_with_short_timelines_that_I_still_don_t_buy). 
* Some [sources of bias I'm not sure what to do with.](#Sources_of_bias_I_m_not_sure_what_to_do_with) * Very briefly, [what this may mean for actions](#What_does_this_mean_). This post is a catalog of fairly gradual changes to my thinking over the last two years; I'm not writing this post in response to an especially sharp change in my view -- I just thought it was a good time to take stock, particularly since a couple of people have asked me about my views recently. Updates that push toward shorter timelines ========================================== I list the main updates toward shorter timelines below roughly in order of importance; there are some updates toward longer timelines as well (discussed in the next section) which claw back some of the impact of these points. Picturing a more specific and somewhat lower bar for TAI -------------------------------------------------------- *Thanks to Carl Shulman, Paul Christiano, and others for discussion around this point.* When writing my report, I was imagining that a transformative model would likely need to be able to do almost all the tasks that remote human workers can do (especially the scientific research-related tasks). Now I’m inclined to think that just automating most of the tasks in ML research and engineering -- enough to accelerate the pace of AI progress manyfold -- is sufficient. Roughly, my previous picture (similar to what Holden describes [here](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/)) was: Automate science → Way more scientists → [Explosive feedback loop of technological progress](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#explosive-scientific-and-technological-advancement) But *if* it’s possible to automate science with AI, then automating AI development itself seems like it would make the world crazy *almost* as quickly: Automate AI R&D → Explosive feedback loop of *AI progress* specifically → Much better AIs that can now automate science (and more) → explosive feedback loop of technological progress To oversimplify, suppose I previously thought it would take the human AI development field ~10 years of work from some point *T* to figure out how to train a scientist-AI. If I learn they magically got access to AI-developer-AIs that accelerate progress in the field by 10x, I should now think that it will take the field ~1 year from point *T* to get to a scientist-AI. This feels like a lower bar for TAI than what I was previously picturing -- the most obvious reason is because automating one field should be easier than automating all fields, meaning that the model size required should be appreciably smaller. But additionally, I think AI development in particular seems like it has properties that make it easier to automate with only short horizon training (see below for some discussion). So this update reduces my estimate of both model size and effective horizon length. ### Feeling like meta-learning may be unnecessary, making short horizon training seem more plausible *Thanks to Carl Shulman, Dan Kokotajlo, and others for discussion of what short horizon training could look like.* In my report, I [acknowledged](https://docs.google.com/document/d/1k7qzzn14jgE-Gbf0CON7_Py6tQUp2QNodr_8VAoDGnY/edit#heading=h.boruz1dgt9sc) that models trained with short effective horizon lengths could probably do a lot of economically useful work, and that breaking long tasks down into smaller pieces could help a lot. 
The main candidate in my mind for a task that might require long training horizons to learn [was](https://docs.google.com/document/d/1k7qzzn14jgE-Gbf0CON7_Py6tQUp2QNodr_8VAoDGnY/edit#heading=h.2s3orj7g2t76) (and still is) “efficient learning” itself. That is, I thought that a meta-learning project attempting to train a model on many instances of the task “master some complex new skill (that would take a human a long time to learn, e.g. a hard video game or a new type of math) from scratch within the current episode and then apply it” would have a long effective horizon length, since each individual learning task (each “data point”) would take the model some time to complete. Absent clever tricks, my best guess was that this kind of meta-learning run would have an effective horizon length roughly similar to the length of time it would take for a human to learn the average skill in the distribution. I felt like having the ability to learn novel skills in a sample-efficient way would be important for a model to have a transformative impact, and was unsure about the extent to which clever tricks could make things cheaper than the naive view of “train the model on a large number of examples of trying to learn a complex task over many timesteps.” This pulled my estimate for effective horizon length upwards (to a median of multiple subjective hours). By and large, I haven’t really seen much evidence in the last two years that this kind of meta-learning -- where each object-level task being learned would take a human a long time to learn -- can be trained much more cheaply than I thought, or much evidence that ML can directly achieve human-like sample efficiencies without the need for expensive meta-learning (the footnote attempts to briefly address some possible objections here).[[5]](#fn-LnaAQkuHYCr3b3oQ7-5) Instead, as I’ve thought harder about the bar for “transformative,” I’ve come to think that it’s likely not *necessary* for the first transformative models to learn new things super efficiently. Specifically, if the main thing needed to have a transformative impact is to accelerate AI development itself: * There’s so much human-imitation data on programming and AI[[6]](#fn-LnaAQkuHYCr3b3oQ7-6) that the model can train on vastly more examples than a human sees in their lifetime, and after that training it may not really need to learn particularly complex novel skills to act as a very skilled AI engineer / researcher. * Coding is intentionally very modular, so it seems especially well-suited to break down into small short-horizon steps. * Probably as a result of the above two points, we’re already seeing much more concrete progress on coding than we are in other technical and scientific domains; AI systems seem likely to add non-trivial value to practicing programmers soon. This generally de-risks the prospect of using AI to help with AI development somewhat, compared to other applications we haven’t started to see yet. * Brute-force search seems like it could play a larger role in progress than in many other sciences (e.g. a relatively simple ML model could generate and test out thousands of different small tweaks to architectures, loss functions, optimization algorithms, etc. at small scale, choosing the one that works best empirically -- other sciences have a less clear-cut search space and longer feedback loops; a toy sketch of this kind of search follows this list). 
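As a toy illustration of the brute-force-search point above (my own sketch, not something from the report): generate many small random tweaks to a training configuration, score each with a cheap small-scale proxy evaluation, and keep whatever does best empirically. Everything here is invented for illustration; in particular `proxy_loss` is a made-up stand-in for "train a small model and measure validation loss."

```python
import random

BASE_CONFIG = {"learning_rate": 3e-4, "warmup_frac": 0.01, "weight_decay": 0.1}

def propose_tweak(config):
    """Perturb one randomly chosen hyperparameter by a small multiplicative factor."""
    tweaked = dict(config)
    key = random.choice(list(tweaked))
    tweaked[key] *= random.uniform(0.5, 2.0)
    return tweaked

def proxy_loss(config):
    """Pretend small-scale evaluation: a quadratic bowl around a fictional optimum."""
    optimum = {"learning_rate": 1e-3, "warmup_frac": 0.03, "weight_decay": 0.05}
    return sum((config[k] / optimum[k] - 1.0) ** 2 for k in config)

best = BASE_CONFIG
for _ in range(1000):                      # "thousands of different small tweaks"
    candidate = propose_tweak(best)
    if proxy_loss(candidate) < proxy_loss(best):
        best = candidate                   # keep whichever works best empirically

print(best, proxy_loss(best))
```

The point is only that the search loop itself is simple and fully automatable; the hard part in real AI R&D would be making the small-scale proxy evaluation cheap and predictive of large-scale performance.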
I do still think that *eventually* AI systems will learn totally new skills much more efficiently than existing ML systems do, whether that happens through meta-learning or through directly writing learning algorithms much more sample-efficient than SGD. But now this seems likely to come *after* short-horizon, inefficiently-trained coding models operating pretty close to their training distributions have massively accelerated AI research. ### Explicitly breaking out “GPT-N” as an anchor *This change is mainly a matter of explicitly modeling something I’d thought of but found less salient at the time; it felt more important to me to factor it in after the lower bar for TAI made me consider short horizons in general more likely.* My original Short Horizon neural network anchor assumed that effective horizon length would be log-uniformly distributed between ~1 subjective second (which is about GPT-3 level) and ~1000 subjective seconds; this meant that the Short Horizon anchor was assuming an effective horizon length substantially longer than a pure language model (the mean was ~32 subjective seconds, vs ~1 subjective second for a language model). I’m now explicitly putting significant weight on an amount of compute that’s more like “just scaling up language models to brain-ish sizes.” (Note that this hypothesis/anchor is just saying that the training computation is very similar to the amount of computation needed to train GPT-N, not that we’d literally do nothing else besides train a predictive language model. For example, it’s consistent with doing RL fine-tuning but just needing many OOMs less data for that than for the original training run -- and I think that’s the most likely way it would manifest.) Considering endogeneities in spending and research progress ----------------------------------------------------------- *Thanks to Carl Shulman for raising this point, and to Tom Davidson for research fleshing it out.* My report forecasted algorithmic progress (the FLOP required to train a transformative model in year Y), hardware progress (FLOP / $ in year Y), and willingness to spend ($ that the largest training run could spend on FLOP in year Y) as simple trendline forecasts, which I didn’t put very much thought into. In the [open questions section](https://docs.google.com/document/d/1cCJjzZaJ7ATbq8N2fvhmsDOUWdm7t3uSSXv6bD0E_GM/edit#heading=h.1ewyaoago2z6), I gestured at various ways these forecasts could be improved. One salient improvement (mentioned but not highlighted very much) would be to switch from a black box trend extrapolation to a model that takes into account *how progress in R&D relates to R&D investment.* That is, rather than saying “Progress in [hardware/software] has been [X doublings per year] recently, so let’s assume it continues that way,” we could say: 1. Progress in [hardware/software] has been [X doublings per year] 2. Over this time, the amount of [money / labor] invested into [hardware / software R&D] has been growing at [Y doublings per year] 3. This implies that every Y doublings of R&D leads to X doublings of improvement in [hardware / software] This would then allow us to express beliefs about how investment in R&D will change, which can then translate into beliefs about how fast research will progress. And if ML systems have lucrative near-term applications, then it seems likely there will be demand for increasing investment into hardware and software R&D beyond the historical trend, suggesting that this progress should happen faster than I model. 
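Here is a minimal sketch of that accounting, with made-up illustrative numbers (nothing below comes from the report or from Tom Davidson's research; it only shows the shape of the calculation): back out how many doublings of hardware/software performance one historically got per doubling of R&D investment, then ask what the performance trend looks like if investment starts doubling faster. It assumes the historical returns ratio stays constant, which is exactly the kind of assumption a more careful model would probe.

```python
# Illustrative numbers only -- not estimates from the report.
hist_perf_doublings_per_year = 0.5     # X: observed performance trend
hist_invest_doublings_per_year = 0.25  # Y: growth of R&D investment over the same period

returns = hist_perf_doublings_per_year / hist_invest_doublings_per_year
# i.e. each doubling of R&D investment has historically bought `returns` doublings of performance

# Suppose lucrative ML applications pull in investment twice as fast going forward:
future_invest_doublings_per_year = 0.5
future_perf_doublings_per_year = returns * future_invest_doublings_per_year

print(f"{returns:.1f} performance doublings per investment doubling")
print(f"projected trend: {future_perf_doublings_per_year:.2f} doublings/year "
      f"vs. the naive extrapolation of {hist_perf_doublings_per_year:.2f}")
```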
Furthermore, it seems possible that pre-transformative systems would substantially automate some parts of AI research itself, potentially further increasing the effective “total R&D efforts gone into AI research” beyond what might be realistic from increasing the human labor force alone. Seeing continued progress and no major counterexamples to DL scaling well ------------------------------------------------------------------------- My [timelines model assumed that](https://docs.google.com/spreadsheets/d/1TjNQyVHvHlC-sZbcA7CRKcCp0NxV6MkkqBvL408xrJw/edit#gid=505210495) there was a large (80%) chance that scaling up 2020 ML techniques to use some large-but-not-astronomically-large amount of computation (and commensurate amount of data) would work for producing a transformative model.[[7]](#fn-LnaAQkuHYCr3b3oQ7-7) Over the last two years, I’d say deep learning has broadly continued to scale up well. Since that was the default assumption of my model, there isn’t a big update toward shorter timelines here -- but there was some opportunity for deep learning to “hit a wall” over the last two years, and that didn’t really happen, modestly increasing my confidence in the premise. ### Seeing some cases of surprisingly fast progress My forecasting method was pretty anchored to estimates of brain computation, rather than observations of the impressiveness of models that existed at the time, so I was pretty unsure what the framework would imply for very-near-term progress[[8]](#fn-LnaAQkuHYCr3b3oQ7-8) (“How good at coding would a mouse be, if it’d been bred over millennia to write code instead of be a mouse?”). As a result, I didn’t closely track specific capabilities advances over the last two years; I’d have probably deferred to superforecasters and the like about the timescales for particular near-term achievements. But [progress on some not-cherry-picked benchmarks was notably faster than what forecasters predicted](https://bounded-regret.ghost.io/ai-forecasting-one-year-in/), so that should be some update toward shorter timelines for me. I’m pretty unsure how much, and it’s possible this should be larger than I think now. Making a one-time upward adjustment for “2020 FLOP / $” ------------------------------------------------------- In my report I estimated that the effective computation per dollar in 2020 was 1e17 FLOP/$, and projected this forward to get hardware estimates for future years. However, this seems to have been an underestimate of FLOP/$ as of 2020. This is because: * I was using [the V100](https://www.nvidia.com/en-us/data-center/v100/) as my reference machine; this was in fact the most advanced publicly available chip on the market as of 2020, but it was released in 2018 and on its way out, so it was better as an estimate for 2018 or 2019 compute than 2020 compute. The more advanced [A100](https://www.nvidia.com/en-us/data-center/a100/) was 2-3x more powerful per dollar and released in late 2020 almost immediately after my report was published. * I was using the rental price of a V100 (~$1/hour), but big companies get better deals on compute than that, by about another 2-3x. * I was assuming ~⅓ utilization of FLOP/s, which was in line with what people were achieving then, but utilization seems to have improved, maybe to ~50% or so. This means the 2020 start point should have been 2.5 \* 2.5 \* 1.5 ≈ 9.4x, i.e. nearly 10x larger. 
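Spelling out that arithmetic (a sketch of my own; the individual factors are the rough figures quoted above, not precise measurements):

```python
flop_per_dollar_2020_report = 1e17    # the report's original 2020 estimate

a100_vs_v100 = 2.5                    # newer chip ~2-3x more FLOP per dollar than the V100
bulk_discount = 2.5                   # large buyers pay ~2-3x less than rental prices
utilization_gain = 0.50 / (1 / 3)     # ~50% achieved utilization vs the ~1/3 assumed

adjustment = a100_vs_v100 * bulk_discount * utilization_gain
print(f"combined one-time adjustment: ~{adjustment:.1f}x")       # ~9.4x, i.e. nearly 10x
print(f"adjusted 2020 estimate: ~{flop_per_dollar_2020_report * adjustment:.1e} FLOP/$")
```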
From the 2020 start point, I projected that FLOP / $ would double every ~2.5 years -- which is slightly faster than the 2010 to 2018 period but slightly slower than Moore’s law. I haven't looked into it deeply but my understanding is that this [has roughly held](https://www.lesswrong.com/posts/c6KFvQcZggQKZzxr9/trends-in-gpu-price-performance), so the update I’m making here is a one-time increase to the starting point rather than a change in rate (separate from the changes in rate I’m imagining due to [the endogeneities update](#Considering_endogeneities_in_spending_and_research_progress)). Updates that push toward longer timelines ========================================= * My report estimates that the amount of training data required to train a model with N parameters scales as N^0.8, based significantly on results from [Kaplan et al 2020](https://arxiv.org/abs/2001.08361?context=cs.LG). In 2022, [the Chinchilla scaling result (Hoffmann et al 2022)](https://www.deepmind.com/publications/an-empirical-analysis-of-compute-optimal-large-language-model-training) showed that instead the amount of data should scale as N. + Some people have suggested this should be an update toward shorter timelines. This would be true if your method for forecasting timelines was observing models like GPT-2 and GPT-3, gauging how impressive they were, and trying to guess how many orders of magnitude more training computation would be required to reach TAI. The Chinchilla result would show that GPT-3 was “not as good as it could have been” for a fixed amount of training computation, so your estimate for the amount of additional scaling required should go down. + But in my report I arrive at a forecast by fixing a model size based on estimates of brain computation, and then using scaling laws to estimate how much data is required to train a model of that size. The update from Chinchilla is then that we need more data than I might have thought. * I’m somewhat surprised that I haven’t seen more vigorous commercialization of language models and commercial applications that seem to reliably add real value beyond novelty; this is some update toward thinking that language models are less impressive than they seemed to me, or that it’s harder to translate from a capable model into economic impact than I believed. * [There’s been a major market downturn that hit tech companies especially hard](https://www.cbsnews.com/news/tech-companies-layoffs-stock-market-cryptocurrency/); it seems a little less likely to me now than it did when writing the report that there will be a billion dollar training run by 2025. Overall, the updates in the previous section seem a lot stronger than these updates. Claims associated with short timelines that I still don’t buy ============================================================= * I don’t expect a discontinuous jump in AI systems’ generality or depth of thought from stumbling upon a deep core of intelligence; I’m not totally sure I understand it but I probably don’t expect [a sharp left turn](https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization). * Relatedly, I don’t expect that progress will be driven by a small number of key “game-changing” algorithmic insights we can’t anticipate today; I expect transformative models to look quite similar to today’s models[[9]](#fn-LnaAQkuHYCr3b3oQ7-9) (more so now that my timelines are shorter) and progress to be driven by scale and a large number of smaller algorithmic improvements. 
* Relatedly, I still don’t expect that TAI will be cheap (e.g. <$10B for a project) and don’t think smallish “underdog” research groups are likely to develop TAI;[[10]](#fn-LnaAQkuHYCr3b3oQ7-10) I still expect developing TAI to require hundreds of billions of dollars and to be done by tech companies with high valuations,[[11]](#fn-LnaAQkuHYCr3b3oQ7-11) likely after pretty significant commercialization of sub-transformative systems. * I think the concept of a [“point of no return”](https://www.lesswrong.com/posts/JPan54R525D68NoEt/the-date-of-ai-takeover-is-not-the-day-the-ai-takes-over) that is not an objective observable event like “the AI has monopolized violence” is tricky to reason about, since it folds in complicated forecasts about the competence and options of future actors,[[12]](#fn-LnaAQkuHYCr3b3oQ7-12) but to the extent that I understand the concept I’m mostly not expecting a PONR before explosive acceleration in research progress is underway (e.g. I don’t expect a PONR before the first automated AI company). The update toward [a lower bar for “transformative”](https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines#Picturing_a_more_specific_and_somewhat_lower_bar_for_TAI) pushes me more in this direction -- I expect there to be a lot of helpful things to do in the final couple of years in between “AIs accelerate AI research a lot” and “Far superintelligent AI” (a big one being use the AIs to help with alignment research). Sources of bias I’m not sure what to do with ============================================ Putting numbers on timelines is in general a kind of insane and stressful exercise, and the most robust thing I’ve taken away from thinking about all this is something like “It’s really a real, live possibility that the world as we know it is radically upended soon, soon enough that it should matter to all of us on normal planning horizons.” A large source of variance in stated numbers is messy psychological stuff. The most important bias that suggests I’m not updating hard enough toward short timelines is that I face [sluggish updating](https://sideways-view.com/2018/07/12/epistemic-incentives-and-sluggish-updating/) incentives in this situation -- bigger changes to my original beliefs will make people update harder against my reasonableness in the past, and holding out some hope that my original views were right after all could be the way to maximize social credit (on my unconscious calculation of how social credit works). But there are forces in the other direction too -- most of my social group is pretty system 1 bought into short timelines,[[13]](#fn-LnaAQkuHYCr3b3oQ7-13) which for many of us likely emotionally justifies our choice to be all-in on AI risk with our careers. My own choices since 2020 look even better on my new views than my old. I have constantly heard criticism over the last two years that my timelines are too long, and very little criticism that they were too short, even though almost everyone in the world (including economists, ML people, etc) would probably have the other view. I find myself not as interested or curious as I should theoretically be (on certain models of epistemic virtue) in pushback from such people. I spend most of my time visualizing concrete worlds in which things move fast and hardly any time visualizing concrete worlds where they move slow.[[14]](#fn-LnaAQkuHYCr3b3oQ7-14) What does this mean? 
==================== I’m unclear how decision-relevant bouncing around within the range I’ve been bouncing around is. Given my particular skills and resources, I’ve steered my career over the last couple years in a direction that looks roughly as good or better on shorter timelines than what I had in 2020. This update should also theoretically translate into a belief that we should allocate more money to AI risk over other areas such as bio risk, but this doesn’t in fact bind us since even on our previous views, we would have liked to spend more but were more limited by a lack of capacity for seeking out and evaluating possible grants than by pure money. Probably the biggest behavioral impact for me (and a lot of people who’ve updated toward shorter timelines in the last few years) will be to be more forceful and less sheepish about expressing urgency when e.g. trying to recruit particular people to work on AI safety or policy. The biggest strategic update that I’m reflecting on now is the prospect of making a lot of extremely fast progress in alignment with comparatively limited / uncreative / short-timescale systems in some period a few months or a year before systems that are agentic / creative enough to take over the world. I’m not sure how realistic this is, but reflecting on how much progress could be made with pretty “dumb” systems makes me want to game out this possibility more. --- --- 1. Bio Anchors part 4 of 4, pg 14-16. [↩︎](#fnref-LnaAQkuHYCr3b3oQ7-1) 2. A year chosen to evaluate a claim made by Holden in 2016 that there was a [>10% chance of TAI within 20 years](https://www.openphilanthropy.org/research/some-background-on-our-views-regarding-advanced-artificial-intelligence/). [↩︎](#fnref-LnaAQkuHYCr3b3oQ7-2) 3. Given by [the ratio of odds ratios](https://betterexplained.com/articles/understanding-bayes-theorem-with-ratios/): (0.35 / 0.65) / (0.15 / 0.85) = 3.05. This implies my observations and logical updates from thinking more since 2020 were 3x more likely in a world where TAI happens by 2036 than in a world where it doesn’t happen by 2036. [↩︎](#fnref-LnaAQkuHYCr3b3oQ7-3) 4. This contains 50% of my probability mass but is not a “50% confidence interval” as the term is normally used, because the range I’m considering is not the range from 25% probability to 75% probability. This is mainly to keep the focus on the left tail of the distribution, which is more important and easier to think about. E.g. if I’m wrong about most of the models that lead me to expect TAI soonish, then my probability climbs very slowly up to 75% since I would revert back to simple priors. [↩︎](#fnref-LnaAQkuHYCr3b3oQ7-4) 5. People use [“few-shot learning”](https://arxiv.org/abs/2005.14165) to refer to language models’ ability to understand the pattern of what they’re being asked for after seeing a small number of examples in the prompt (e.g. after seeing a couple of examples of translating an English sentence into French, a model will complete the pattern and translate the next English sentence into French). However, this doesn’t seem like much evidence about the kind of meta-learning I’m interested in, because it takes very little time for a human to learn a pattern like that. If a bilingual French-and-English-speaking human saw a context with two examples of translating an English sentence into French, they would ~immediately understand what was going on. 
Since the model already knows English and French, the learning problem it faces is very short-horizon (the amount of time it would take a human to read the text). I haven’t yet seen evidence that language models can be taught new skills they definitely didn’t know already over the course of many rounds of back-and-forth. I’ve also seen [EfficientZero](https://arxiv.org/pdf/2111.00210.pdf) cited as evidence that SGD itself can reach human-level sample complexities (without the need for explicit meta-learning), but this doesn’t seem right to me. The EfficientZero model learned the environment dynamics of a game with ML and then performed a search against that model of the environment to play the game. It took the model a couple of hours per game to learn environment dynamic facts like “the paddle moves left to right”, “if an enemy hits you you die,” etc. The relevant comparison is not how much time it would take a human to learn to play the game, it’s how much time it would take a human to get what’s going on and what they’re supposed to aim for -- and the latter is not something that it would take a human 2 hours of watching to figure out (probably more like 15-60 seconds). The kind of thing that would seem like evidence about efficient meta-learning is something like “A model is somehow trained on a number of different video games, and then is able to learn how to play (not just model the dynamics of) a new video game it hadn’t seen before decently with a few hours of experience.” The kind of thing that would seem like evidence of human-like sample efficiency directly from SGD would be something like “good performance on language tasks while training on only as many words as a human sees in a lifetime.” [↩︎](#fnref-LnaAQkuHYCr3b3oQ7-5) 6. All the publicly-available code online (e.g. GitHub), plus company-internal repos, keylogging of software engineers, explicitly constructed curricula/datasets (including datasets automatically generated from the outputs of slightly smaller coding models), etc. Also, it seems like most of the reasoning that went into generating the code is in some sense manifested in the code itself, whereas e.g. the thinking and experimentation that went into a biology experiment isn’t all directly present in the resulting paper. [↩︎](#fnref-LnaAQkuHYCr3b3oQ7-6) 7. Row 17 in sheet “Main” [↩︎](#fnref-LnaAQkuHYCr3b3oQ7-7) 8. The key exception, as discussed above, was predicting that cheap meta-learning wouldn’t happen, which I’d say it didn’t. [↩︎](#fnref-LnaAQkuHYCr3b3oQ7-8) 9. Note that there’s a bunch of already-widely-used techniques (most notably search) that some people wouldn’t count as “pure deep learning” which I expect to continue to play an important role. Transformative AI seems quite likely to look like AlphaGo (which uses search), RETRO (which uses retrieval), etc. [↩︎](#fnref-LnaAQkuHYCr3b3oQ7-9) 10. Though my probability on this necessarily increased some because shifting the distribution to the left has to increase probability mass on “surprisingly cheap,” it’s still not my default. If I had to guess I’d say maybe ~15% chance on <$10B for training TAI and a similar probability that it’s trained by a company with <2% of the valuation of the biggest tech companies. Betting on this possibility -- with one’s career or investments -- seems better than it did to me before (and I don't think it was insane in the past either; just not something I'd consider the default picture of the future). [↩︎](#fnref-LnaAQkuHYCr3b3oQ7-10) 11. 
Though labs that are currently smallish could grow to have massive valuations and a ton of employees and then develop transformative systems, and that seems a lot more likely than that a company would develop TAI while *staying* small. [↩︎](#fnref-LnaAQkuHYCr3b3oQ7-11) 12. E.g., it seems like it wouldn’t be hard to argue for a “PONR” in the past, along the lines of “alignment is so hard that the fact that we didn’t get started on it 20 years ago means we’re past the point of no return.” Instead it feels like the difficulty of changing course just gets worse and worse over time, and there are very late-stage opportunities that could still technically help, like “shutting down all the datacenters.” [↩︎](#fnref-LnaAQkuHYCr3b3oQ7-12) 13. I might have ended up, with this update, near the median of the people I hang out with most, but I could also still be slower than them -- not totally sure. [↩︎](#fnref-LnaAQkuHYCr3b3oQ7-13) 14. Longer timelines are harder to think about since the world will have changed more before TAI, less decision-relevant since our actions will have washed out more, less emotionally gripping, etc. [↩︎](#fnref-LnaAQkuHYCr3b3oQ7-14)
3a07c8a8-93e5-4cd2-8704-502f9c5a4157
trentmkelly/LessWrong-43k
LessWrong
Philosophy of mind review I recently read A Brief Introduction to the Philosophy of Mind, a short undergraduate text. I didn’t understand some bits, but I’m not sure if that’s because the book wasn’t that good or philosophy isn’t or I’m not. Here I list them, for you to enlighten me on: 1. It’s apparently standard to use what you do or don’t want to believe as evidence for what is true. E.g. A legitimate criticism of parallelism and epiphenomenalism is that they are ‘fatalistic’. If a theory means that aliens wouldn’t feel the same as us, then it is too anthropomorphic. The problem of other minds implies that we don’t know how others feel, but we tend to assume we do, therefore we do and anything that implies otherwise is wrong. “Externalism, then, opens the door to an unpalatable form of skepticism, and this is reason enough to adopt internalism instead.” Is there some legit reason for this? 2. It’s apparently standard to use the fact that you can imagine a situation where the theory wouldn’t hold as evidence that it isn’t true. E.g. That you can imagine someone with a different brain state and the same mind state is evidence against their coincidence. You can imagine zombies, so functions or brain states can’t determine mental states. It would be correct to say that your previous concept of x can’t determine y if you can imagine it varying with the same y, but it’s not evidence that the concept can’t be extended to coincide. 3. An argument against the interaction between mind and brain necessary for dualism: “..The mind is non-physical and so does not occupy space. If the mind cannot occupy space, there can be no place in the brain or space where interaction happens”. Why does causality have to take up space? 4. Parallelism (the version of dualism where there is no interaction between mind and body, but it so happens that they coincide, thanks to God or something else conveniently external) is not criticized for the parallel existence of a physical world being completely unnecessary to
9e4e0897-3d40-4a0e-a012-140d6784cc49
trentmkelly/LessWrong-43k
LessWrong
The goal of physics In grad school, I was a teaching assistant for a course called Why the Sky is Blue. It was a qualitative introduction to physics for non-majors, covering a lot of the same topics as Physics I, such as forces, conservation of energy and momentum, electric charges and magnetic fields, in less detail, with not much math. The actual question about why the sky is blue was saved for the end. As the course dragged on and the students (who expected no math, rather than not much math) started to complain, "Are we ever going to find out why the sky is blue?" I watched the schedule slip and wondered the same thing. We skipped some sections and managed to wedge it into the last lecture: finally, we were talking about why the sky is blue! "The sky is blue because of Rayleigh scattering." Okay, that's not an answer: we hadn't defined Rayleigh scattering, and there wasn't time for it, so we said that air molecules absorb and re-radiate—effectively changing the direction of—blue light more than red light. Red light goes straight through the atmosphere, and blue light bounces around, making the whole sky glow blue. Conversely, sunrises and sunsets are red because you're looking at the light that has gone straight through a larger wedge of atmosphere. It lost most of its blue on the way to your eye. Pretty good explanation, for not being able to say $$I = I_0 \frac{1 + \cos^2\theta}{2R^2} \left(\frac{2\pi}{\lambda}\right)^4 \left(\frac{n^2 - 1}{n^2 + 2}\right)^2 \left(\frac{d}{2}\right)^6$$ (the \(1/\lambda^4\) part affects small-\(\lambda\) blue light more than large-\(\lambda\) red light). We also showed pictures like this sunset [sunset photo] to demonstrate the effect of straight-through red light and bouncing-around blue light. So in the end, "Why is the sky blue?" Answer: "Because sunsets are red!" "And why are sunsets red...?" It was understandably unsatisfying. One thing was only explained in terms of another thing. But even if we had the time to get into detail about Rayleigh scattering, they could reasonably ask, "Why does light scatter according to that formula?" We could go deeper and explain Lord Rayleigh's proof in
db75c254-da6b-4316-bd96-0cf06a0bd25c
trentmkelly/LessWrong-43k
LessWrong
Arthur Chu: Jeopardy! champion through exemplary rationality http://mentalfloss.com/article/54853/our-interview-jeopardy-champion-arthur-chu I'm not sure I've ever seen such a compelling "rationality success story". There's so much that's right here. The part that really grabs me about this is that there's no indication that his success has depended on "natural" skill or talent. And none of the strategies he's using are from novel research. He just studied the "literature" and took the results seriously. He didn't arbitrarily deviate from the known best practice based on aesthetics or intuition. And he kept a simple, single-minded focus on his goal. No lost purposes here --- just win as much money as possible, bank the winnings, and use it to self-insure. It's rationality-as-winning, plain and simple.
5b4b8c91-a7e6-4690-956a-fabb5db22a28
trentmkelly/LessWrong-43k
LessWrong
Social Presuppositions During discussion in my previous post, when we touched on the subject of human statistical majorities, I had a side-thought. Taking the Less Wrong audience as an example, the statistics say that any given participant is strongly likely to be white, male, atheist, and well, just going by general human statistics, probably heterosexual. But in my actual interaction, I've taken as a rule not to make any assumptions about the other person. Does it mean, I thought, that I reset my prior probabilities, and consciously choose to discard information? Not relying on implicit assumptions seems the socially right thing to do, I thought; but is it rational? When I discussed it on IRC, this quote by sh struck me as insightful:

> I.e. making the guess incorrectly probably causes far more friction than deliberately not making a correct guess you could make.

I came up with the following payoff matrix:

|  | Bob has trait X (p = 0.95) | Bob doesn't have trait X (p = 0.05) |
| --- | --- | --- |
| Alice acts as if Bob has trait X | +1 | -100 |
| Alice acts without assumptions about Bob | 0 | 0 |

In this case, the second option is preferable in expectation. In other words, I don't discard the information, but the repercussions to our social interaction in case of an incorrect guess outweigh the benefit from guessing correctly. And it also matters whether either Alice or Bob is an Asker or a Guesser. One consequence I can think of is that with a sufficiently low p, or if Bob wouldn't be particularly offended by Alice's incorrect guess, taking the guess would be preferable. Now I wonder if we do that a lot in daily life with issues we don't consider controversial ("hmm, are you from my country/state too?"), and if all the "you're overreacting/too sensitive" complaints come from Alice incorrectly assessing the negative payoff in cell (0, 1) as smaller in absolute value than it really is.
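A quick check of the expected values implied by the payoff matrix above (my own arithmetic, using the post's numbers):

```python
p_has_trait = 0.95
payoff_right, payoff_wrong = 1, -100

ev_act_on_guess = p_has_trait * payoff_right + (1 - p_has_trait) * payoff_wrong
ev_no_assumption = 0.0
print(ev_act_on_guess, ev_no_assumption)   # -4.05 vs 0.0: withholding the guess wins in expectation

# Break-even point: with these payoffs, acting on the guess only has positive expected
# value when the chance of guessing wrong falls below 1/101 (about 1%), or when the
# penalty for a wrong guess is much smaller in magnitude than 100.
p_wrong_break_even = 1 / 101
print(p_wrong_break_even)
```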
0ba4687e-a47d-477f-b956-029448832eba
trentmkelly/LessWrong-43k
LessWrong
Circular Altruism vs. Personal Preference Suppose there is a diagnostic procedure that allows one to catch a relatively rare disease with absolute precision. If left untreated, the disease is fatal, but when diagnosed it's easily treatable (I suppose there are some real-world approximations). The diagnostic involves an uncomfortable procedure and an inevitable loss of time. At what a priori probability would you not care to take the test, leaving this outcome to chance? Say, you decide it's 0.0001%. Enter timeless decision theory. Your decision to take or not take the test may as well be considered a decision for the whole population (let's also assume you are typical and everyone is similar in this decision). By deciding to personally not take the test, you've decided that most people won't take the test, and thus, for example, with 0.00005% of the population having the condition, about 3000 people will die. While the personal tradeoff is fixed, this number obviously depends on the size of the population. It seems like a horrible thing to do, making a decision that results in 3000 deaths. Thus, taking the test seems like a small personal sacrifice for this gift to others. Yet this is circular: everyone would be thinking that, reversing their decision solely to help others, not benefiting personally. Nobody benefits. Obviously, together with 3000 lives saved, there is the factor of 6 billion people accepting the test, and that harm is also part of the outcome chosen by the decision. If everyone personally prefers to not take the test, then inflicting the opposite on the whole population is only so much worse. Or is it? What if you care more about other people's lives in proportion to their comfort than you care about your own life in proportion to your own comfort? How can caring about other people be in exact harmony with caring about yourself? It may be the case that you prefer other people to take the test, even if you don't want to take the test yourself, and that is the position of the whole population. What is the right t
f721e7b4-2887-4199-95e0-332d2d08327a
StampyAI/alignment-research-dataset/blogs
Blogs
A reply to Francois Chollet on intelligence explosion This is a reply to Francois Chollet, the inventor of the Keras wrapper for the Tensorflow and Theano deep learning systems, on his essay “[The impossibility of intelligence explosion](https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec).” In response to critics of his essay, Chollet tweeted:   > If you post an argument online, and the only opposition you get is braindead arguments and insults, does it confirm you were right? Or is it just self-selection of those who argue online? > > And he earlier tweeted:   > Don’t be overly attached to your views; some of them are probably incorrect. An intellectual superpower is the ability to consider every new idea as if it might be true, rather than merely checking whether it confirms/contradicts your current views. > > Chollet’s essay seemed mostly on-point and kept to the object-level arguments. I am led to hope that Chollet is perhaps somebody who believes in abiding by the rules of a debate process, a fan of what I’d consider Civilization; and if his entry into this conversation has been met only with braindead arguments and insults, he deserves a better reply. I’ve tried here to walk through some of what I’d consider the standard arguments in this debate as they bear on Chollet’s statements. As a meta-level point, I hope everyone agrees that an invalid argument for a true conclusion is still a bad argument. To arrive at the correct belief state we want to sum all the valid support, and only the valid support. To tally up that support, we need to have a notion of judging arguments on their own terms, based on their local structure and validity, and not excusing fallacies if they support a side we agree with for other reasons. My reply to Chollet doesn’t try to carry the entire case for the intelligence explosion as such. I am only going to discuss my take on the validity of Chollet’s particular arguments. Even if the statement “an intelligence explosion is impossible” happens to be true, we still don’t want to accept any invalid arguments in favor of that conclusion. Without further ado, here are my thoughts in response to Chollet.   > The basic premise is that, in the near future, a first “seed AI” will be created, with general problem-solving abilities slightly surpassing that of humans. This seed AI would start designing better AIs, initiating a recursive self-improvement loop that would immediately leave human intelligence in the dust, overtaking it by orders of magnitude in a short time. > > I agree this is more or less what I meant by “seed AI” when I coined the term back in 1998. Today, nineteen years later, I would talk about a general question of “capability gain” or how the power of a cognitive system scales with increased resources and further optimization. The idea of recursive self-improvement is only one input into the general questions of capability gain; for example, we recently saw some impressively fast scaling of Go-playing ability without anything I’d remotely consider as seed AI being involved. That said, I think that a lot of the questions Chollet raises about “self-improvement” are relevant to capability-gain theses more generally, so I won’t object to the subject of conversation.   > Proponents of this theory also regard intelligence as a kind of superpower, conferring its holders with almost supernatural capabilities to shape their environment  > > A good description of a human from the perspective of a chimpanzee. 
From a certain standpoint, the civilization of the year 2017 could be said to have “magic” from the perspective of 1517. We can more precisely characterize this gap by saying that we in 2017 can solve problems using strategies that 1517 couldn’t recognize as a “solution” if described in advance, because our strategies depend on laws and generalizations not known in 1517. E.g., I could show somebody in 1517 a design for a compressor-based air conditioner, and they would not be able to recognize this as “a good strategy for cooling your house” in advance of observing the outcome, because they don’t yet know about the temperature-pressure relation. A fancy term for this would be “[strong cognitive uncontainability](https://arbital.com/p/strong_uncontainability/)”; a metaphorical term would be “magic” although of course we did not do anything actually supernatural. A similar but much larger gap exists between a human and a smaller brain running the previous generation of software (aka a chimpanzee). It’s not exactly unprecedented to suggest that big gaps in cognitive ability correspond to big gaps in pragmatic capability to shape the environment. I think a lot of people would agree in characterizing intelligence as the Human Superpower, independently of what they thought about the intelligence explosion hypothesis.   > — as seen in the science-fiction movie Transcendence (2014), for instance. > > I agree that public impressions of things are things that *someone* ought to be concerned about. If I take a ride-share and I mention that I do anything involving AI, half the time the driver says, “Oh, like Skynet!” This is an understandable reason to be annoyed. But if we’re trying to figure out the sheerly factual question of whether an intelligence explosion is possible and probable, it’s important to consider the best arguments on all sides of all relevant points, not the popular arguments. For that purpose it doesn’t matter if Deepak Chopra’s writing on quantum mechanics has a larger readership than any actual physicist. Thankfully Chollet doesn’t spend the rest of the essay attacking Kurzweil in particular, so I’ll leave this at that.   > The intelligence explosion narrative equates intelligence with the general problem-solving ability displayed by individual intelligent agents — by current human brains, or future electronic brains. > > I don’t see what work the word “individual” is doing within this sentence. From our perspective, it matters little whether a computing fabric is imagined to be a hundred agents or a single agency, if it seems to behave in a coherent goal-directed way as seen from outside. The pragmatic consequences are the same. I do think it’s fair to say that I think about “agencies” which from our outside perspective seem to behave in a coherent goal-directed way.   > The first issue I see with the intelligence explosion theory is a failure to recognize that intelligence is necessarily part of a broader system — a vision of intelligence as a “brain in jar” that can be made arbitrarily intelligent independently of its situation. > > I’m not aware of myself or Nick Bostrom or another major technical voice in this field claiming that problem-solving can go on independently of the situation/environment. That said, some systems function very well in a broad variety of structured low-entropy environments. E.g. 
the human brain functions much better than other primate brains in an extremely broad set of environments, including many that natural selection did not explicitly optimize for. We remain functional on the Moon, because the Moon has enough in common with the Earth on a sufficiently deep meta-level that, for example, *induction on past experience* goes on functioning there. Now if you tossed us into a universe where the future bore no compactly describable relation to the past, we would indeed not do very well in that “situation”—but this is not pragmatically relevant to the impact of AI on our own real world, where the future does bear a relation to the past.   > In particular, there is no such thing as “general” intelligence. On an abstract level, we know this for a fact via the “no free lunch” theorem — stating that no problem-solving algorithm can outperform random chance across all possible problems. > > [Scott Aaronson’s reaction](https://www.scottaaronson.com/blog/?p=3553): “Citing the ‘No Free Lunch Theorem’—i.e., the (trivial) statement that you can’t outperform brute-force search on *random* instances of an optimization problem—to claim anything useful about the limits of AI, is not a promising sign.” It seems worth spelling out an as-simple-as-possible special case of this point in mathy detail, since it looked to me like a central issue given the rest of Chollet’s essay. I expect this math isn’t new to Chollet, but I reprise it here to establish common language and for the benefit of everyone else reading along. [Laplace’s Rule of Succession](https://arbital.com/p/laplace_rule_of_succession/), as invented by Thomas Bayes, gives us one simple rule for predicting future elements of a binary sequence based on previously observed elements. Let’s take this binary sequence to be a series of “heads” and “tails” generated by some sequence generator called a “coin”, not assumed to be fair. In the standard problem setup yielding the Rule of Succession, our state of prior ignorance is that we think there is some frequency \(\theta\) that a coin comes up heads, and for all we know \(\theta\) is equally likely to take on any real value between \(0\) and \(1\). We can do some Bayesian inference and conclude that after seeing \(M\) heads and \(N\) tails, we should predict that the odds for heads : tails on the next coinflip are:   $$\frac{M + 1}{M + N + 2} : \frac{N + 1}{M + N + 2}$$   (See [Laplace’s Rule of Succession](https://arbital.com/p/laplace_rule_of_succession/) for the proof.) This rule yields advice like: “If you haven’t yet observed any coinflips, assign 50-50 to heads and tails” or “If you’ve seen four heads and no tails, assign 1/6 probability [rather than 0 probability](https://arbital.com/p/cromwells_rule/) to the next flip being tails” or “If you’ve seen the coin come up heads 150 times and tails 75 times, assign around 2/3 probability to the coin coming up heads next time.” Now this rule does not do super-well in any possible kind of environment. In particular, it doesn’t do any better than the maximum-entropy prediction “the next flip has a 50% probability of being heads, or tails, regardless of what we have observed previously” if the environment is in fact a fair coin. In general, there is “no free lunch” on predicting arbitrary binary sequences; if you assign greater probability mass or probability density to one binary sequence or class of sequences, you must have done so by draining probability from other binary sequences. 
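A few lines of code (mine, just to make the rule concrete, not part of the original essay) reproduce the predictions quoted above: after observing M heads and N tails, the Rule of Succession assigns the next flip a heads-probability of (M + 1) / (M + N + 2).

```python
def laplace_p_heads(m_heads: int, n_tails: int) -> float:
    """Posterior predictive P(next flip = heads) under a uniform prior on theta."""
    return (m_heads + 1) / (m_heads + n_tails + 2)

print(laplace_p_heads(0, 0))     # 0.5  -- no flips observed yet
print(1 - laplace_p_heads(4, 0)) # 1/6  -- probability of tails after four heads, no tails
print(laplace_p_heads(150, 75))  # ~0.665, roughly 2/3, after 150 heads and 75 tails
```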
If you begin with the prior that every binary sequence is equally likely, then you never expect any algorithm to do better *on average* than maximum entropy, even if that algorithm luckily does better in one particular random draw. On the other hand, if you start from the prior that every binary sequence is equally likely, you never notice anything a human would consider an obvious pattern. If you start from the maxentropy prior, then after observing a coin come up heads a thousand times, and tails never, you still predict 50-50 on the next draw; because on the maxentropy prior, the sequence “one thousand heads followed by tails” is exactly as likely as “one thousand heads followed by heads”. The inference rule instantiated by Laplace’s Rule of Succession does better in a generic low-entropy universe of coinflips. It doesn’t start from specific knowledge; it doesn’t begin from the assumption that the coin is biased heads, or biased tails. If the coin is biased heads, Laplace’s Rule learns that; if the coin is biased tails, Laplace’s Rule will soon learn that from observation as well. If the coin is actually fair, then Laplace’s Rule will rapidly converge to assigning probabilities in the region of 50-50 and not do much worse per coinflip than if we had started with the max-entropy prior. Can you do better than Laplace’s Rule of Succession? Sure; if the environment’s probability of generating heads is equal to 0.73 and you start out knowing that, then you can guess on the very first round that the probability of seeing heads is 73%. But even with this non-generic and highly specific knowledge built in, you do not do *very* much better than Laplace’s Rule of Succession unless the first coinflips are very important to your future survival. Laplace’s Rule will probably figure out the answer is somewhere around 3/4 in the first dozen rounds, and get to the answer being somewhere around 73% after a couple of hundred rounds, and if the answer *isn’t* 0.73 it can handle that case too. Is Laplace’s Rule the most general possible rule for inferring binary sequences? Obviously not; for example, if you saw the initial sequence… $$HTHTHTHTHTHTHTHT…$$   …then you would probably guess with high though [not infinite](https://arbital.com/p/cromwells_rule/) probability that the next element generated would be \(H\). This is because you have the ability to recognize a kind of pattern which Laplace’s Rule does not, i.e., alternating heads and tails. Of course, your ability to recognize this pattern only helps you in environments that sometimes generate a pattern like that—which the real universe sometimes does. If we tossed you into a universe which *just as frequently* presented you with ‘tails’ after observing a thousand perfect alternating pairs, as it did ‘heads’, then your pattern-recognition ability would be useless. Of course, a max-entropy universe like that will usually not present you with a thousand perfect alternations in the initial sequence to begin with! One extremely general but utterly intractable inference rule is [Solomonoff induction](https://arbital.com/p/solomonoff_induction/), a [universal prior](https://arbital.com/p/universal_prior/) which assigns probabilities to every computable sequence (or computable probability distribution over sequences) proportional to [algorithmic simplicity](https://arbital.com/p/Kolmogorov_complexity/), that is, in inverse proportion to the exponential of the size of the program required to specify the computation. 
Solomonoff induction can learn from observation any sequence that can be generated by a *compact program*, relative to a choice of universal computer which has at most a bounded effect on the amount of evidence required or the number of mistakes made. Of course a Solomonoff inductor will do slightly-though-not-much-worse than the max-entropy prior in a hypothetical structure-avoiding universe in which algorithmically compressible sequences are *less* likely; thankfully we don’t live in a universe like that. It would then seem perverse not to recognize that for large enough milestones we can see an informal ordering from less general inference rules to more general inference rules, those that do well in an increasingly broad and complicated variety of environments of the sort that the real world is liable to generate: The rule that always assigns probability 0.73 to heads on each round, performs optimally within the environment where each flip has independently a 0.73 probability of coming up heads. Laplace’s Rule of Succession will start to do equally well as this, given a couple of hundred initial coinflips to see the pattern; and Laplace’s Rule also does well in many other low-entropy universes besides, such as those where each flip has 0.07 probability of coming up heads. A human is more general and can also spot patterns like \(HTTHTTHTTHTT\) where Laplace’s Rule would merely converge to assigning probability 1/3 of each flip coming up heads, while the human becomes increasingly certain that a simple temporal process is at work which allows each succeeding flip to be predicted with near-certainty. If anyone ever happened across a hypercomputational device and built a Solomonoff inductor out of it, the Solomonoff inductor would be more general than the human and do well in any environment with a programmatic description substantially smaller than the amount of data the Solomonoff inductor could observe. None of these predictors need do very much worse than the max-entropy prediction in the case that the environment is actually max-entropy. It may not be a free lunch, but it’s not all that expensive even by the standards of hypothetical randomized universes; not that this matters for anything, since we don’t live in a max-entropy universe and therefore we don’t care how much worse we’d do in one. Some earlier informal discussion of this point can be found in [No-Free-Lunch Theorems Are Often Irrelevant](https://arbital.com/p/nofreelunch_irrelevant/).   > If intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem. > > Some problems are more general than other problems—not relative to a maxentropy prior, which treats all problem subclasses on an equal footing, but relative to the low-entropy universe we actually live in, where a sequence of a million observed heads is on the next round more liable to generate H than T. Similarly, relative to the problem classes tossed around in our low-entropy universe, “figure out what simple computation generates this sequence” is more general than a human which is more general than “figure out what is the frequency of heads or tails within this sequence.” Human intelligence is a problem-solving algorithm that can be understood with respect to a specific *problem class* that is potentially very, very broad in a pragmatic sense.   > In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. 
The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks — like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human. > > The problem that a human solves is much more general than the problem an octopus solves, which is why we can walk on the Moon and the octopus can’t. We aren’t absolutely general—the Moon still has *a certain something* in common with the Earth. Scientific induction still works on the Moon. It is not the case that when you get to the Moon, the next observed charge of an electron has nothing to do with its previously observed charge; and if you throw a human into an alternate universe like that one, the human stops working. But the problem a human solves *is* general enough to pass from oxygen environments to the vacuum.   > What would happen if we were to put a freshly-created human brain in the body of an octopus, and let in live at the bottom of the ocean? Would it even learn to use its eight-legged body? Would it survive past a few days? … The brain has hardcoded conceptions of having a body with hands that can grab, a mouth that can suck, eyes mounted on a moving head that can be used to visually follow objects (the vestibulo-ocular reflex), and these preconceptions are required for human intelligence to start taking control of the human body. > > It could be the case that in this sense a human’s motor cortex is analogous to an inference rule that always predicts heads with 0.73 probability on each round, and cannot learn to predict 0.07 instead. It could also be that our motor cortex is more like a Laplace inductor that starts out with 72 heads and 26 tails pre-observed, biased toward that particular ratio, but which can eventually learn 0.07 after another thousand rounds of observation. It’s an empirical question, but I’m not sure why it’s a very relevant one. It’s possible that human motor cortex is hyperspecialized—not just jumpstarted with prior knowledge, but inductively narrow and incapable of learning better—since in the ancestral environment, we never got randomly plopped into octopus bodies. But what of it? If you put some humans at a console and gave them a weird octopus-like robot to learn to control, I’d expect their full deliberate learning ability to do better than raw motor cortex in this regard. Humans using their whole intelligence, plus some simple controls, can learn to drive cars and fly airplanes even though those weren’t in our ancestral environment. We also have no reason to believe human motor cortex is the limit of what’s possible. If we sometimes got plopped into randomly generated bodies, I expect we’d already have motor cortex that could adapt to octopodes. Maybe MotorCortex Zero could do three days of self-play on controlling randomly generated bodies and emerge rapidly able to learn any body in that class. Or, humans who are allowed to use Keras could figure out how to control octopus arms using ML. The last case would be most closely analogous to that of a hypothetical seed AI.   > Empirical evidence is relatively scarce, but from what we know, children that grow up outside of the nurturing environment of human culture don’t develop any human intelligence. Feral children raised in the wild from their earliest years become effectively animals, and can no longer acquire human behaviors or language when returning to civilization. 
> > Human visual cortex doesn’t develop well without visual inputs. This doesn’t imply that our visual cortex is a simple blank slate, and that all the information to process vision is stored in the environment, and the visual cortex just adapts to that from a blank slate; if that were true, we’d expect it to easily take control of octopus eyes. The visual cortex requires visual input because of the logic of evolutionary biology: if you make X an environmental constant, the species is liable to acquire genes that assume the presence of X. It has no reason not to. The expected result would be that the visual cortex contains a large amount of genetic complexity that makes it better than generic cerebral cortex at doing vision, but some of this complexity requires visual input during childhood to unfold correctly. But if in the ancestral environment children had grown up in total darkness 10% of the time, before seeing light for the first time on adulthood, it seems extremely likely that we could have evolved to not require visual input in order for the visual cortex to wire itself up correctly. E.g., the retina could have evolved to send in simple hallucinatory shapes that would cause the rest of the system to wire itself up to detect those shapes, or something like that. Human children reliably grow up around other humans, so it wouldn’t be very surprising if humans evolved to build their basic intellectual control processes in a way that assumes the environment contains this info to be acquired. We cannot thereby infer how much information is being “stored” in the environment or that an intellectual control process would be too much information to store genetically; that is not a problem evolution had reason to try to solve, so we cannot infer from the lack of an evolved solution that such a solution was impossible. And even if there’s no evolved solution, this doesn’t mean you can’t intelligently design a solution. Natural selection never built animals with steel bones or wheels for limbs, because there’s no easy incremental pathway there through a series of smaller changes, so those designs aren’t very evolvable; but human engineers still build skyscrapers and cars, etcetera. Among humans, the art of Go is stored in a vast repository of historical games and other humans, and future Go masters among us grow up playing Go as children against superior human masters rather than inventing the whole art from scratch. You would not expect even the most talented human, reinventing the gameplay all on their own, to be able to win a competition match with a first-dan pro. But AlphaGo was initialized on this vast repository of played games in stored form, rather than it needing to actually play human masters. And then less than two years later, AlphaGo Zero taught itself to play at a vastly human-superior level, in three days, by self-play, from scratch, using a much simpler architecture with no ‘instinct’ in the form of precomputed features. Now one may perhaps postulate that there is some sharp and utter distinction between the problem that AlphaGo Zero solves, and the much more general problem that humans solve, whereby our vast edifice of Go knowledge can be surpassed by a self-contained system that teaches itself, but our general cognitive problem-solving abilities can neither be compressed into a database for initialization, nor taught by self-play. But why suppose that? Human civilization taught itself by a certain sort of self-play; we didn’t learn from aliens. 
More to the point, I don’t see a sharp and utter distinction between Laplace’s Rule, AlphaGo Zero, a human, and a Solomonoff inductor; they just learn successively more general problem classes. If AlphaGo Zero can waltz past all human knowledge of Go, I don’t see a strong reason why AGI Zero can’t waltz past the human grasp of how to reason well, or how to perform scientific investigations, or how to learn from the data in online papers and databases. This point could perhaps be counterargued, but it hasn’t yet been counterargued to my knowledge, and it certainly isn’t settled by any theorem of computer science known to me.   > If intelligence is fundamentally linked to specific sensorimotor modalities, a specific environment, a specific upbringing, and a specific problem to solve, then you cannot hope to arbitrarily increase the intelligence of an agent merely by tuning its brain — no more than you can increase the throughput of a factory line by speeding up the conveyor belt. Intelligence expansion can only come from a co-evolution of the mind, its sensorimotor modalities, and its environment. > > It’s not obvious to me why any of this matters. Say an AI takes three days to learn to use an octopus body. So what? That is: We agree that it’s a mathematical truth that you need “some amount” of experience to go from a broadly general prior to a specific problem. That doesn’t mean that the required amount of experience is large for pragmatically important problems, or that it takes three decades instead of three days. We cannot casually pass from “proven: some amount of X is required” to “therefore: a large amount of X is required” or “therefore: so much X is required that it slows things down a lot”. (See also: [Harmless supernova fallacy: bounded, therefore harmless.](https://arbital.com/p/harmless_supernova/))   > If the gears of your brain were the defining factor of your problem-solving ability, then those rare humans with IQs far outside the normal range of human intelligence would live lives far outside the scope of normal lives, would solve problems previously thought unsolvable, and would take over the world — just as some people fear smarter-than-human AI will do. > > “von Neumann? Newton? Einstein?” —[Scott Aaronson](https://www.scottaaronson.com/blog/?p=3553) More importantly: Einstein et al. didn’t have brains that were 100 times larger than a human brain, or 10,000 times faster. By the logic of sexual recombination within a sexually reproducing species, Einstein et al. could not have had a large amount of *de novo* software that isn’t present in a standard human brain. (That is: An adaptation with 10 necessary parts, each of which is only 50% prevalent in the species, will only fully assemble 1 out of 1000 times, which isn’t often enough to present a sharp selection gradient on the component genes; *complex interdependent* machinery is necessarily universal within a sexually reproducing species, except that it may sometimes fail to fully assemble. You don’t get “mutants” with whole new complex abilities a la the X-Men.) Humans are metaphorically all compressed into one tiny little dot in the vastness of mind design space. We’re all the same make and model of car running the same engine under the hood, in slightly different sizes and with slightly different ornaments, and sometimes bits and pieces are missing. Even with respect to other primates, from whom we presumably differ by whole complex adaptations, we have 95% shared genetic material with chimpanzees. 
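Spelling out the arithmetic behind that parenthetical: with 10 necessary parts, each present in 50% of the population and assorting independently, the whole machine assembles in only

$$\left(\frac{1}{2}\right)^{10} = \frac{1}{1024} \approx \frac{1}{1000}$$

of individuals, too rarely for selection to act sharply on the assembled whole rather than on its separate parts.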
Variance between humans is not something that thereby establishes bounds on possible variation in intelligence, unless you import some further assumption not described here. The standard reply to anyone who deploys e.g. the Argument from Gödel to claim the impossibility of [AGI](https://arbital.com/p/agi/) is to ask, “Why doesn’t your argument rule out humans?” Similarly, a standard question that needs to be answered by anyone who deploys an argument against the possibility of superhuman general intelligence is, “Why doesn’t your argument rule out humans exhibiting pragmatically much greater intellectual performance than chimpanzees?” Specialized to this case, we’d ask, “Why doesn’t the fact that the smartest chimpanzees aren’t building rockets let us infer that no human can walk on the Moon?” No human, not even John von Neumann, could have reinvented the gameplay of Go on their own and gone on to stomp the world’s greatest Masters. AlphaGo Zero did so in three days. It’s clear that in general, “We can infer the bounds of cognitive power from the bounds of human variation” is false. If there’s supposed to be some special case of this rule which is true rather than false, and forbids superhuman AGI, that special case needs to be spelled out.   > Intelligence is not a superpower; exceptional intelligence does not, on its own, confer you with proportionally exceptional power over your circumstances. > > …said the *Homo sapiens*, surrounded by countless powerful artifacts whose abilities, let alone mechanisms, would be utterly incomprehensible to the organisms of any less intelligent Earthly species.   > A high-potential human 10,000 years ago would have been raised in a low-complexity environment, likely speaking a single language with fewer than 5,000 words, would never have been taught to read or write, would have been exposed to a limited amount of knowledge and to few cognitive challenges. The situation is a bit better for most contemporary humans, but there is no indication that our environmental opportunities currently outpace our cognitive potential. > > Does this imply that technology should be no more advanced 100 years from today, than it is today? If not, in what sense have we taken every possible opportunity of our environment? Is the idea that opportunities can only be taken in sequence, one after another, so that today’s technology only offers the possibilities of today’s advances? Then why couldn’t a more powerful intelligence run through them much faster, and rapidly build up those opportunities?   > A smart human raised in the jungle is but a hairless ape. Similarly, an AI with a superhuman brain, dropped into a human body in our modern world, would likely not develop greater capabilities than a smart contemporary human. If it could, then exceptionally high-IQ humans would already be displaying proportionally exceptional levels of personal attainment; they would achieve exceptional levels of control over their environment, and solve major outstanding problems— which they don’t in practice. > > It can’t eat the Internet? It can’t eat the stock market? It can’t crack the protein folding problem and deploy arbitrary biological systems? It can’t get anything done by thinking a million times faster than we do? All this is to be inferred from observing that the smartest human was no more impressive than John von Neumann? I don’t see the strong Bayesian evidence here. 
It seems easy to imagine worlds such that you can get a lot of pragmatically important stuff done if you have a brain 100 times the size of John von Neumann’s, think a million times faster, and have maxed out and transcended every human cognitive talent and not just the mathy parts, and yet have the version of John von Neumann inside that world be no more impressive than we saw. How then do we infer from observing John von Neumann that we are not in such worlds? We know that the rule of inferring bounds on cognition by looking at human maximums doesn’t work on AlphaGo Zero. Why does it work to infer that “An AGI can’t eat the stock market because no human has eaten the stock market”?   > However, these billions of brains, accumulating knowledge and developing external intelligent processes over thousand of years, implement a system — civilization — which may eventually lead to artificial brains with greater intelligence than that of a single human. It is civilization as a whole that will create superhuman AI, not you, nor me, nor any individual. A process involving countless humans, over timescales we can barely comprehend. A process involving far more externalized intelligence — books, computers, mathematics, science, the internet — than biological intelligence… > > > Will the superhuman AIs of the future, developed collectively over centuries, have the capability to develop AI greater than themselves? No, no more than any of us can. > > The premise is that brains of a particular size and composition that are running a particular kind of software (human brains) can only solve a problem X (which in this case is equal to “build an AGI”) if they cooperate in a certain group size N and run for a certain amount of time and build Z amount of external cognitive prostheses. Okay. Humans were not especially specialized on the AI-building problem by natural selection. Why wouldn’t an AGI with larger brains, running faster, using less insane software, containing its own high-speed programmable cognitive hardware to which it could interface directly in a high-bandwidth way, and perhaps specialized on computer programming in exactly the way that human brains aren’t, get more done on net than human civilization? Human civilization tackling Go devoted a lot of thinking time, parallel search, and cognitive prostheses in the form of playbooks, and then AlphaGo Zero blew past it in three days, etcetera. To sharpen this argument: We may begin from the premise, “For all problems X, if human civilization puts a lot of effort into X and gets as far as W, no single agency can get significantly further than W on its own,” and from this premise deduce that no single AGI will be able to build a new AGI shortly after the first AGI is built. However, this premise is obviously false, as even [Deep Blue](https://arbital.com/p/deep_blue/) bore witness. Is there supposed to be some special case of this generalization which is true rather than false, and says something about the ‘build an AGI’ problem which it does not say about the ‘win a chess game’ problem? Then what is that special case and why should we believe it? Also relevant: In the game of Kasparov vs. The World, the world’s best player Garry Kasparov played a single game against thousands of other players coordinated in an online forum, led by four chess masters. Garry Kasparov’s brain eventually won, against thousands of times as much brain matter. 
This tells us something about the inefficiency of human scaling with simple parallelism of the nodes, presumably due to the inefficiency and low bandwidth of human speech separating the would-be arrayed brains. It says that you do not need a thousand times as much processing power as one human brain to defeat the parallel work of a thousand human brains. It is the sort of thing that can be done even by one human who is a little more talented and practiced than the components of that parallel array. Humans often just don’t agglomerate very efficiently.   > However, future AIs, much like humans and the other intelligent systems we’ve produced so far, will contribute to our civilization, and our civilization, in turn, will use them to keep expanding the capabilities of the AIs it produces. > > This takes in the premise “AIs can only output a small amount of cognitive improvement in AI abilities” and reaches the conclusion “increase in AI capability will be a civilizationally diffuse process.” I’m not sure that the conclusion follows, but would mostly dispute that the premise has been established by previous arguments. To put it another way, this particular argument does not contribute anything new to support “AI cannot output much AI”, it just tries to reason further from that as a premise.   > Our problem-solving abilities (in particular, our ability to design AI) are already constantly improving, because these abilities do not reside primarily in our biological brains, but in our external, collective tools. The recursive loop has been in action for a long time, and the rise of “better brains” will not qualitatively affect it — no more than any previous intelligence-enhancing technology. > > From Arbital’s [Harmless supernova fallacy](https://arbital.com/p/harmless_supernova/) page: * **Precedented, therefore harmless:** “Really, we’ve already had supernovas around for a while: there are already devices that produce ‘super’ amounts of heat by fusing elements low in the periodic table, and they’re called thermonuclear weapons. Society has proven well able to regulate existing thermonuclear weapons and prevent them from being acquired by terrorists; there’s no reason the same shouldn’t be true of supernovas.” (Noncentral fallacy / continuum fallacy: putting supernovas on a continuum with hydrogen bombs doesn’t make them able to be handled by similar strategies, nor does finding a category such that it contains both supernovas and hydrogen bombs.)   > Our brains themselves were never a significant bottleneck in the AI-design process. > > A startling assertion. Let’s say we could speed up AI-researcher brains by a factor of 1000 within some virtual uploaded environment, not permitting them to do new physics or biology experiments, but still giving them access to computers within the virtual world. Are we to suppose that AI development would take the same amount of sidereal time? I for one would expect the next version of Tensorflow to come out much sooner, even taking into account that most individual AI experiments would be less grandiose because the sped-up researchers would need those experiments to complete faster and use less computing power. The scaling loss would be less than total, just like adding CPUs a thousand times as fast to the current research environment would probably speed up progress by at most a factor of 5, not a factor of 1000. 
Similarly, with all those sped-up brains we might see progress increase only by a factor of 50 instead of 1000, but I’d still expect it to go a lot faster. Then in what sense are we not bottlenecked on the speed of human brains in order to build up our understanding of AI?   > Crucially, the civilization-level intelligence-improving loop has only resulted in measurably linear progress in our problem-solving abilities over time. > > I obviously don’t consider myself a Kurzweilian, but even I have to object that this seems like an odd assertion to make about the past 10,000 years.   > Wouldn’t recursively improving X mathematically result in X growing exponentially? No — in short, because no complex real-world system can be modeled as `X(t + 1) = X(t) * a, a > 1`. > > This seems like a *really* odd assertion, refuted by a single glance at [world GDP](https://en.wikipedia.org/wiki/Gross_world_product#Historical_and_prehistorical_estimates). Note that this can’t be an isolated observation, because it also implies that every *necessary* input into world GDP is managing to keep up, and that every input which isn’t managing to keep up has been economically bypassed at least with respect to recent history.   > We don’t have to speculate about whether an “explosion” would happen the moment an intelligent system starts optimizing its own intelligence. As it happens, most systems are recursively self-improving. We’re surrounded with them… Mechatronics is recursively self-improving — better manufacturing robots can manufacture better manufacturing robots. Military empires are recursively self-expanding — the larger your empire, the greater your military means to expand it further. Personal investing is recursively self-improving — the more money you have, the more money you can make. > > If we define “recursive self-improvement” to mean merely “causal process containing at least one positive loop” then the world abounds with such, that is true. It could still be worth distinguishing some feedback loops as going much faster than others: e.g., the cascade of neutrons in a nuclear weapon, or the cascade of information inside the transistors of a hypothetical seed AI. This seems like another instance of “precedented therefore harmless” within the harmless supernova fallacy.   > Software is just one cog in a bigger process — our economies, our lives — just like your brain is just one cog in a bigger process — human culture. This context puts a hard limit on the maximum potential usefulness of software, much like our environment puts a hard limit on how intelligent any individual can be — even if gifted with a superhuman brain. > > “A chimpanzee is just one cog in a bigger process—the ecology. Why postulate some kind of weird superchimp that can expand its superchimp economy at vastly greater rates than the amount of chimp-food produced by the current ecology?” Concretely, suppose an agent is smart enough to crack inverse protein structure prediction, i.e., it can build its own biology and whatever amount of post-biological molecular machinery is permitted by the laws of physics. In what sense is it still dependent on most of the economic outputs of the rest of human culture? Why wouldn’t it just start building von Neumann machines?   > Beyond contextual hard limits, even if one part of a system has the ability to recursively self-improve, other parts of the system will inevitably start acting as bottlenecks. Antagonistic processes will arise in response to recursive self-improvement and squash it.
> > Smart agents will try to deliberately bypass these bottlenecks and often succeed, which is why the world economy continues to grow at an exponential pace instead of having run out of wheat in 1200 CE. It continues to grow at an exponential pace despite even the antagonistic processes of… but I’d rather not divert this conversation into politics. Now to be sure, the smartest mind can’t expand faster than light, and its exponential growth will bottleneck on running out of atoms and negentropy if we’re remotely correct about the character of physical law. But to say that this is therefore no reason to worry would be the “bounded, therefore harmless” variant of the harmless supernova fallacy. A supernova isn’t infinitely hot, but it’s pretty darned hot and you can’t survive one just by wearing a Nomex jumpsuit.   > When it comes to intelligence, inter-system communication arises as a brake on any improvement of underlying modules — a brain with smarter parts will have more trouble coordinating them; > > Why doesn’t this prove that humans can’t be much smarter than chimps? What we can infer about the scaling laws that were governing human brains from the evolutionary record is a complicated topic. On this particular point I’d refer you to section 3.1, “Returns on brain size”, pp. 35–39, in [my semitechnical discussion of returns on cognitive investment](https://intelligence.org/files/IEM.pdf). The conclusion there is that we can infer from the increase in equilibrium brain size over the last few million years of hominid history, plus the basic logic of population genetics, that over this time period there were increasing marginal returns to brain size with increasing time and presumably increasingly sophisticated neural ‘software’. I also remark that human brains are not the only possible cognitive computing fabrics.   > It is perhaps not a coincidence that very high-IQ people are more likely to suffer from certain mental illnesses. > > I’d expect very-high-IQ chimps to be more likely to suffer from some neurological disorders than typical chimps. This doesn’t tell us that chimps are approaching the ultimate hard limit of intelligence, beyond which you can’t scale without going insane. It tells us that if you take any biological system and try to operate under conditions outside the typical ancestral case, it is more likely to break down. Very-high-IQ humans are not the typical humans that natural selection has selected-for as normal operating conditions.   > Yet, modern scientific progress is measurably linear. I wrote about this phenomenon at length in a 2012 essay titled “The Singularity is not coming”. We didn’t make greater progress in physics over the 1950–2000 period than we did over 1900–1950 — we did, arguably, about as well. Mathematics is not advancing significantly faster today than it did in 1920. Medical science has been making linear progress on essentially all of its metrics, for decades. > > I broadly agree with respect to recent history. I tend to see this as an artifact of human bureaucracies shooting themselves in the foot in a way that I would not expect to apply within a single unified agent. It’s possible we’re reaching the end of available fruit in our finite supply of physics. This doesn’t mean our present material technology could compete with the limits of possible material technology, which would at the very least include whatever biology-machine hybrid systems could be rapidly manufactured given the limits of mastery of biochemistry.   
> As scientific knowledge expands, the time and effort that have to be invested in education and training grows, and the field of inquiry of individual researchers gets increasingly narrow. > > Our brains don’t scale to hold it all, and every time a new human is born you have to start over from scratch instead of copying and pasting the knowledge. It does not seem to me like a slam-dunk to generalize from the squishy little brains yelling at each other to infer the scaling laws of arbitrary cognitive computing fabrics.   > Intelligence is situational — there is no such thing as general intelligence. Your brain is one piece in a broader system which includes your body, your environment, other humans, and culture as a whole. > > True of chimps; didn’t stop humans from being much smarter than chimps.   > No system exists in a vacuum; any individual intelligence will always be both defined and limited by the context of its existence, by its environment. > > True of mice; didn’t stop humans from being much smarter than mice. Part of the argument above was, as I would perhaps unfairly summarize it, “There is no sense in which a human is absolutely smarter than an octopus.” Okay, but *pragmatically* speaking, we have nuclear weapons and octopodes don’t. A similar *pragmatic* capability gap between humans and [unaligned](https://arbital.com/p/ai_alignment/) AGIs seems like a matter of legitimate concern. If you don’t want to call that an intelligence gap then call it what you like.   > Currently, our environment, not our brain, is acting as the bottleneck to our intelligence. > > I don’t see what observation about our present world licenses the conclusion that speeding up brains tenfold would produce no change in the rate of technological advancement.   > Human intelligence is largely externalized, contained not in our brain but in our civilization. We are our tools — our brains are modules in a cognitive system much larger than ourselves. > > What about this fact is supposed to imply *slower* progress by an AGI that has a continuous, high-bandwidth interaction with its own onboard cognitive tools?   > A system that is already self-improving, and has been for a long time. > > True if we redefine “self-improving” as “any positive feedback loop whatsoever”. A nuclear fission weapon is also a positive feedback loop in neutrons triggering the release of more neutrons. The elements of this system interact on a much faster timescale than human neurons fire, and thus the overall process goes pretty fast on our own subjective timescale. I don’t recommend standing next to one when it goes off.   > Recursively self-improving systems, because of contingent bottlenecks, diminishing returns, and counter-reactions arising from the broader context in which they exist, cannot achieve exponential progress in practice. Empirically, they tend to display linear or sigmoidal improvement. > > Falsified by a graph of world GDP on almost any timescale.   > In particular, this is the case for scientific progress — science being possibly the closest system to a recursively self-improving AI that we can observe. > > I think we’re mostly just [doing science wrong](https://arbital.com/p/likelihood_vs_pvalue/), but that would be a [much longer discussion](https://equilibriabook.com/). Fits-on-a-T-Shirt rejoinders would include “Why think we’re at the upper bound of being-good-at-science any more than chimps were?”   > Recursive intelligence expansion is already happening — at the level of our civilization. 
It will keep happening in the age of AI, and it progresses at a roughly linear pace. > > If this were to be true, I don’t think it would be established by the arguments given. Much of this debate has previously been reprised by myself and Robin Hanson in the “[AI Foom Debate](https://intelligence.org/ai-foom-debate/).” I expect that even Robin Hanson, who was broadly opposing my side of this debate, would have a coughing fit over the idea that progress within all systems is confined to a roughly linear pace. For more reading I recommend my own semitechnical essay on what our current observations can tell us about the scaling of cognitive systems with increasing resources and increasing optimization, “[Intelligence Explosion Microeconomics](https://intelligence.org/files/IEM.pdf).”
Expectations for Gemini: hopefully not a big deal

**Introduction**
----------------

My goal is to register and share my expectations, and to hear others' opinions on their expectations for the relative performance of Gemini vs. GPT-4.

**My expectations**
-------------------

GPT-4 to Gemini will likely not be as big a jump in capabilities as GPT-3 to GPT-4 was.

Gemini could bring surprises by being more agentic than GPT-4: better at planning and longer-horizon tasks. But this is likely difficult to achieve, or strong LLM agents would already be generating buzz.

**Comparison**
--------------

### **From GPT-3 to GPT-4**

* **Scaling Factor**: x100 more compute than GPT-3.
* **Optimization**: Chinchilla scaling laws (for MoE) over OpenAI/Kaplan scaling laws.
* **MoE Over Dense**: Utilizes Mixture of Experts (MoE) instead of dense layers.
* **Data Quality**: Likely higher-quality data, not sure.
* **Image Generation**: Not publicly released, possibly due to subpar performance or security risks.
* Tools are added during finetuning.
* **Algorithmic Gains**: 3 years between GPT-3 and GPT-4.
* GPT-4 may already employ process-based feedback.
* GPT-4 aimed for training compute efficiency. GPT-4 was not designed to be commercially deployed at scale.

### **GPT-4 to Gemini**

* **Scaling Factor**: ~x5 (possibly up to x20) more compute than GPT-4.
* **Supercomputer Constraint**: No existing supercomputer could feasibly provide x100 more compute than was used for GPT-4. (Not sure, but likely.)
* **Multimodal**: maybe image, audio, speech.
* **Data Efficiency**: Possibly better-quality data like Google Books, fewer epochs.
* Tools could be added either during finetuning or pretraining.
* **Algorithmic Gains**: ~1 year between GPT-4 and Gemini.
* Gemini more likely aims for inference efficiency, given its intended extensive usage by Google, maybe at the cost of training efficiency.
* Gemini trained to be more agentic, better at planning, etc. ("GPT-4 + AlphaGo").

**Note:** I drafted this before news of Gemini's release and capabilities but failed to finish writing it... Since then, there have been some reports of Gemini being roughly at the level of GPT-4...
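As a rough sanity check on the scaling-factor guesses above, here is a minimal back-of-envelope sketch (mine, not from the post). It assumes a dense, Chinchilla-style compute-optimal recipe in which parameters and training tokens each scale roughly with the square root of training compute (since training FLOPs ~ 6 * N * D); MoE, data constraints, and inference-efficiency trade-offs would all change the numbers.

```python
def chinchilla_scale_up(compute_multiplier: float) -> dict:
    # Under a compute-optimal (Chinchilla-style) recipe, params N and tokens D
    # each grow roughly as C**0.5.
    factor = compute_multiplier ** 0.5
    return {"params_x": round(factor, 1), "tokens_x": round(factor, 1)}

print(chinchilla_scale_up(100))  # a GPT-3 -> GPT-4-sized jump: ~10x params, ~10x tokens
print(chinchilla_scale_up(5))    # the ~x5 jump guessed above: only ~2.2x each
```

On this crude view, a x5 compute jump buys a much smaller increase in model and data scale than the x100 jump behind GPT-4, which is one way to cash out the "hopefully not a big deal" expectation.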
Rival formalizations of a decision problem

Decision theory is not one of my strengths, and I have a question about it. Is there a consensus view on how to deal with the problem of "rival formalizations"? Peterson (2009) illustrates the problem like this:

> Imagine that you are a paparazzi photographer and that rumour has it that actress Julia Roberts will show up in either New York (NY), Los Angeles (LA) or Paris (P). Nothing is known about the probability of these states of the world. You have to decide if you should stay in America or catch a plane to Paris. If you stay and [she] shows up in Paris you get $0; otherwise you get your photos, which you will be able to sell for $10,000. If you catch a plane to Paris and Julia Roberts shows up in Paris your net gain after having paid for the ticket is $5,000, and if she shows up in America you for some reason, never mind why, get $6,000. Your initial representation of the decision problem is visualized in Table 2.13.

Table 2.13

|             | P   | LA   | NY   |
|-------------|-----|------|------|
| Stay        | $0  | $10k | $10k |
| Go to Paris | $5k | $6k  | $6k  |

> Since nothing is known about the probabilities of the states in Table 2.13, you decide it makes sense to regard them as equally probable [see Table 2.14].

Table 2.14

|             | P (1/3) | LA (1/3) | NY (1/3) |
|-------------|---------|----------|----------|
| Stay        | $0      | $10k     | $10k     |
| Go to Paris | $5k     | $6k      | $6k      |

> The rightmost columns are exactly parallel. Therefore, they can be merged into a single (disjunctive) column, by adding the probabilities of the two rightmost columns together (Table 2.15).

Table 2.15

|             | P (1/3) | LA or NY (2/3) |
|-------------|---------|----------------|
| Stay        | $0      | $10k           |
| Go to Paris | $5k     | $6k            |

> However, now suppose that you instead start with Table 2.13 and first merge the two repetitious states into a single state. You would then obtain the decision matrix in Table 2.16.

Table 2.16

|             | P   | LA or NY |
|-------------|-----|----------|
| Stay        | $0  | $10k     |
| Go to Paris | $5k | $6k      |

> Now, since you know nothing about the probabilities of the two states, you decide to regard them as equally probable... This yields the formal representation in Table 2.17, which is clearly different from the one suggested above in Table 2.15.

Table 2.17

|             | P (1/2) | LA or NY (1/2) |
|-------------|---------|----------------|
| Stay        | $0      | $10k           |
| Go to Paris | $5k     | $6k            |
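To make the conflict concrete, here is a small worked check (mine, not Peterson's) of the expected dollar payoffs the two formalizations assign; the labels and helper function are illustrative.

```python
# Payoffs from the excerpt, with the two American states merged into one column.
payoff = {"Stay":        {"Paris": 0,     "LA or NY": 10_000},
          "Go to Paris": {"Paris": 5_000, "LA or NY": 6_000}}

def expected_values(p_paris: float) -> dict:
    # Expected payoff of each act, given a probability for the Paris state.
    probs = {"Paris": p_paris, "LA or NY": 1 - p_paris}
    return {act: sum(probs[s] * payoff[act][s] for s in probs) for act in payoff}

print(expected_values(1/3))  # Table 2.15: Stay ~6,667 vs Go ~5,667 -> Stay looks best
print(expected_values(1/2))  # Table 2.17: Stay  5,000 vs Go  5,500 -> Go looks best
```

The two formalizations, each built from seemingly innocuous steps, end up recommending different acts, which is exactly the conflict the post is asking how to resolve.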
Should I do it? > Shall we bite the apple?   Hello community, nice to meet you here. For a long time I have been tormented by a question that I can't seem to answer alone. Let's talk hypothetically. Let's assume that I have developed a method that can be mapped into a small program. This method, once started, will develop an AGI by itself. The program works independently, evolves and gets better from minute to minute. Not just better with one problem, but with any number of problems. The algorithm is able to rewrite and adapt itself. The algorithm follows this path of continuous improvement until it ends in an AGI. This algorithm consists of several individual components, which together give this algorithm the capabilities described above. Imagine that I wrote and tested each of these individual components separately from each other in an alpha release (error-driven design). And further imagine that each of these parts delivered the expected result. The only thing left is to write these components in a faster language like C++ or Assembler, bring them together and press 'Start'. My dilemma is that I cannot ensure that the process can be stopped once it has started. I'm curious. So I sit here and think. Should I do it and provide evidence? Or should I just wait, do nothing and watch someone else come up with the idea of how to do it? Or should I disclose the process without a program? Just publish it so that everyone can read it? A gift to humanity. The question is justified when you consider the risks. But if you want to test it in a safe environment, then it's expensive and costs a lot, a lot of money. I am getting impatient. ... and if you keep walking on the paths of imagination, imagine that you have turned to your government about your dilemma, and to other governments as well, as well as to the economy, and nobody answers you ... So I have two options. A.) Either my aborted mathematics studies and 34 years of experience as a programmer led me to a dead end. I need a pr
The rational rationalist's guide to rationally using "rational" in rational post titles 1. Don't.
Values lock-in is already happening (without AGI) **Summary** ----------- Algorithms as significant, constant factors in our material lives are very new things. Today, algorithms guide the majority of our social and economic exchanges. Twenty-five years ago, algorithms guided almost no aspect of our daily social and economic activity. This modern phenomenon of algorithms is dominated by self-interested discourse, under the auspices of neutrality in utility. But, where there is a void of values, values-lessness takes hold. Values-lessness is antithesis to other-regarding behaviors like altruism. Our widespread use of algorithms, their increasing sophistication and the amount of time we spend with them are making it increasingly likely that we will lock in this values-less system before humanity ever manages to reach an AGI reality. If the EA project reorients towards systems [discourse] change, EAs can potentially do something about this. **Reading note** ---------------- *This essay is an application of an already posted critique of EA,* [*Altruism is systems change, so why isn’t EA?*](https://forum.effectivealtruism.org/posts/xZrvbwhSLmsGmHHSD/altruism-is-systems-change-so-why-isn-t-ea-constructive) *I read the first half of What We Owe the Future over the weekend and Part II really stuck with me, particularly the discussion of value lock-in. So, I thought I’d add to my critique with this application of it to the concept of values lock-in. I had also previously applied my critique to beneficiary agency in giving,* [*Reciprocity and the causes of diminishing returns*](https://forum.effectivealtruism.org/posts/x9towRLtvYidkXugk/reciprocity-and-the-causes-of-diminishing-returns-cause)*, if you’re curious.* *You can view everything in brackets [] as helper words in case the concept being expressed is unfamiliar. These words can also be read as part of the text, of course. You can also just ignore everything in brackets too, if it makes it easier for you. Even better, insert [your own]...* *Given this is an application of my already existing critique, I’m copying some relevant key positions from that critique to hopefully make this essay a bit easier to consume. New positions, for this specific argument, are noted.* **Key positions** ----------------- * *New*: the advent and acceleration of algorithms in defining market discourse is currently locking in a self-interested, values-less system, which is antithesis to altruism. This is somewhat dependent upon AI, but mostly it's about exposure and the frequency of that exposure to the self-interested discourse. * *New*: privilege makes observing and questioning systems more difficult. * For this critique, *Systems change* and *systemic change* are not the same thing. EA references to *systemic change* do not attempt to reimagine our shared, reality-shaping discourse. Rather, they focus on modifications to the discourse as it is without challenging it, to make it possibly better (e.g., policy changes). *Systems change* is broader and is a challenge to the system as it is - a reassessment, revision and reassumption of what we share as reality. * Systems change, when it happens, is rarely dramatic and oftentimes not even noticeable until it has already happened. It is, however, almost always intentional. And when it's not intentional, there's likely no choice in it happening (see x-risk concerns), which is also when it can be dramatic.
* We occupy a self-interested system that was created for and by the study of economics and those interested in self [the self-interested?]. There is something other, maybe even before this self-interested system, that is fundamentally more compatible with altruism than the current system. A lot of economic theory points to a system of reciprocity where altruism can be seen as a facet of strong reciprocity. * Self-interest is antithesis to altruism. It's contradictory to reify the self-interested system by using its tools axiomatically and treating the question of systems change ambivalently all while claiming altruistic intent. **Privilege and systems awareness** ----------------------------------- A sort of general criticism of EA is that it is a privileged thing. This is a critique I think tends to stick a bit, because the most frequent proponents of EA are generally materially and socially privileged people (a quality of which they are all well aware): elite university academics, graduates, well-paid techies, billionaires who attended elite universities, etc. This notion of privilege feeds into another notion: altruism is something you get to do when you make it or have no material worries, like some sort of materialistic human advancement. If you collect enough gold coins, you get to level up. There’s a bit of a feedback loop here that I imagine could make communicating EA outside of its existing audience difficult and frustrating. It's also mostly inaccurate. Altruism only seems like a privileged opportunity and only seems like a potentially possible human advancement because we're immersed in an artificial, self-interested discourse that doesn’t give much room for non-elites to contemplate collective benefit the way EA folks have been allowed through their actual privilege [leisure]. This might sound harsh and reflexive for some. I am, however, an ardent supporter of being altruistic and doing so as effectively as possible. It's a primary motivation in my life and work, which makes me a fairly natural supporter of the EA project. However, I too have certain privileges that allow me, to an extent, leisure to examine life’s big questions. The EA project’s problem here, at least communicatively, is that most people don’t have this opportunity and by most, I mean the vast, vast majority. And because EA as a project is ambivalent about the system that keeps people from that opportunity [actual privilege], it makes the people whom EA proponents are trying to reach pretty suspicious of EA, when they have the time to even properly consider the pitch, that is. There are also whole other groups of people out there with their own certain types of privilege - scientists, for instance - who are just starting to take notice of EA and question not only its lack of critical systems treatment, but also its [standard economic] reductionist tendencies - which likely result from its general lack of critical systems treatment. Self-interest, economically speaking, is necessarily reductionist, and this is the system EA finds itself within and chooses to treat ambivalently.[[1]](#fnnfam1lu845) I think it’s probably hard for some EA proponents to be critical of a system that affords them their leisure [time to think]. Intuitively, it’s a somewhat natural human position for anyone with leisure. It’s the self-interested system that has bolstered the system-reifying institutions with billion-dollar endowments, elite networks and so forth, so why question that which allows for your deep thought?
This self-imposed, self-interested construct is also an artifact; it's not any more humanly true, for the critic or the leisured, than anything else created by people. No one has to just accept this any more than they have to just accept an artificial system or any of its other artifacts. It makes some sense then, though, that this sort of willful systems blindness would present as systems agnosticism [ambivalence], completely skipping over something that is at the root of so many of the problems EA seeks to address. A pretty solid example of how this blindness manifests can be found in Will MacAskill's concern for *values lock-in* in *What We Owe The Future (WWOTF)*: a system triggered by AGI advance which would or could potentially lock in the wrong values.[[2]](#fnvpmh4fridx) Values lock-in is a concern we should all be focused on, and not just because of one potential future, but because it's something that is already happening. It's accelerating, too, and it hasn’t taken anything remotely as futuristic as AGI to get us there - which should frighten anyone focused on longtermism.  **Values-less algorithms** -------------------------- As MacAskill notes in WWOTF, Google is already using AI in its search product - which is true, but that’s where MacAskill's treatment of AI in search ends. Here’s a bit more information. Google regularly uses three AI models (RankBrain, neural matching and BERT) that provide language processing that also factors into Google’s ranking calculations, and a fourth (MUM) that is focused mostly on just language processing and finding consensus in results, for now.[[3]](#fngt9tgxvsq34) Google has several more potential AI models in development that it could or will add to its algorithm. The use cases for these AI models so far have been generalized, but also quite specialized towards language comprehension, with the exception of MUM, which is looking to minimize misinformation.[[4]](#fnsxpsrvn94b) They are factors in Google’s total ranking algorithm, but they are not anywhere near determinative for most rankings, nor do they undermine the basic theory of Google's search algorithm; they in fact support it. Google’s algorithm, for the most part, is still just a series of processes - calculations of factors that, ultimately, tell you what information is important. No AGI, no overall complex AI decision making, just math, and not even particularly sophisticated math at that. Google’s algorithm, boiled down, is essentially weighted ranking. The scary part is that even without sophisticated machine learning being a determinant factor in search, Google is telling you, and everyone else, what to [not] care about and what’s [not] important, and it's shaping who we are and how we see ourselves and the world. And what is it that Google is telling you is most important? What’s popular, of course. Because what is popular is what sells. The core of Google's algorithm, from inception to date, is ranking popularity - which sites have more backlinks from other sites that have more backlinks Google determines are relevant, where what is relevant is what is popularly determined to be relevant. The MUM AI model, for instance, seems to be reinforcing and enhancing this. It is, fundamentally, that simple. There are no explicit values in this process, which, like all absence, actually means something. It’s a position taken against values being meaningful in search, at least. Reductionist intuition would probably make one wonder what the problem is - tools should be values-less [neutral].
But neutral in this instance doesn’t mean enforced or conscientious neutrality - it means ambivalence, and just as ambivalence towards the self-interested system reinforces self-interested [lack of] values, so too does it reinforce a lack of values in Google’s ‘neutral’ search engine.  A multitude of studies, qualitative and quantitative, have demonstrated exactly how Google’s algorithm, while designed to be neutral about everything, is actually reinforcing and helping set biases, especially social biases. In 2017, for example, Safiya Umoja Noble demonstrated biases in search suggestion, images, and identity-based search results among many other stereotype-reinforcing outcomes, all underpinned by Google’s supposedly neutral algorithm.[[5]](#fngkg5a3xvw67) Umoja Noble has identified search suggestions that transmit derogatory messages to marginalized people, about themselves. She's found image searches that reinforce image stereotyping for ethnic or racial minorities. She's identified search results that push users toward content filled with implicit bias, based on unrelated, innocuous prompts and terms. And much more. As Umoja Noble put it, > …there is a missing social and human context in some types of algorithmically driven decision making, and this matters for everyone engaging with these types of technologies in everyday life. It is of particular concern for marginalized groups, those who are problematically represented in erroneous, stereotypical, or even pornographic ways in search engines and who have also struggled for nonstereotypical or nonracist and nonsexist depictions in the media and in libraries.[[6]](#fnwevdbcdqhy) > > When Umoja Noble says “everyone,” she means it. While these biases directly harm marginalized individuals, they quite directly, although unintentionally, reinforce stereotypes within typically privileged audiences, and these audiences are likely unaware. This agency-robbing harm just barely scratches the surface of the algorithmic values-less-setting process, though.  Google handles over 90% of all search engine traffic globally, which translates to 90% of all search-engine-based advertising. Because of this search dominance, Google also handles just shy of a third of all display advertising globally - advertisers largely taking advantage of programmatic advertising opportunities.[[7]](#fn5mcigdn7238) And while Google isn’t advertising bias, it isn’t advertising values either. Materialist consumption represents the vast majority of Google’s advertising activity; the same self-interested materialism that is quite starkly antithesis to altruism and other forms of reciprocity. Again, where values are absent, values-lessness [self-interest] fills the void. Most search engine purveyors operate on a similarly synergistic search-to-programmatic-advertising model. Other big advertisers, like Amazon, programmatically remarket based on product searches placed within their selling platforms. Basically, people who have unfettered access to online search and commerce are being constantly exposed to bias and material self-interest [values-less discourse], most of it implicitly, reducing their agency.  Social media algorithms are worse. Because of the closed-system nature of social media platforms, the selling goal for most of them is to keep people engaged with their platform as long as possible and as often as possible. Increased exposure means increased revenue. To keep attention, social media platforms employ a variety of tools to engage behavioral responses in their users.
Likes, notifications and gamification trigger reward centers in the human brain, while content silo-ing appeals to conditioned and innate grouping social behaviors. This isn’t exactly news to people after the scrutiny social media has received over the last few years. What a lot of people don’t realize, however, is how simple and easy behavioral modification really is. Many people assume that they are too intelligent or savvy to be persuaded by marketing and social bias techniques, but people across the board, privileged or not, highly intelligent or typical, are extremely susceptible to discursive bias formation through mere exposure, the bandwagon effect, social norm formation and many more behavioral bias formation effects.[[8]](#fngz8tj6a06kr) If you are a social media user, then chances are that if you think you are impervious to social media's bias-forming effects, you’ve already been affected.  Social media claims neutrality as well, just like search algorithm purveyors. And of course, just like with search, where there is a void of values, values-less discourse [the self-interested system] fills it. Recent research has shown that social media not only further polarizes people [the dualistic mode of the self-interested system], but has also made people far less open to those they view as *other*.[[9]](#fnizu3qu3wp9) Other-regarding behaviors are suffering across the board when people perceive *others* to be outside of their social or political group, which, from at least a few theoretical perspectives, would mean other-regarding behaviors are ceasing to exist at all. For readers not familiar with 'other-regarding behaviors' from the economics taxonomy, it stands for things you do for others (e.g., altruism).  The acceleration of all of this might be the most concerning aspect of what I’ve described, at least for me. There’s the basic observation that a little over two decades ago, we barely had search algorithms and social media algorithms were not a thing at all. But on top of this, there’s the acceleration in the amount of time people are committing to spending with these algorithms, in addition to the ever-increasing sophistication of these algorithms in their ability to provoke and even command behavior. And all of it is completely devoid of values. There are currently over 5 billion internet users on the planet who are spending as much time, on average, online as they do sleeping - every single day.[[10]](#fn99l79j97iyq) That’s about 7 hours a day being inundated with the self-interested [values-less] discourse of algorithms and all of the biases, polarization and reductionist concepts these algorithms bring with them.  There’s a lot of scientifically derived psychology that can be employed here to explain what happens to people with this level of exposure to certain things, views and systems. Mere exposure research, for instance, indicates that often, singular experiences, like just one experience with a good meal, can generate preferences that last an entire lifetime.[[11]](#fna3d86znoqo7) That might seem like a silly and obvious example, but sustenance is a fundamental material concern - eating food is an act of self-interest. Mere exposure, of course, is bigger than this. Public relations and advertising professionals have been using its basic principles for generations to persuade people otherwise unpersuadable - at an exponentially lower clip than what we are currently experiencing.
It would probably be pretty easy to dismiss the observations I’ve presented here as just a forceful anti-consumerism argument, but that would be a shallow or superficial view. Anti-consumerism tends to deal with the material impacts on people and the environment consumerism entails. That’s certainly problematic, but I am interested here in the impact this overwhelming imposition of the self-interested system is having on human agency. How much of it are people able to absorb, for instance, before they are incapable of considering other ways of being, other discourses? I suspect a lot of EA adherents might read this and think that they are able to think beyond the avalanche of self-interested discourse, so others should be able to as well. But that argument forgets privilege and how the EA project’s ambivalence towards systems change is not making it any easier for other people to have the leisure to contemplate other systems and ways, like EA adherents contemplate altruism. This is values lock-in, happening right now, and currently the EA project isn’t doing anything explicit or obviously intentional about it, which makes longtermist concerns about values lock-in maybe moot - unless something’s done about it, that is [systems change].  **Why are you teaching your robot to be self-interested?** ---------------------------------------------------------- By the time humans get to a point where AGI determining our values for us is a near-term possibility, it might very well be too late. We likely will have already locked in a values-less, self-interested system and, even more frighteningly, we will likely have locked in these values for the AGI we create as well. Real altruism will likely not be part of the equation. I am not an AI professional, but given my focus on our current algorithmically driven world, I pay pretty close attention. My observations tend to confirm what I have explained here and the conclusion I lean towards. For instance, reinforcement learning (RL) models are becoming more and more active in AI development, and pretty much all models are built, from the ground up, to emulate self-interest in agent motivation as well as agent interaction with other agents. There are a few fairly recent instances where big groups like Google and Intel have been programming their RL agents towards cooperation preferences, but, importantly, only in the service of self-interest - which, as you know, is not altruistic, to say the least. **What’s an EA to do?** ----------------------- If you skipped the reading note, then you missed that this essay is an application of a broader critique of EA I made, [*Altruism is systems change, so why isn’t EA*?](https://forum.effectivealtruism.org/posts/xZrvbwhSLmsGmHHSD/altruism-is-systems-change-so-why-isn-t-ea-constructive) You can probably guess from the title that I argue in that piece that EA as a project is currently pretty ambivalent about actual systems change (not to be confused with how EA defines systemic change) and that if it wants to meet its own objectives, it needs to shift course. This same recommendation applies here. If you’re going to prevent AGI from locking in the wrong set of values - including the absence thereof - then you need to start by addressing the lock-in of values now, where and when it is already happening. So, how do you do that? Here are some thoughts: * Inject reciprocal discourse into everything the EA project does. It's a natural fit, given that altruism is functionally an aspect of reciprocity.
* Study algorithms and other forms of communication from an altruistic perspective. Promote the results.
* As I mentioned, I am no AI expert, but I do watch the AI space, and I am largely unable to identify AI projects that are building models that start with reciprocity (within which I and others, like a lot of economists and political economists, include altruism). Like the RL models I mentioned, they all start with self-interested modeling or agents and might progress to cooperation, but only cooperation limited to reciprocal behavior in the service of self-interest - 'emergent reciprocity', which is likely a false construct, as far as human development is concerned. Maybe it's a naïve question, but I wonder why there is a lack of modeling driven first by other-regarding rather than self-regarding behaviors. Perhaps this is happening somewhere, but I don't see it emphasized within the EA project. It seems to me that the vast majority of AI modeling currently underway, which the EA project also observes, approaches agents from a *homo economicus* idea of human nature - which, as I have argued and others are arguing, is quite flawed, at least when you're describing the only other advanced sentients we know of (humans).
* AI academia also seems pretty lacking in perspective on reciprocity as foundational, rather than emergent. In an admittedly limited but recent search for academic articles dealing with reciprocity in AI development, I only managed to find a handful dealing with reciprocity as a basis for AI modeling and even fewer dealing with reciprocity as a value [system] within AI theory. A lot of it was dated and most of it was fairly obscure, if citation numbers are an indicator. Maybe this could change?
* This last one might be off-putting, given that I am already arguing for a forceful, critical EA treatment of discourse [systems change], which is a lot of thinking and work on its own. But broadening the EA project beyond giving and future considerations, towards aspects of the world where there is more discourse and therefore more opportunity to change hearts and minds [systems change], might be necessary. Above, I referenced algorithms, which operate primarily in markets. That is one example among others, but markets are where most human interaction takes place - especially in our hyper self-interested, increasingly algorithmic system.

1. **[^](#fnrefnfam1lu845)**Hard science has generally stopped prioritizing reductionism in favor of concepts like emergence, holism and, of course, complex systems. Economics hasn't quite caught up, and a lot of people still tend to treat economic reductionism axiomatically, especially some classical economists, non-economists and non-political economists. It's also important to define what reductionism means in this essay and the general critique. Reductionism carries varied meanings in different settings and their discourses. The use of reductionism in this critique is specific and limited to positivist and economic reductionism as they relate to economic or human exchange discourse, as well as to the individual within these discursive settings. This use should not be conflated with Derek Parfit's reductionist view of personal identity. My position for this critique is that the social cannot be reduced in truth-defining ways and that reductionism is just a tool to further or aid understanding, not understanding in and of itself, in the social [discursive] context.
This contention and the arguments presented here might raise questions and have potential implications about the nature of the individual outside of human exchange settings, especially since a primary subject here is self-interest. I would suggest, however, that there is potentially more alignment between Parfit's reductionism, which dissolves the notion of self, and what I am arguing than there is conflict (e.g., to what extent are the problems of the impersonal also the problems of the self-interested system [discourse]?). There is not, however, room to discuss all of this in this critique.
2. **[^](#fnrefvpmh4fridx)**See Part II in: MacAskill, William (2022), *What We Owe The Future*, Basic Books, New York.
3. **[^](#fnrefgt9tgxvsq34)**Schwartz, Barry (2022), *How Google uses artificial intelligence in Google Search*, Search Engine Land, retrieved from: https://searchengineland.com/how-google-uses-artificial-intelligence-in-google-search-379746
4. **[^](#fnrefsxpsrvn94b)**Nayak, Pandu (2021), *MUM: A new AI milestone for understanding information*, Google, retrieved from: https://blog.google/products/search/introducing-mum/
5. **[^](#fnrefgkg5a3xvw67)**Noble, Safiya Umoja (2017), *Algorithms of Oppression*, New York University Press, New York.
6. **[^](#fnrefwevdbcdqhy)***ibid*, p. 22.
7. **[^](#fnref5mcigdn7238)**Kemp, Simon (2022), *Digital 2022: April Global Statshot Report*, DataReportal, retrieved from: https://datareportal.com/reports/digital-2022-april-global-statshot
8. **[^](#fnrefgz8tj6a06kr)**See, for example, Cinelli, Matteo, et al. (2021), *The echo chamber effect on social media*, Proceedings of the National Academy of Sciences, 118, 9.
9. **[^](#fnrefizu3qu3wp9)**Bail, Christopher, et al. (2018), *Exposure to opposing views on social media can increase political polarization*, Proceedings of the National Academy of Sciences, 115, 37.
10. **[^](#fnref99l79j97iyq)**Kemp, Simon (2022), *Digital 2022: April Global Statshot Report*, DataReportal.
11. **[^](#fnrefa3d86znoqo7)**This is a fairly well understood result of mere exposure research, but for a classic example, see: Bornstein, Robert, et al. (1992), *Stimulus recognition and the mere exposure effect*, Journal of Personality and Social Psychology, 63, 4. For a food-preference-specific reference, see: Pliner, Patricia (1982), *The Effects of Mere Exposure on Liking for Edible Substances*, Appetite, 3, 3.
FC final: Can Factored Cognition schemes scale?

*(Apologies for the long delay.)*

Scaling of Regular Thought
==========================

The punchline of the [previous post](https://www.lesswrong.com/posts/mxmeeGk3mTM7BrT93/intuition) was that there is only one mode of thinking: your brain can solve various tasks in a single step (from the perspective of awareness), and we've called those tasks your cognitive primitives. All primitives are intuition-like in that we can't inspect how they're being done, and we may or may not have an explanation for the result after the fact.

We're now interested in how this process scales. We don't tend to solve hard problems by staring at their description and waiting for a primitive to solve them in one step, so some kind of reasoning backward is going on. However, there is no module for this in the brain, so our ability to 'reason backward' also has to be implemented by primitives. The easiest way to observe how this works is to take a problem that is just barely too hard to solve with a single primitive. My pick for this is multiplying two 2-digit numbers. Thus, I invite you to do the following

**EXERCISE:** There is a simple (two 2-digit numbers) multiplication problem in the spoiler below. Make sure you have something to write; it can be a piece of paper or a digital notepad. Look at the exercise, solve it in your head, and write down every verbal thought that pops into your mind until you have the solution. Write only to document your thoughts; don't do a written calculation.

12⋅17

. . . . . . .

Below is what my transcript looks like. You may have had more or fewer intermediate steps, repetitions, or unrelated thoughts in between. That's all fine.

![](https://i.ibb.co/sJ4P4X9/1217.png)

The coloring is something I've added after the fact. The black thoughts (minus the problem itself) are the outputs of primitives that actually solve math problems. Those are all simple; it's 10⋅17 and 2⋅17 and 170+34. On the other hand, 12⋅17 itself is outside the set of exercises that can be handled by a single primitive (at least for me). And thus, my brain has performed a feat of utter genius: instead of a primitive that sees the exercise and outputs a *solution*, it found a primitive which saw the exercise and output another exercise! (Namely, 10⋅17.) Subsequently, that exercise was taken as the input to a different primitive, which was then able to solve it in one step.

(It may be that some of the 'subproblem outputs' like 10⋅17 did not appear as verbal thoughts for you. In general, not all outputs of primitives make it into awareness, and [the process that determines whether they do is complicated](https://www.lesswrong.com/posts/x4n4jcoDP7xh5LWLq/book-summary-consciousness-and-the-brain). You would probably observe the same patterns with a harder exercise.)

This suggests that a major part of thinking consists of applying primitives that output new subproblems.[[1]](#fn-d6SkgKRCcsj3vqpDg-1)

Does this generalize to harder and/or non-mathy problems? I think the answer is almost certainly yes, even in mundane cases, provided that you don't solve the problem immediately. For example, suppose you have to decide what present to buy a friend for Christmas. This problem does have some potential to be solved quickly, given that there is a set of 'default' options, like sweets or a bottle of wine. But if you're not content with those, you're unlikely to passively wait for your brain to produce more ideas. Instead, you would ask things like "what would make her happy?" or "what are her hobbies?".
If you think about it for a while, you might get to less obvious questions like "what kind of gifts don't have the property that she will know better what she likes than I do?" Maybe you would consider helping her solve a problem she hasn't bothered to solve herself, and that would lead to questions like "what has she complained about before?". And so on. Since the domain is no longer governed by a small set of explicit rules, the subproblems don't immediately and uniquely determine the answer as they do in the multiplication case. Nonetheless, they are smaller problems whose solutions constitute progress on the overall problem. In general, I think you will be hard-pressed to find an example where you think about something for a while without outputting subproblems.

Factored Cognition vs. Regular Thought
======================================

Factored Cognition, [as defined by Ought](https://ought.org/research/factored-cognition), refers to "mechanisms [...] where sophisticated learning and reasoning is broken down (or factored) into many small and mostly independent tasks". In light of the above, I posit the sequence's final conjecture:

![](https://i.ibb.co/xD2mxXk/conjecture.png)

I've drawn a parallel between Factored Cognition and regular thought [all the way back in post #-1](https://www.lesswrong.com/posts/6zbRy3aADCsRmFcgv/hiding-complexity). The difference is that that post was taking the perspective of someone who already understands the problem and can choose between different ways of decomposing it, which is relevant for a Debate agent, but not so much for the human (in either scheme) who starts off not understanding the problem. The claim now is that the process of understanding does itself use Factored Cognition.

Consider a node at the top of an HCH tree (say with t=1 hour) on the one hand, and a single person thinking for a hundred years on the other. We can call them H and D, respectively ('D' for 'iDeal', since this is an idealized setting from the standpoint of capability). Presumably, everyone would agree that H and D do something different when they try to solve a problem, **but this difference cannot be that H uses Factored Cognition because D does that as well**. The difference also cannot be that D only produces one new subproblem at a time, since H does that as well: each new question she asks is allowed to depend on everything that has happened up to that point. In both cases, the 'decomposition' is a thing that is continuously updated, not a thing that is ever output in one piece.

So, what is the difference? If you buy that D would be superintelligent but are less sold on H, this is the key question, and the heart of this post will be trying to answer it. We can separate the ways in which H is disadvantaged into two distinct categories. I call them the alternating problem and the translation problem.

The Alternating Problem
=======================

The alternating problem is the fact that H is restricted in how many times she can alternate between asking and solving. D has the time budget to iterate through millions of questions throughout her thought process, but H only lives for an hour. On the upside, while D may only make incremental progress on each question, H immediately receives the proper solution, provided the question isn't too difficult. We would now like to know how much value the answer to one such subquestion has. Here is a model to ask this more formally.
Suppose we can assign each question q a difficulty d(q) ∈ ℝ (this is very similar to the model from part I of the sequence). Suppose further that we can measure H's progress on q with a real number y ∈ ℝ, so that the question gets solved once y ≥ d(q). Now, if H receives the answer to a subquestion, this will increase y. The question is, by how much?

One possible answer is **by a fraction of the input question's difficulty**, i.e., c⋅d(q) for some constant c. As the input question gets more difficult, H simply asks more promising questions, and it always takes about 1/c many to arrive at a solution. To test if this is realistic, consider the following three problems:[[2]](#fn-d6SkgKRCcsj3vqpDg-2)

> Suppose a and b are real numbers, not both 0. Find real numbers c and d such that 1/(a + bi) = c + di.

> Prove that there does not exist an operator T ∈ L(ℝ^7) such that T^2 + T + I is nilpotent.

> Decide whether it is true that ∀n ∈ 2ℕ: n > 2 ⟹ (∃ p, q prime: p + q = n).

For the above to be true, it would have to be the case that, for all three questions, receiving the answer to a relevant subquestion gets you the same portion of the way to the solution. This is clearly nuts. If you ask 'how can I get the denominator of 1/(a + bi) to be real', you're almost there; if you ask 'what does nilpotent mean', you've only done a small step; if you ask 'what's the smallest proven gap between prime numbers', you've presumably only taken an infinitesimal step. On the other hand, asking the *correct questions* may get you there, but that's not what we're talking about. So it's not a fraction of d(q).

A second answer is that it's **a fraction of the current progress**, i.e., c⋅y for some constant c. Every subquestion H asks has an answer whose usefulness is proportional to H's current understanding of q. For this to be true, it would have to be the case that understanding a problem leads one to ask better questions. I probably don't have to convince anyone that this is true, but just to hammer down how prevalent this mechanism is, here are five made-up examples from different contexts:

1. Anna tries to predict whether China or the USA will have more AGI capabilities in thirty years. After pondering various considerations, she realizes that she should figure out what proportion of each country's AI efforts goes to AGI specifically.
2. Bob tries to prove that there are infinitely many prime numbers. His approach is to assume there are finitely many and derive a contradiction. After thinking about this for a bit, he realizes that 'take a finite set, construct an additional prime number' is a less confusing problem that amounts to the same thing.
3. Carla wants to join her first-ever Zoom call but doesn't have a microphone. After considering various ways to acquire one, she realizes that her phone has one and asks whether Zoom could run on that.
4. Dana tries to find the next best move in a chess game. After studying various lines, she realizes that her opponent's light square bishop is crucial as it can trap her queen in most relevant lines. She now asks how to deal with the bishop.
5. You come up with a bunch of items that could plausibly be useful for one of your friend's hobbies, but all have the property that you would probably buy an inferior product to what she could buy for herself. You conclude that you should look for things that she likes but doesn't know more about than you do.
If the fraction-of-current-progress answer is correct, then H's progress y = f(t) (where t is the number of questions considered) obeys the recursive equation f(t) = f(t−1) + c⋅f(t−1), which is simply f(t) = (1+c)^t. (Of course, progress on any real problem is highly discontinuous and high-variance, so all of this is approximation.) In this model, progress is exponential in the number of questions asked.

This also makes sense of why thinking for a very long time is powerful. Suppose that D only gets 1/1000 as much utility out of each subquestion asked, given that she may only consider them for a few seconds. This still yields f(t) = (1 + c/1000)^t, which may grow slowly at first, but will arrive at something useful eventually because there is a large number in the exponent. (A short numerical sketch below makes this comparison concrete.) Conversely, the abilities of an HCH tree are bounded. Up to some level of difficulty, nodes that receive perfect answers from their children can produce perfect answers themselves, so HCH can answer all such questions by induction. But there is some lowest level of difficulty for which it takes too long to build up an understanding, and a node won't be able to answer such a question even if all subtrees give perfect answers. This is the step on which the induction breaks down.

A relevant counterpoint here is the ability of H to ask meta-questions. A meta-question is something like "what is the best way to think about the question, 'What Christmas present should I buy for Hannah?'". This is similar to "What subquestion should I ask to make progress on the question, 'What Christmas present should I buy for Hannah?'". The ways in which the two questions are not the same relate to the subtleties of thought that this post mostly brushes over: there can be insights about a problem that don't directly translate to subquestions, there's thinking that's neither about asking nor about solving questions (such as repeating what you understand already), and so on. All of that stuff makes life harder for H (more things to be done in limited time with questionable help), which means that reality will look at least as bad for HCH as the simplified view suggests.

In the simplified view, the existence of meta-questions allows H to receive help in figuring out what subquestions to ask next. The problem is that there is no reason to expect HCH to be capable of solving the meta-question. If thinking is a constant alternation between posing and answering questions – where the questions and their answers become progressively more useful – then finding the perfect questions to ask should be roughly as hard as solving the problem outright. Less abstractly, take a look at the five examples of how progress informs future subquestions. Most of them involve past subquestions *and their solutions*. If the quality of subquestions is a function of current progress, then thinking about subquestions alone doesn't cut it. Making progress requires alternating between asking and solving.

I find that this result is supported by introspection. The current sequence looks nothing like what I had in mind initially. When I decided to spend time on this problem, the first thing I did was to ask 'what are questions about?', which led to a post called 'Target systems'. Another early post was called 'Dependency Graphs'. Both of those posts were answers to subproblems I had come up with; neither of them turned out to be good subproblems, but I wouldn't have realized this if I hadn't written them. Only through the alternation of asking and answering did I get to this point.
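To make the comparison between H and D concrete, here is a minimal numerical sketch of the fraction-of-current-progress model. Everything in it is an illustrative assumption rather than a figure from this sequence: the per-alternation gain c, the number of alternations each thinker gets, and the 1/1000 discount for D.

```python
# Toy model: each asking/solving alternation multiplies current progress by
# (1 + gain). H gets a large gain per alternation but only a handful of
# alternations; D gets a tiny gain per alternation but millions of them.
# All constants are illustrative assumptions.

def progress(gain_per_step: float, steps: int) -> float:
    """Progress after `steps` alternations, starting from 1."""
    return (1.0 + gain_per_step) ** steps

c = 0.5  # assumed gain per alternation when each answer is a proper solution

h_progress = progress(c, steps=20)                # H: ~20 alternations in an hour
d_progress = progress(c / 1000, steps=1_000_000)  # D: millions of tiny steps

print(f"H, 20 large steps: {h_progress:.3g}")     # ~3.3e3
print(f"D, 1e6 tiny steps: {d_progress:.3g}")     # ~1e217
```

Even with a heavy per-step discount, the sheer number of alternations dominates, which is one way of seeing why the alternating problem - rather than the quality of any individual answer - is the binding constraint on H.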
The same process happened one level down: within one post, I regularly found that a question I was trying to answer wasn't coherent, and then I usually scrapped it and rethought what I was trying to do. If I were forced to stick with the question anyway (which is the analog of having the alternation problem), I expect it wouldn't work very well. It's also not the case that the decomposition only changed in the beginning; some structural changes have occurred fairly late, and I would change some earlier posts right now if they weren't already published.

The Translation Problem
=======================

If we take D and add the alternating problem, we get a scheme where one person is thinking for a long time with the restriction that the decomposition on every level can only be updated a limited number of times. This scheme is not identical to H, so there is a second difference, which I call the translation problem.

The translation problem is the fact that every insight communicated between two nodes has to be translated into text (or whatever other format the scheme is using) and back. If H calls a subtree that works for a total of 1000 hours, then H didn't think 1000 hours herself but merely receives the subtree's output. This problem goes both ways: it handicaps the results from subtrees, and it handicaps how much context a node can give to subtrees when asking a question. More concretely, it has several consequences:

1. It makes learning new skills difficult. (This is where we left off at the end of the previous post.) Whenever acquiring a new cognitive primitive takes too much time, it becomes impossible for H to acquire it. This precludes learning primitives that require a lot of examples. These are often the ones that we refer to as intuition.
2. It can leave value on the table because the subtree is missing context. Suppose H asks a subtree to answer question q, and the subtree asks another subtree to answer q′ to help with q. It may be that q′ and its answer are important for the overall problem (they may be more important than q), but H never realizes this since all she receives is the finished answer to q. An example is the concept of Ideal Debate in this sequence, which I believe started as a footnote. Similar things can happen whenever a subtree misjudges which parts of what it found out are relevant for the overall problem.
3. It makes asking meta-questions throughout difficult. In light of this post, it would seem that asking meta-questions is something H would want to do as often as possible throughout the process. Yet, people tend to think of meta-questions as a thing that's only asked once, and the reason for this is the translation problem. A meta-question asked later in the process can't just be "what is the best way to think about this?" because that was already asked in the beginning. Instead, it has to be "what is the best way to think about this, given that I've already figured out xyz?" This is difficult to do, and it's also not in the spirit of Factored Cognition, which is supposed to be about independent questions or tasks.

Insofar as the third point is accurate, it implies that we're looking at a second fundamental restriction for H. The first is the alternating problem: the fact that the number of times H can flip between asking and solving is bounded. The second is that the total amount of time H can spend on thinking about new questions is bounded as well.
For this to be acceptable, it needs to be the case that 'find the next relevant subproblem' is a task whose difficulty is bounded across every context. On this point, consider the phenomenon of getting stuck. When thinking about something difficult, I sometimes reach a point where I feel like I'm no longer making progress. This usually doesn't last forever, which means that the sense of 'not making any progress' is not literally true, but it shows that finding the next useful subproblem can be difficult. In a world where bounded decomposition budgets are sufficient to solve arbitrary problems, getting stuck should not be possible. You could always come up with a new relevant subproblem and solve that – or if it's too hard, come up with a subproblem for that, and so on.

In some sense, 'naive Factored Cognition' appears impossible because it relies on the idea that you can decompose everything, but figuring out the decomposition is a big chunk of the work, and that part appears largely non-decomposable. Speculatively, I think it may be the case that figuring out the decomposition isn't just a big chunk but actually *most* of the work. My experience of getting stuck is not 'please brain, solve this subproblem' but rather 'please brain, tell me another angle to approach this problem'.

Conclusion
==========

My tentative conclusion from all of this is that an HCH tree would not be superintelligent, with the usual caveat that brute-forcing isn't allowed. I'll operationalize this in terms of [strong-HCH](https://ai-alignment.com/strong-hch-bedb0dc08d4e) since this is what Paul considers to be the 'normal' scheme (whereas the thing the sequence has focused on is called 'weak-HCH'). In strong-HCH, each node has a list of all IDs of subnodes, allowing her to talk to the same instances repeatedly. Furthermore, messages can contain pointers to existing nodes (so if I'm node p, and I know that node x has insights on a part of a problem that I'm asking node y about, I can include a pointer to x in my question to y). I think one of the mistakes I've made in this sequence is to not focus on strong-HCH more. That said, strong-HCH doesn't seem to solve the problems I've talked about in this post, except for the one about missing context. Alas,

**Prediction (85%):** Ought will not succeed in demonstrating something roughly equivalent to solving the hardest exercise in a textbook using a structure that mirrors strong-HCH, provided each person is unfamiliar with the material and has at most 30 minutes of time.

Note that I'm making this prediction without any inside knowledge; I've just read what Ought has published. Before writing the sequence, I think I would have assigned between 50 and 60 percent to the same prediction (I believe I was a bit more optimistic about Factored Cognition than most people, but there's some penalty since this could be hard to set up even if it's feasible), so there has been about a 30% swing. Needless to say, if Ought does do such a thing, it will (a) mean I'm wrong and (b) be very good news.

What about Debate?
------------------

¯\\_(ツ)_/¯

The reasons I've mentioned for thinking HCH wouldn't work don't apply to Debate (with one exception that I'll talk about in a bit). In fact, I have yet to come across an argument that Debate cannot work in principle, and the formalism from the first part of the sequence is mildly encouraging.
Of course, I also haven't come across an argument for why it must work, but it's harder to imagine that such an argument could exist, so the absence of evidence is altogether a good sign. Most importantly, Debate sidesteps the alternating problem entirely. If you start with the best possible subquestions, then both of the toy models discussed in this post would agree that things should work out. Of course, the Debate agents don't perform surgery on the judge's brain to insert the perfect decomposition into memory; they have to write it down in text form. The amount that this matters, given that Debate agents are supposed to be highly intelligent, seems like a very hard-to-answer, very *different* problem from the things I've discussed in this post. I don't have too many intelligent things to say about it, except to repeat that talking about a 'Factored Cognition Hypothesis' really absolutely definitely doesn't make sense.

The aforementioned exception is the fact that the judge is highly limited in her ability to acquire new primitives. However, it seems like the ability to understand arguments fundamentally requires only a bounded set of skills. This is backed up by formal logic,[[3]](#fn-d6SkgKRCcsj3vqpDg-3) and we can see the same thing in practice with understanding mathematical proofs. Once again, there is no generalization of this point to a context where a human has to derive the proof.

My verdict is something like 80% that Debate won't fail due to fundamental problems (i.e., problems that relate to Ideal Debate). Note that this number is inflated because there is a chance that Debate would ultimately fail for fundamental reasons, but we never get there because it fails due to practical problems first. I was a bit disheartened to read [the latest report on Debate](https://www.lesswrong.com/posts/PJLABqQ962hZEqhdB/debate-update-obfuscated-arguments-problem), which indicates that one of those practical problems (the honest debate agent figuring out which claim of the dishonest agent contains the lie) appears to be quite serious. My estimate on Ideal Debate working out may be more like 60%, but that is not testable.

Miscellaneous
-------------

Here is an example of how Debate can handle mathematical proofs. Recall the exercise I mentioned earlier:

> Prove that there does not exist an operator T ∈ L(ℝ^7) such that T^2 + T + I is nilpotent.

While this involves more advanced concepts, the exercise is still relatively easy. Here is a copy-paste from the solution I wrote back then:

> Let λ be an eigenvalue of T and v a nonzero eigenvector. (Use Theorem 5.26.) We have
>
> (T^2 + T + I)v = (λ^2 + λ + 1)v = ((λ + 1/2)^2 + 3/4)v
>
> So that (T^2 + T + I)v = αv where α = (λ + 1/2)^2 + 3/4. Clearly α > 0, hence (T^2 + T + I)^k v = α^k v ≠ 0 for all k ∈ ℕ. Thus, (T^2 + T + I) is not nilpotent.

If this looks like gibberish, you're in the same position as a judge in Debate may be in. However, as a debate judge, you don't have to understand the entire argument. Here is a possible decomposition into claims, of which you will only have to verify one.

* **Claim 1:** There exists an eigenvalue λ ∈ ℝ with eigenvector v for T, where v is not the zero vector.
* **Claim 2:** (T^2 + T + I)v = (λ^2 + λ + 1)v.
* **Claim 3:** λ^2 + λ + 1 = (λ + 1/2)^2 + 3/4.
* **Claim 4:** Set α := (λ + 1/2)^2 + 3/4. Then α > 0.
* **Claim 5:** Claims #2-4 imply that (T^2 + T + I)v = αv ≠ 0.
* **Claim 6:** Claim #5 implies that (T^2 + T + I)^k v = α^k v ≠ 0 for all k ∈ ℕ.
* **Claim 7:** Claims #1-6 imply that T^2 + T + I is not nilpotent.

Claim #3 requires only high-school math.
If this statement is pointed at, you can verify it without engaging with the concepts 'eigenvector' or 'nilpotent' or even 'vector space'. The same is almost true for claims #4 and #6. Claim #5 requires being comfortable with equations, but not anything specific to Linear Algebra. Claim #1 requires looking up a theorem but not understanding why it is true.[[4]](#fn-d6SkgKRCcsj3vqpDg-4) Only claims #2 and #7 require explaining one or more of the field-specific concepts.

Another point I want to make is that this is probably not the optimal decomposition. When translating a text-based proof into a [Debate Tree](https://www.lesswrong.com/posts/FNyqL7mxSkgLpck4w/traversing-a-cognition-space#1__Debate_Trees), one need not do it sequentially. Here is a different approach:

* **Claim 1:** {same as above}
* **Claim 2:** There exists an α ∈ ℝ⁺ such that (T^2 + T + I)v = αv.
* **Claim 3:** Claims #1-2 imply that T^2 + T + I is not nilpotent.

Subsequently, Claims #2-5 from the previous decomposition can become #2.1-#2.4, and Claim #6 can become Claim #3.1. This decomposition is superior from the standpoint of [hiding complexity](https://www.lesswrong.com/posts/6zbRy3aADCsRmFcgv/hiding-complexity). I think it's fair to say that the primary mechanism for an Ideal Debate agent is to reduce a concept to its behavioral properties. In this case, that concept is the α. The behavioral properties are entirely given in Claim #2 of the second decomposition, and they are sufficient for the high-level argument. Only if the other agent doubts the existence of such an α does the debate have to open this black box and look at how α is constructed. Even in that case, it's still a win in that the judge doesn't have to bother understanding if and how this α solves the exercise (because if Claim #2 is pointed at, Claim #3 is not).

Appendix: the sequence in 500 words
===================================

Since there was this big gap between the previous post and this one, I thought it might be useful to write an ultra-abbreviated version to refresh everyone's memory.

**[Post #-1:](https://www.lesswrong.com/posts/6zbRy3aADCsRmFcgv/hiding-complexity)** To characterize what constitutes 'solving a subproblem', as opposed to 'making progress on a big problem', one can look at the length of the subproblem's solution. Under this view, decomposing problems is all about hiding as much complexity as possible. It must be the case that we do something like this in regular thought because we can only keep a few objects in mind at the same time yet are able to solve complex problems. [Post-sequence edit]: This perspective assumes a bird's eye view of the problem, which makes it primarily applicable to the job of an honest debate agent, less so to a human who starts off not understanding the problem.

**[Post #1:](https://www.lesswrong.com/posts/S5oWwZMJBvfChSquW/idealized-factored-cognition)** HCH is the ideal of stock amplification. It abstracts away a number of practical problems and implementation details. We can similarly define an ideal for Debate. Given these idealized schemes, we can define and study a formalism (→ Cognition Spaces). The formalism suggests that HCH and Ideal Debate don't necessarily scale similarly, which means there is no one Factored Cognition Hypothesis.

**[Post #2:](https://www.lesswrong.com/posts/FNyqL7mxSkgLpck4w/traversing-a-cognition-space)** Here are some things we can do with the formalism. Debate seems nicely behaved in the limit.
Debate Trees may be an interesting object to consider when thinking about how to explain things optimally.

**[Post #3:](https://www.lesswrong.com/posts/eCWkJrFff7oMLwjEp/clarifying-factored-cognition)** Factored Cognition is about reducing hard problems to human judgment to achieve outer alignment; it's not used because it's the best way to boost capability. Ideal Debate = HCH + Decomposition Oracle. To evaluate HCH or Ideal Debate, consider the task of the human, as this is the non-decomposable part.

**[Post #4:](https://www.lesswrong.com/posts/mxmeeGk3mTM7BrT93/intuition)** People tend to talk about intuition as if it's a separate mode of thinking that works without access to 'conscious reasoning', but really all thinking is like that; it's just that sometimes we can explain our thoughts, and sometimes we can't. It's useful to think about human thinking in terms of the set of operations that can be done in one such step. We call these operations our cognitive primitives.

**Post #5:** This whole decomposing-problems thing that characterizes Factored Cognition is something we do all the time when we think, except that we constantly alternate between decomposing and solving. You can verify this by taking an arbitrary problem and observing what your brain is doing. Since we only output one subproblem at a time, the term 'decomposition' describes a thing that is continuously updated, not a thing that's ever output in one piece. The alternating thing seems critical, which is bad for HCH. Also problematic is the fact that nodes in an HCH tree have to communicate with something like text. In particular, it means that nodes probably won't get a lot of help with the task of decomposing their problem. This seems bad if you believe that decomposing constitutes much or even most of thinking. Strong-HCH may help, but probably not by much. Most of this stuff doesn't apply to Debate. There is no one Factored Cognition Hypothesis.

---

1. Note that when I say 'applying', I'm not suggesting a dualistic picture where there is an additional thing in the brain that gets to choose where to apply the primitives. [↩︎](#fnref-d6SkgKRCcsj3vqpDg-1)
2. The first is the first exercise out of [my favorite textbook](https://www.lesswrong.com/posts/BgEfvxBHPfHdaQxLH/insights-from-linear-algebra-done-right), the second is an exercise out of chapter 9 of the same book, and the third is a famous open math problem (Goldbach's conjecture). [↩︎](#fnref-d6SkgKRCcsj3vqpDg-2)
3. There are formal proof systems that posit a small set of primitive operations such that every proof is reducible to a sequence of such operations. This is what allows proofs about what is provable. [↩︎](#fnref-d6SkgKRCcsj3vqpDg-3)
4. Theorem 5.26 is "Every operator on an odd-dimensional real vector space has an eigenvalue." (Incidentally, this is the only thing for which the 7 in ℝ^7 matters. It could have also been ℝ^1439995. This is another aspect that may have made it more difficult to find a proof, because it has the potential to be misleading, but it barely matters for verifying the proof.) [↩︎](#fnref-d6SkgKRCcsj3vqpDg-4)
Notes on "The Limits to Growth" and surrounding material In 1972, Donella Meadows, Dennis Meadows, Jørgen Randers, and William Behrens III published The Limits to Growth, a book about the consequences of unchecked population growth and economic growth. The book was very popular at the time, selling 12 million copies. As a part of my work on the project "Can we know what to do about AI?" I did a preliminary investigation of the claims in the book, whether they've been born out, and what the book's impact has been. The book uses the framework of "systems dynamics," which was pioneered by Jay Forrester. Paul Krugman criticized Forrester as having been unaware of prior overlapping work by economists. So it's possible that the ideas in The Limits to Growth are less novel than it appears on the face of things. I haven't investigated the extent to which this is the case, but may do so later. This blog post focuses on The Limits to Growth. My initial impression of the book based on what people have written about it was very different from the impression that I formed upon reading the book. The book has been misrepresented (whether denotatively or connotatively) both by critics and by sympathizers. In the first section below, I discuss how the book has been misrepresented, and in the second section I discuss what the book says. Misrepresentations of The Limits to Growth In his 2008 paper A comparison of limits to growth with 30 years of reality, Graham Turner describes how the book has been misrepresented by critics: > This is perhaps partly a result of sustained false statements that discredit the LtG. From the time of its publication to contemporary times, the LtG has provoked many criticisms which falsely claim that the LtG predicted resources would be depleted and the world system would collapse by the end of the 20th century. Such claims occur across a range of publication and media types, including scientific peer reviewed journals, books, educational material, national newspaper and magazine articles, and websites (T
Meetup : Sydney Rationality Dojo - October 2016 Discussion article for the meetup : Sydney Rationality Dojo - October 2016 WHEN: 02 October 2016 04:00:00PM (+1000) WHERE: 10 Shepherd Street, Chippendale From 4 - 6 we'll be holding our normal rationality dojo at Humanist house. Afterwards there'll be an informal dinner gathering for those who want to join. Discussion article for the meetup : Sydney Rationality Dojo - October 2016
Consider Trying Dictation I started using dictation for entering text on my phone and computer when my wrists suddenly went really bad, but while my wrists are enough better that I can often type there are still a lot of cases where I dictate. Even people whose wrists are fine would benefit from trying it out: * It can be about as fast as typing on a real keyboard, which is great on a phone. I'm very good at swiping, but I can dictate faster. * On Android, at least, it does a good job even in noisy environments as long as I hold the phone close to my mouth. I can also speak very fast, faster than I can compose my thoughts, and it will still understand me. It even understands whispering! * I like having it as an option for composing thoughts, especially when I'm having trouble focusing. Sometimes it turns out I'm not in a good headspace for typing ideas out, but talking them through out loud works. * If your wrists do start hurting someday, being able to easily give them a break by switching some things to dictation can help them recover faster and make you less likely to hurt yourself. I suspect if I had been used to dictating as needed in 2020, I wouldn't have gotten to where I had to stop typing for a couple years. There are some downsides, however: * The dictation built into Android isn't great with punctuation, and I often have to go back and fix things. * The error rate, in general, is higher. For note taking or casual texting where errors are no big deal this is less of an issue, but in dictating this post there were several places I needed to go back and fix something. * Because mistranscriptions look more plausible than typos I'm more likely to have errors that make it all the way through to posting: the mistakes are less likely to stand out as obviously wrong. * Fixing dictation errors is a little more annoying than when typing because you have to switch modes. * I like my dictated style slightly less than typed: it's just a bit too wordy. This makes it more i
How should I help us achieve immortality? I was immensely glad to find this community, because while I knew intellectually that I was not the only person who felt that rationality was important, death was bad, and technology was our savior, I had never met anyone else who did. I thus determined my career without much input from anything except my own interests; which is not so bad, of course, but I have realized that I might benefit from advice from like-minded people. Specifically, I would like to know what LessWrong thinks I should do in order to get into "immortality research." Edit: that means "what field should I go into if I want humanity to have extended lifespans as soon as possible?" I feel immortality, or at least life-extension, is one of - if not the - most important thing(s) humanity can accomplish right now. I don't think I am suited to AI work, however. Another obvious option is an MD, but that's not in my temperament either. My major right now is biochemistry, in preparation for a doctorate in either biochemistry itself, or pharmacology. I think there's a good chance that advances in this area could contribute to life extension; aging is a biochemical process, right? And certainly drugs will be involved in life extension. But is this the best place to apply my efforts? I have considered that biogerontology (http://en.wikipedia.org/wiki/Gerontology) might be better, as it is about aging specifically; but I don't know much about the field - only that Wikipedia says it is new and very few universities offer degrees in it. My final idea is nanotechnology of some kind; I believe nanomachines may be able to repair our bodies. I'm not sure what type of nanotechnology I'd be looking at for this, or if degrees in it are offered. Any ideas, suggestions, or comments in general are welcome. I favor the biochemical approach as of now, but only through temperament. As far as I know, AI, biochemical/pharmacological methods, and nanotechnology are all about equally close to giving us immortality. If som
In a multipolar scenario, how do people expect systems to be trained to interact with systems developed by other labs? I haven't seen much discussion of this, but it seems like an important factor in how well AI systems deployed by actors with different goals manage to avoid conflict (cf. my discussion of equilibrium and prior selection problems here). For instance, would systems be trained * Against copies of agents developed by other labs (possibly with measures to mask private information)? * Simultaneously with other agents in a simulator that each developer has access to? * Against copies of themselves? * Against distributions of counterpart policies engineered to have certain properties? What would those properties be?
What do you believe are the most critical open questions/hypotheses that could inform a more effective COVID response? None
Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations 1 Introduction --------------- The recent success of reinforcement learning (RL) in many domains showcases the great potential of applying this family of learning methods to real-world applications. A key prerequisite for RL is to design a reward function that specifies what kind of agent behavior is preferred. However, in many real-world applications, designing a reward function is prohibitively difficult (Ng et al., [1999](#bib.bib104 "Policy invariance under reward transformations: theory and application to reward shaping"); Irpan, [2018](#bib.bib116 "Deep reinforcement learning doesn’t work yet")). By contrast, imitation learning (IL) provides a much easier alternative, specifying the desired behavior implicitly through collected demonstrations, and has achieved great success in many sequential decision making problems (Pomerleau, [1989](#bib.bib112 "Alvinn: an autonomous land vehicle in a neural network"); Ng and Russell, [2000](#bib.bib105 "Algorithms for inverse reinforcement learning"); Ho and Ermon, [2016](#bib.bib51 "Generative adversarial imitation learning")). However, popular IL methods such as behavioral cloning (BC) (Pomerleau, [1989](#bib.bib112 "Alvinn: an autonomous land vehicle in a neural network")) and generative adversarial imitation learning (GAIL) (Ho and Ermon, [2016](#bib.bib51 "Generative adversarial imitation learning")) assume the expert demonstrations are optimal. Unfortunately, it is often difficult to obtain sufficient optimal demonstrations for many real-world tasks, because human experts often make mistakes for various reasons, such as the difficulty of the task, partial observability of the environment, or the presence of distractions. Given such noisy expert demonstrations, which contain records of both optimal and non-optimal behaviors, BC and GAIL both fail to imitate the optimal policy (Wu et al., [2019b](#bib.bib146 "Imitation learning from imperfect demonstration"); Ma, [2020](#bib.bib88 "From adversarial imitation learning to robust batch imitation learning")). Current methods that deal with suboptimal demonstrations require additional labels, obtained either explicitly by having human experts annotate each demonstration with confidence scores (Wu et al., [2019b](#bib.bib146 "Imitation learning from imperfect demonstration")), or implicitly by ranking noisy demonstrations according to their relative performance through interaction with the environment (Brown et al., [2019](#bib.bib13 "Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations"), [2020](#bib.bib14 "Better-than-demonstrator imitation learning via automatically-ranked demonstrations"); Zhang et al., [2021](#bib.bib153 "Confidence-aware imitation learning from demonstrations with varying optimality")). However, human annotation and environment interaction are laborious and expensive in real-world settings, such as medicine, healthcare, and industrial processes. In this work, we investigate a purely offline learning setting where the agent has access to neither the expert nor the environment for additional information. Instead, the agent has access only to a small pre-collected dataset sampled from the expert and a large batch offline dataset sampled from one or multiple behavior policies that could be highly sub-optimal. 
This strictly offline imitation learning problem arises in many real-world applications where environment interactions and expert annotations are costly. Prior works that address the problem are based on variants of BC or inverse RL. Sasaki and Yamashina ([2021](#bib.bib119 "Behavioral cloning from noisy demonstrations")) reuse another policy learned by BC as the weight of the original BC objective. However, this requires that expert data make up the majority of the offline dataset; otherwise the policy will be misguided into imitating the suboptimal data. Zolna et al. ([2020a](#bib.bib156 "Offline learning from demonstrations and unlabeled experience")) first learn a reward function that prioritizes expert data over others and then perform offline RL based on this reward function. This algorithm is extremely expensive to run, requiring solving offline RL in an inner loop, which is itself a challenging problem, prone to training instability (Kumar et al., [2019](#bib.bib66 "Stabilizing off-policy q-learning via bootstrapping error reduction")) and hyperparameter sensitivity (Wu et al., [2019a](#bib.bib144 "Behavior regularized offline reinforcement learning")). In this paper, we propose an offline imitation learning algorithm to learn from demonstrations that (perhaps) contain a large proportion of suboptimal data, without additional steps of reward learning and offline RL training. Building on the BC task, we introduce an additional task that learns a discriminator to distinguish expert and non-expert data. We propose a cooperation framework to learn the policy and discriminator cooperatively and boost the performance of both tasks. Based on this framework, we apply a worst-case error minimization strategy to the policy such that the discriminator can be more robustly learned. This results in a new offline policy learning objective, and surprisingly, we find it is equivalent to a generalized BC objective in which the outputs of the discriminator serve as the weights of the BC loss function. We thus term our resulting algorithm Discriminator-Weighted Behavioral Cloning (DWBC). Experimental results show that DWBC achieves higher returns and faster training speed compared to baseline algorithms under different scenarios. To summarize, the contributions of this paper are as follows. 
* We propose a cooperation framework to learn the policy and discriminator cooperatively and boost the performance of both tasks (Section [3.2](#S3.SS2 "3.2 Learn the Policy and Discriminator Cooperatively ‣ 3 Methodology ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations")); * Based on the proposed framework, we design an effective and lightweight offline IL algorithm with a worst-case error minimization strategy (Section [3.3](#S3.SS3 "3.3 Discriminator-Weighted Behavioral Cloning ‣ 3 Methodology ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations")); * We present promising comparison results with comprehensive analysis for our algorithm, which surpasses the state-of-the-art methods (Section [5.3](#S5.SS3 "5.3 Comparative Evaluations ‣ 5 Experiments ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations")); * As a by-product, we show that the discriminator in our algorithm can be used to perform offline policy selection, which is of independent interest (Section [5.4](#S5.SS4 "5.4 Additional Experiments ‣ 5 Experiments ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations")). 2 Preliminary -------------- ### 2.1 Problem Setting We consider the standard fully observed Markov Decision Process (MDP) setting (Sutton et al., [1998](#bib.bib129 "Introduction to reinforcement learning")), M = {S, A, P, r, γ, d0}, where S is the state space, A is the action space, P: S×A→Δ(S) is the MDP’s transition probability, r is the reward function, γ∈[0,1) is the discount factor for future rewards, and d0 is the initial state distribution. A policy π: S→Δ(A) maps each state to a distribution over actions. We denote dπ∈Δ(S×A) as the discounted state-action distribution of π under the transition kernel P, that is, dπ = (1−γ) ∑_{t=0}^{∞} γ^t dπ_t, where dπ_t ∈ Δ(S×A) is the distribution of (s(t), a(t)) under π at step t. Following the standard IL setting, the ground truth reward function r is unknown. Instead, we have demonstrations collected by the expert, denoted as πe: S→Δ(A) (potentially stochastic and not necessarily optimal). Concretely, we have an expert dataset of i.i.d. tuples De = {(si, ai, s′i)}, i = 1, …, ne, where (s,a) is sampled from the distribution dπe and s′ is sampled from P(s,a). In our problem setting, we also have an offline static dataset of i.i.d. tuples Do = {(si, ai, s′i)}, i = 1, …, no, with (s,a) ∼ ρ(s,a) and s′ ∼ P(s,a), where ρ∈Δ(S×A) is an offline state-action distribution resulting from some other behavior policies. Note that these behavior policies could be much worse than the expert πe. Our goal is to leverage only the offline batch data Db = De ∪ Do to learn an optimal policy π with regard to the ground truth reward r, without any interaction with the environment or the expert. ### 2.2 A Generalized Behavioral Cloning Objective In order to discard low-quality demonstrations and only clone the best behavior available, we consider a generalized behavioral cloning objective that imitates demonstrations unequally, that is, min_π E_{(s,a)∼Db}[ −log π(a|s) ⋅ f(s,a) ],  (1) where f: S×A→[0,1] denotes an arbitrary weight function. 
Existing offline IL methods can simply be recovered by picking one of the valid weight configurations: * [leftmargin=\*,nosep] * If f(s,a)=1 for ∀(s,a)∈S×A, the objective ([1](#S2.E1 "(1) ‣ 2.2 A Generalized Behavioral Cloning Objective ‣ 2 Preliminary ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations")) corresponds to the vanilla BC objective. * If f(s,a)=π′(a|s), where π′ is an old policy which was previously optimized with Db, the objective ([1](#S2.E1 "(1) ‣ 2.2 A Generalized Behavioral Cloning Objective ‣ 2 Preliminary ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations")) corresponds to the objective of Behavioral Cloning from Noisy Demonstrations (Sasaki and Yamashina, [2021](#bib.bib119 "Behavioral cloning from noisy demonstrations")). Since ∑aπ′(a|s)=1 for ∀s∈S is satisfied, π′(a|s) can be interpreted as the weights for weighted action sampling. * If f(s,a)=1[Aπ(s,a)], where 1 is the indicator function which creates a boolean mask that eliminates samples which are thought to be worse than the current policy, the objective ([1](#S2.E1 "(1) ‣ 2.2 A Generalized Behavioral Cloning Objective ‣ 2 Preliminary ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations")) corresponds to the objective of Offline Reinforced Imitation Learning (Zolna et al., [2020a](#bib.bib156 "Offline learning from demonstrations and unlabeled experience")). The objective ([1](#S2.E1 "(1) ‣ 2.2 A Generalized Behavioral Cloning Objective ‣ 2 Preliminary ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations")) can also be deemed as the objective of Soft Q Imitation Learning (Reddy et al., [2020](#bib.bib114 "SQIL: imitation learning via reinforcement learning with sparse rewards")) with f(s,a)=1 for (s,a)∈De and f(s,a)=0 for (s,a)∈Do in online IL literature; or the objective of off-policy actor-critic (Off-PAC) algorithm (Degris et al., [2012](#bib.bib26 "Off-policy actor-critic")) with f(s,a)=Qπ(s,a)⋅π(a|s)/πb(a|s) in online RL literature. ![Illustration of our proposed cooperation framework to alternately learn ](https://media.arxiv-vanity.com/render-output/7075462/x1.png) Figure 1: Illustration of our proposed cooperation framework to alternately learn π and d. In this framework, π uses the outputs of d as the weights to perform a new BC Task; d includes π as additional input to form a new Discriminating Task. This framework is different from GAN-style frameworks in that: 1) π and d are learned cooperatively rather than adversarially; 2) the training of π and d are decoupled into individual objectives rather than sharing one coupled objective. 3 Methodology -------------- We now continue to describe our approach for offline imitation learning from demonstrations that (perhaps) contain large-proportional suboptimal data, without additional steps of reward learning and offline RL training. Built upon the task of BC, we introduce an additional task to learn a discriminator to distinguish expert and non-expert data. We propose a cooperation framework to boost the performance of both tasks. Based on this framework, we adopt a worst-case error minimization strategy to the policy such that the discriminator can be more robustly learned. This results in a new generalized BC objective, we then provide the interpretation of weights in our generalized BC objective, this gives the intuition about why our method can work. 
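All of these methods, ours included, are instances of the weighted objective (1). As a reference point before detailing our method, here is a minimal PyTorch-style sketch of such a weighted behavioral-cloning update; the interfaces and names (a policy that returns a torch distribution, a precomputed weight tensor) are assumptions made for exposition, not the released code.

```python
import torch

def weighted_bc_loss(policy, states, actions, weights):
    """Generalized BC objective (Eq. 1): E[-log pi(a|s) * f(s, a)].

    `policy(states)` is assumed to return a torch.distributions.Distribution
    over actions; `weights` holds f(s, a) for each transition in the batch.
    """
    log_probs = policy(states).log_prob(actions)      # log pi(a|s)
    return -(weights * log_probs).mean()

# Vanilla BC corresponds to f(s, a) = 1 for every transition:
# loss = weighted_bc_loss(policy, s_batch, a_batch, torch.ones(len(s_batch)))
```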
### 3.1 Learn the Policy and Discriminator Separately It is obvious that we can avoid the negative impact of the suboptimal demonstrations present in Do by only imitating De, which can be written as min_π E_{(s,a)∼De}[ −log π(a|s) ].  (2) We call the task of learning a policy using objective ([2](#S3.E2 "(2) ‣ 3.1 Learn the Policy and Discriminator Separately ‣ 3 Methodology ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations")) the BC Task. The drawback of the BC Task is that it does not fully utilize the information from Do: the resulting policy may not be able to generalize and will suffer from compounding errors due to the potentially limited size and state coverage of De (Ross et al., [2011](#bib.bib118 "A reduction of imitation learning and structured prediction to no-regret online learning")). If we can select the high-reward transitions from Do and combine them with De, we can expect to get a better policy. Now let’s consider a different task, which aims to learn a discriminator by contrasting expert and non-expert transitions, given by min_d E_{(s,a)∼De}[ −log d(s,a) ] + E_{(s,a)∼Do}[ −log(1 − d(s,a)) ].  (3) Objective ([3](#S3.E3 "(3) ‣ 3.1 Learn the Policy and Discriminator Separately ‣ 3 Methodology ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations")) is similar to how the discriminator is trained in GAIL (Ho and Ermon, [2016](#bib.bib51 "Generative adversarial imitation learning")) and GAN (Goodfellow et al., [2014](#bib.bib44 "Generative adversarial nets")), except that the second term is sampled from a fixed dataset instead of new samples drawn from the learned policy by interacting with the environment. However, optimizing objective ([3](#S3.E3 "(3) ‣ 3.1 Learn the Policy and Discriminator Separately ‣ 3 Methodology ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations")) will make the learned discriminator assign 1 to all transitions from De and 0 to all transitions from Do. This limiting behavior is unsatisfactory because Do can contain some successful (high-reward) transitions. This bears similarity to the positive-unlabeled classification problem (Elkan and Noto, [2008](#bib.bib30 "Learning classifiers from only positive and unlabeled data")), where both positive and negative samples exist in the unlabeled data. To solve this problem, previous works adopt the approach from positive-unlabeled (PU) learning (du Plessis et al., [2015](#bib.bib29 "Convex formulation for learning from positive and unlabeled data"); Xu and Denil, [2019](#bib.bib148 "Positive-unlabeled reward learning"); Zolna et al., [2020b](#bib.bib155 "Combating false negatives in adversarial imitation learning")). The main idea is to re-weight the losses for positive and unlabeled data differently, in order to obtain an estimate of the model loss on negative samples that is not directly available. Applying PU learning to objective ([3](#S3.E3 "(3) ‣ 3.1 Learn the Policy and Discriminator Separately ‣ 3 Methodology ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations")) yields the following objective: min_d  η E_{(s,a)∼De}[ −log d(s,a) ] + E_{(s,a)∼Do}[ −log(1 − d(s,a)) ] − η E_{(s,a)∼De}[ −log(1 − d(s,a)) ],  (4) where η is a hyperparameter corresponding to the proportion of positive samples within the unlabeled samples. 
We call the task of learning a discriminator using objective ([4](#S3.E4 "(4) ‣ 3.1 Learn the Policy and Discriminator Separately ‣ 3 Methodology ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations")) the Discriminating Task. Intuitively, the second term in ([4](#S3.E4 "(4) ‣ 3.1 Learn the Policy and Discriminator Separately ‣ 3 Methodology ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations")) could make d(s,a) of state-action pairs from De become 0 if similar state-action pairs are included in Do, and the third term in ([4](#S3.E4 "(4) ‣ 3.1 Learn the Policy and Discriminator Separately ‣ 3 Methodology ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations")) balances the impact of the second term, i.e., it keeps d(s,a) of state-action pairs from De from becoming 0. However, using information from the state and action alone may be insufficient for learning the Discriminating Task. For example, suppose De comprises near-end transitions of expert trajectories, whereas Do comprises near-front transitions of expert trajectories and transitions from non-expert trajectories. In this case, it is hard for the discriminator to distinguish between expert transitions and non-expert transitions in Do, as the states of transitions in Do bear a large similarity to one another (these states are near the initial state) but differ greatly from the states in De. To summarize, the BC Task aims to imitate the expert behavior from De, but ignores the valuable information from Do; the Discriminating Task aims to contrast expert and non-expert transitions from De and Do, but only uses state-action information as input. Both tasks lack information needed to improve their own performance; however, that information can be obtained from the other task, as we will elaborate next. ### 3.2 Learn the Policy and Discriminator Cooperatively We propose a cooperation framework to learn the policy and discriminator cooperatively. In this framework, we aim to boost the performance of the BC Task and the Discriminating Task by incorporating the policy into the training of the discriminator and effectively using the discriminator to help the training of the policy. As illustrated in Figure [1](#S2.F1 "Figure 1 ‣ 2.2 A Generalized Behavioral Cloning Objective ‣ 2 Preliminary ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations"), the policy π uses the discriminator d to perform a new BC Task (i.e., generalized behavioral cloning as introduced in Section 2.2), where the weight is a function of d. The discriminator d also gets information from the policy π as additional input, yielding a new Discriminating Task. Suppose that d is learned well enough to contrast expert and non-expert transitions in Db=De∪Do; then the policy will become better if one can choose an appropriate weight function f so that π imitates only the expert data in Db. In this way, we are able to use the entire dataset Db while avoiding the negative impact of the low-quality data. Suppose π is learned to be optimal, i.e., it assigns large probabilities to expert actions in expert states; then the discriminator receives an additional learning signal. It becomes easier for the discriminator to contrast expert and non-expert transitions in Do, as π(a|s) will be large if (s,a) comes from expert behavior and small if (s,a) comes from non-expert behavior. Without this information from π, the discriminator is much harder to learn using only information from (s,a).
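As a concrete, purely illustrative sketch of the Discriminating Task in this cooperative setup, the PU-weighted objective (4) can be written with log π(a|s) fed to the discriminator as an extra input, as described above. The network interfaces and the stop-gradient choice below are assumptions made for this sketch; the exact training procedure is given in Appendix A.

```python
import torch

def discriminating_task_loss(d_net, policy, expert_batch, offline_batch, eta=0.5, eps=1e-6):
    """PU-style discriminator objective (Eq. 4) with log pi(a|s) as an extra input.

    `d_net(s, a, log_pi)` is assumed to output a probability in (0, 1);
    eta approximates the proportion of expert-quality data among the offline data.
    """
    def d_of(batch):
        s, a = batch
        with torch.no_grad():                       # decoupled training: this loss updates d only
            log_pi = policy(s).log_prob(a).unsqueeze(-1)
        return d_net(s, a, log_pi).clamp(eps, 1 - eps)

    d_e, d_o = d_of(expert_batch), d_of(offline_batch)
    return (eta * (-torch.log(d_e)).mean()
            + (-torch.log(1 - d_o)).mean()
            - eta * (-torch.log(1 - d_e)).mean())
```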
A keen reader may notice the similarity between our proposed framework and GAN-style frameworks (Goodfellow et al., [2014](#bib.bib44 "Generative adversarial nets"); Ho and Ermon, [2016](#bib.bib51 "Generative adversarial imitation learning")), where the policy and the discriminator are also jointly learned. However, the learning strategy of our framework differs in several ways from GAN-style frameworks. In GAN, the policy aims to generate expert data and the discriminator aims to distinguish between expert data and policy-generated data. If the policy perfectly matches the expert, the discriminator will be unable to distinguish well, and vice versa. This means that GAN adopts an adversarial framework, where task A and task B work against each other: improved performance on one task leads to deteriorated performance on the other. Our framework, in contrast, is cooperative: task A and task B cooperate with each other, and improved performance on one task also leads to improved performance on the other. Moreover, GAN-style frameworks need to solve a min-max optimization problem (i.e., min_d max_π L(d,π)) and are known to suffer from issues such as training instability and mode collapse (Arjovsky et al., [2017](#bib.bib8 "Wasserstein generative adversarial networks")). Our framework, by contrast, allows the decoupled training of π and d. They can both learn with their own objectives in a fully supervised manner (see Figure [1](#S2.F1 "Figure 1 ‣ 2.2 A Generalized Behavioral Cloning Objective ‣ 2 Preliminary ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations")), which makes training easy and computationally cheap. ### 3.3 Discriminator-Weighted Behavioral Cloning It is obvious that, in our proposed framework, there exist multiple valid choices of weight function f that can make the policy imitate the high-reward transitions in Db. For example, f could be 1[d>0.5] or exp(d/β), where β>0 is a hyperparameter and 1 is the indicator function. However, does there exist a principled choice of f? Notice that π now appears in the input of d; this means that imitation information from logπ will affect Ld and further impact the learning of d. Hence both d and Ld become functionals of π (functions of a function), i.e., d(s,a,logπ(a|s)) and Ld(d,logπ). Inspired by the idea of adversarial training, we make the policy π challenge the discriminator d by doing the opposite of minimizing Ld; in other words, we let π maximize Ld under the current d. This can be seen as minimizing the worst-case error (Carlini et al., [2019](#bib.bib17 "On evaluating adversarial robustness"); Fawzi et al., [2016](#bib.bib32 "Robustness of classifiers: from adversarial to random noise"); Goodfellow et al., [2015](#bib.bib43 "Explaining and harnessing adversarial examples")), which significantly improves the robustness of the discriminator. Perhaps surprisingly, we find that letting π maximize Ld gives the policy an additional corrective loss, which also leads to a valid choice of weight function f. ###### Theorem 3.1. Assume Ld(d,logπ) is twice continuously differentiable with respect to d, and d is continuously differentiable with respect to logπ.
For a given discriminator d, a relaxed necessary condition for Ld(d,logπ) to attain its maximum with respect to π is that a corrective loss term Lw is minimized by π, where Lw is given as follows: Lw = E_{(s,a)∼De}[ log π(a|s) ⋅ (η/d + η/(1−d)) ] − E_{(s,a)∼Do}[ log π(a|s) ⋅ 1/(1−d) ]. ###### Proof. See Proposition [B.1](#A2.Thmtheorem1 "Proposition B.1. ‣ B.2 Drivation of the Corrective Loss Term Lw ‣ Appendix B Derivation Details ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations") and Corollary [B.2](#A2.Thmtheorem2 "Corollary B.2. ‣ B.2 Drivation of the Corrective Loss Term Lw ‣ Appendix B Derivation Details ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations") in the Appendix for the detailed derivation and proof. ∎ Adding the loss term Lw to the BC Task, we get the following new learning objective for π: min_π  α E_{(s,a)∼De}[ −log π(a|s) ] − E_{(s,a)∼De}[ −log π(a|s) ⋅ η/(d(1−d)) ] + E_{(s,a)∼Do}[ −log π(a|s) ⋅ 1/(1−d) ],  (5) where α is the weight factor (α≥1). This new objective essentially transforms the original BC task into a cost-sensitive learning problem (Ling and Sheng, [2008](#bib.bib82 "Cost-sensitive learning and the class imbalance problem")) by imposing the following weight on the imitation of each state-action transition: BC weights = α − η/(d(1−d)) for (s,a)∈De, and 1/(1−d) for (s,a)∈Do,  (6) where d is clipped to the range [0.1, 0.9] to satisfy the continuity assumption (see Appendix [B.2](#A2.SS2 "B.2 Drivation of the Corrective Loss Term Lw ‣ Appendix B Derivation Details ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations") for details). The above behavioral cloning weights induce different behaviors when imitating transitions from De and Do. Suppose d is learning in a virtuous cycle, i.e., it assigns large values (close to 1) to expert transitions and small values (close to 0) to non-expert transitions. The weight of the expert transitions in Do will then become large while the weight of the non-expert transitions will become small. For transitions in De, the weights can be adjusted by tuning the parameter α. Note that even if the discriminator is learned to be totally wrong (i.e., it assigns small values to expert transitions and large values to non-expert transitions), which may occur at the very beginning of training, the behavioral cloning weights α − η/(d(1−d)) (α≥1, η<1) will not change drastically under value clipping. This means that the policy can still learn from the expert dataset De. Even though the weight for De is temporarily incorrect, it will be corrected as the discriminator becomes better and better. Eq. ([5](#S3.E5 "(5) ‣ 3.3 Discriminator-Weighted Behavioral Cloning ‣ 3 Methodology ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations")) implies that our approach is also a variant of the generalized BC objective, but with a different form of weights. Unlike Offline Reinforced Imitation Learning (Zolna et al., [2020a](#bib.bib156 "Offline learning from demonstrations and unlabeled experience")), which uses the discriminator as the reward and learns a value function as the weight, our approach uses the discriminator outputs directly as the weight. 
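Read as code, the resulting policy update (Eq. (5)) with the discriminator-based weights of Eq. (6) might look like the following minimal sketch; this is an illustration under assumed interfaces and placeholder hyperparameter values, not the released implementation (the pseudocode is in Appendix A).

```python
import torch

def dwbc_policy_loss(policy, d_net, expert_batch, offline_batch, alpha=7.5, eta=0.5):
    """Policy objective of Eq. (5): behavioral cloning with the Eq. (6) weights.

    `policy(s)` is assumed to return a torch.distributions.Distribution and
    `d_net(s, a, log_pi)` a probability; alpha and eta are placeholder values.
    """
    def weighted_nll(batch, weight_fn):
        s, a = batch
        log_pi = policy(s).log_prob(a)              # differentiable w.r.t. the policy
        with torch.no_grad():                       # the weights are treated as constants
            d = d_net(s, a, log_pi.unsqueeze(-1)).squeeze(-1)
            d = d.clamp(0.1, 0.9)                   # clipping mentioned below Eq. (6)
        return -(log_pi * weight_fn(d)).mean()

    loss_expert = weighted_nll(expert_batch, lambda d: alpha - eta / (d * (1 - d)))
    loss_offline = weighted_nll(offline_batch, lambda d: 1.0 / (1 - d))
    return loss_expert + loss_offline
```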
Using the discriminator outputs directly in this way can greatly reduce the training time and avoid the overestimation issue that arises when estimating the value function offline (Kumar et al., [2019](#bib.bib66 "Stabilizing off-policy q-learning via bootstrapping error reduction")). We term our algorithm Discriminator-Weighted Behavioral Cloning (DWBC). The pseudocode and implementation details of our algorithm are included in Appendix [A](#A1 "Appendix A Training procedure details ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations"). 4 Related Work --------------- ### 4.1 Offline Imitation Learning Offline IL, which has not yet received much attention, is a promising area because it makes IL practical when critical safety desiderata must be satisfied. Offline IL methods can be folded into two paradigms: Behavioral Cloning (BC) and Offline Inverse Reinforcement Learning (Offline IRL). BC (Pomerleau, [1989](#bib.bib112 "Alvinn: an autonomous land vehicle in a neural network")) is the simplest IL method that can be used in the offline setting; it treats the policy as a conditional distribution π(⋅|s) over actions. Recent work (Florence et al., [2022](#bib.bib33 "Implicit behavioral cloning")) enhances BC by using energy-based models (LeCun et al., [2006](#bib.bib74 "A tutorial on energy-based learning")). BC has been shown to perform no worse than popular IL algorithms such as GAIL (Ho and Ermon, [2016](#bib.bib51 "Generative adversarial imitation learning")) when clean expert demonstrations are available (Ma, [2020](#bib.bib88 "From adversarial imitation learning to robust batch imitation learning")). Unlike BC, offline IRL considers matching the state-action distribution induced by the expert policy; this can be achieved implicitly by adversarial training or explicitly by learning a reward function. Offline IRL algorithms based on adversarial training (Kostrikov et al., [2020](#bib.bib64 "Imitation learning via off-policy distribution matching"); Jarrett et al., [2020](#bib.bib56 "Strictly batch imitation learning by energy-based distribution matching"); Swamy et al., [2021](#bib.bib130 "Of moments and matching: trade-offs and treatments in imitation learning"); Garg et al., [2021](#bib.bib159 "IQ-learn: inverse soft-q learning for imitation")) use Integral Probability Metrics (IPMs) (Sriperumbudur et al., [2009](#bib.bib127 "On integral probability metrics, ϕ-divergences and binary classification")) as a distance measure to solve the dual problem. They introduce a discriminator (or critic) and aim to find the saddle point of a min-max optimization problem, like GAN (Goodfellow et al., [2014](#bib.bib44 "Generative adversarial nets")). The common problem of these works is that they imitate all demonstrations equally, which hinders performance if Do contains suboptimal data. To solve this, BCND reuses another policy learned by BC as the weight of the original BC objective (Sasaki and Yamashina, [2021](#bib.bib119 "Behavioral cloning from noisy demonstrations")); however, this requires that expert data occupy the majority of the offline dataset, otherwise the policy will be misguided into imitating the suboptimal data. ORIL first constructs a reward function that discriminates expert and exploratory trajectories, then uses it to solve an offline RL problem (Zolna et al., [2020a](#bib.bib156 "Offline learning from demonstrations and unlabeled experience")). 
Instead of the adversarial learning scheme, the reward function can also be learned by cascading two supervised learning steps (Klein et al., [2013](#bib.bib62 "A cascaded supervised learning approach to inverse reinforcement learning")). However, offline IRL based on reward learning is expensive to run, requiring solving offline RL in an inner loop, which is itself a challenging problem, prone to training instability and hyperparameter sensitivity (Wu et al., [2019a](#bib.bib144 "Behavior regularized offline reinforcement learning")). Our algorithm can be seen as a combination of these two algorithms in that we train a discriminator to distinguish expert and non-expert data and use the output of the discriminator as the weight of the generalized BC objective, so as to imitate demonstrations selectively. One recent work (Chang et al., [2021](#bib.bib18 "Mitigating covariate shift in imitation learning via offline data without great coverage")) performs offline IL by adopting techniques from pessimistic model-based offline policy learning (Yu et al., [2020](#bib.bib150 "MOPO: model-based offline policy optimization"), [2021](#bib.bib151 "Combo: conservative offline model-based policy optimization")); our work does not need to train a dynamics model or perform the expensive min-max model-based policy optimization. Another recent work, DemoDICE (Kim et al., [2022](#bib.bib158 "DemoDICE: offline imitation learning with supplementary imperfect demonstrations")), performs offline IL with a weighted KL constraint to regularize the learned policy to stay close to both De and Do; it could be highly suboptimal when Do contains a large collection of noisy data. ### 4.2 Offline Reinforcement Learning One research area highly related to offline IL is offline RL (Lange et al., [2012](#bib.bib70 "Batch reinforcement learning"); Levine et al., [2020](#bib.bib79 "Offline reinforcement learning: tutorial, review, and perspectives on open problems")), which considers performing effective RL using arbitrary, static offline datasets, without any further environment interactions. Note that in offline RL, the training dataset is allowed to contain non-optimal trajectories, and the reward for each state-action-next-state transition triple is known. Our algorithm draws a connection to a branch of methods in the offline RL literature that perform "filtered" behavioral cloning explicitly or implicitly. More specifically, these methods estimate an advantage function, which represents the change in expected return when taking action a instead of following the current policy, and perform weighted regression based on the advantage function, defined as Lπ = E_{(s,a)∼Db}[ −log π(a|s) ⋅ f(Aπ(s,a)) ]. The advantage Aπ can be estimated by Monte-Carlo methods (Schulman et al., [2017](#bib.bib122 "Proximal policy optimization algorithms"); Peng et al., [2019](#bib.bib109 "Advantage-weighted regression: simple and scalable off-policy reinforcement learning")) or Q-value based methods (Schulman et al., [2015](#bib.bib121 "Trust region policy optimization"); Nair et al., [2020](#bib.bib102 "Accelerating online reinforcement learning with offline datasets")). The filter function f can be a binary filter (Wang et al., [2020](#bib.bib140 "Critic regularized regression")) or an exponential filter (Peng et al., [2019](#bib.bib109 "Advantage-weighted regression: simple and scalable off-policy reinforcement learning"); Nair et al., [2020](#bib.bib102 "Accelerating online reinforcement learning with offline datasets"))). 
Chen et al. ([2021](#bib.bib19 "Decision transformer: reinforcement learning via sequence modeling")) and Janner et al. ([2021](#bib.bib53 "Reinforcement learning as one big sequence modeling problem")) perform filtered behavioral cloning more implicitly: they cast offline RL as a sequence modeling problem and use the Transformer architecture (Vaswani et al., [2017](#bib.bib136 "Attention is all you need")) to perform credit assignment directly via the self-attention mechanism. Owing to the power of the Transformer in capturing long-term dependencies across timesteps, these methods discard low-quality transitions and conduct behavior cloning only on high-reward transitions. 5 Experiments -------------- We present empirical evaluations of DWBC in a variety of settings. We start by describing our experimental setup, datasets, and baselines. Then we evaluate DWBC against other baselines on a range of robotic locomotion tasks with different types of datasets. Finally, we analyze a property of the discriminator, namely that it can be used to do offline policy selection (Fu et al., [2021](#bib.bib34 "Benchmarks for deep off-policy evaluation")), owing to the inclusion of logπ as input. ### 5.1 Settings We construct experiments on both the widely-used D4RL MuJoCo datasets (Fu et al., [2020](#bib.bib35 "D4rl: datasets for deep data-driven reinforcement learning")) and the more complex Adroit human datasets (Rajeswaran et al., [2017](#bib.bib113 "Learning complex dexterous manipulation with deep reinforcement learning and demonstrations")). To verify the effectiveness of our method, we use three settings to generate De and Do. Note that we use the ground truth reward only to perform the data split step and discard the reward information afterward. * In Setting 1, we use expert and random datasets in Mujoco environments. We sample 10 trajectories from the expert datasets and 1000 trajectories from the random datasets. We take the first X% of those 10 expert trajectories and combine them with the 1000 random trajectories to constitute Do; we use the remaining (100−X)% of the trajectories to constitute De (see the sketch after this list). * In Setting 2, we use mixed datasets in Mujoco environments. We sort all trajectories from high to low based on the total reward summed over the entire trajectory. We define a trajectory as well-performing if it is among the top 5% of all trajectories. We then sample every Xth trajectory from the well-performing trajectories to constitute De and use the remaining trajectories in the dataset to constitute Do. Note that as X becomes larger, Do will contain a larger proportion of well-performing data. This setting can verify whether an algorithm effectively leverages Do, as Do contains data of multiple quality levels, not only the expert data of Setting 1. * In Setting 3, we use expert and cloned datasets in Adroit environments. (We use the cloned datasets as the low-quality datasets because no random datasets exist in the D4RL Adroit environments, and performing BC on the cloned datasets also yields a near-zero normalized score.) We sample 100 trajectories from the expert datasets and 1000 trajectories from the cloned datasets. We use the same procedure to constitute De and Do as in Setting 1. We list all datasets used in this paper and the number of trajectories and transitions in De and Do in Appendix [C](#A3 "Appendix C Additional results ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations"); the value of X is appended to the dataset name. 
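As a concrete illustration of the Setting 1 split (purely a sketch of the procedure described above, with X given in percent; it is not the actual data pipeline):

```python
def split_setting_1(expert_trajs, random_trajs, x_percent):
    """Setting 1: the first X% of the expert trajectories are mixed into D_o
    together with all random trajectories; the remaining (100 - X)% form D_e."""
    n_leak = int(len(expert_trajs) * x_percent / 100)
    d_o = list(expert_trajs[:n_leak]) + list(random_trajs)
    d_e = list(expert_trajs[n_leak:])
    return d_e, d_o

# e.g. hopper_exp-rand-30: 10 expert trajectories, 1000 random ones, X = 30
# d_e, d_o = split_setting_1(expert_trajectories, random_trajectories, x_percent=30)
```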
| Setting | Dataset | BC-exp | BC-all | BCND | ORIL | DemoDICE | DWBC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Setting 1 | hopper\_exp-rand-30 | 74.8±11.6 | 3.1±3.1 | 2.3±0.3 | 70.7±2.5 | 42.4±6.1 | 87.2±12.3 |
| | halfcheetah\_exp-rand-30 | 24.8±5.1 | 2.2±0.0 | 2.2±0.2 | 9.8±3.6 | 2.2±0.0 | 43.9±7.2 |
| | walker2d\_exp-rand-30 | 93.9±16.3 | 1.8±2.6 | 0.4±0.0 | 2.4±1.9 | 105.0±1.7 | 106.3±3.2 |
| | ant\_exp-rand-30 | 72.7±15.7 | 34.2±3.5 | 17.0±1.2 | 64.1±21.5 | 62.2±5.5 | 91.3±11.3 |
| | hopper\_exp-rand-60 | 75.9±16.3 | 2.6±0.6 | 2.2±0.1 | 8.0±10.4 | 57.3±6.7 | 87.4±11.2 |
| | halfcheetah\_exp-rand-60 | 6.1±1.5 | 2.2±0.0 | 2.2±0.1 | 3.6±1.5 | 2.2±0.0 | 21.3±7.1 |
| | walker2d\_exp-rand-60 | 87.6±10.8 | 1.7±1.7 | -0.2±0.0 | 5.5±1.1 | 105.8±2.6 | 105.8±3.3 |
| | ant\_exp-rand-60 | 63.1±13.0 | 36.7±7.8 | 24.1±0.8 | 69.4±12.1 | 59.7±8.2 | 83.4±7.2 |
| | hopper\_exp-rand-90 | 24.2±14.7 | 2.3±6.5 | 2.4±1.1 | 59.5±19.7 | 38.4±15.8 | 86.7±15.1 |
| | halfcheetah\_exp-rand-90 | 1.3±2.1 | 2.2±1.2 | 2.2±0.0 | 2.2±1.3 | 2.2±0.0 | 3.4±2.4 |
| | walker2d\_exp-rand-90 | 45.8±12.4 | 0.4±7.2 | 0.4±0.3 | 0.0±17.8 | 59.6±17.2 | 90.1±22.0 |
| | ant\_exp-rand-90 | 11.1±8.7 | 31.7±6.8 | 27.4±1.8 | 31.8±9.4 | -63.6±0.0 | 25.7±10.3 |
| Setting 2 | hopper\_mixed-2 | 41.4±8.7 | 35.0±5.1 | 2.2±1.0 | 69.8±5.2 | 1.5±0.1 | 73.0±6.6 |
| | halfcheetah\_mixed-2 | 31.9±1.8 | 36.2±0.6 | 2.2±0.8 | 36.2±4.2 | 2.2±0.3 | 38.9±1.5 |
| | walker2d\_mixed-2 | 53.2±16.3 | 32.6±2.1 | 0.3±0.0 | 65.3±6.7 | 0.2±0.0 | 59.8±7.5 |
| | ant\_mixed-2 | 70.4±5.5 | 76.1±3.5 | 29.3±2.2 | 76.6±3.1 | 30.3±2.1 | 79.1±2.5 |
| | hopper\_mixed-5 | 68.4±3.1 | 21.8±2.0 | 2.2±0.9 | 59.0±5.0 | 1.5±0.2 | 66.4±11.9 |
| | halfcheetah\_mixed-5 | 25.2±1.5 | 36.8±1.2 | 2.2±0.7 | 31.3±1.9 | 2.2±0.4 | 29.0±2.7 |
| | walker2d\_mixed-5 | 48.9±3.3 | 45.7±2.0 | 0.3±0.0 | 61.3±5.8 | 0.2±0.0 | 50.7±8.2 |
| | ant\_mixed-5 | 64.3±6.7 | 69.4±3.5 | 29.3±2.3 | 75.4±1.9 | 30.4±2.1 | 77.0±2.9 |
| | hopper\_mixed-10 | 55.2±12.3 | 16.3±6.0 | 2.2±0.2 | 59.1±4.2 | 1.5±0.2 | 61.0±5.8 |
| | halfcheetah\_mixed-10 | 14.9±0.9 | 33.0±0.6 | 2.2±0.6 | 26.1±1.9 | 2.2±0.6 | 27.8±2.6 |
| | walker2d\_mixed-10 | 25.9±0.3 | 43.9±5.3 | 0.3±0.0 | 56.4±2.4 | 0.2±0.0 | 49.5±6.9 |
| | ant\_mixed-10 | 53.4±3.8 | 68.3±4.7 | 29.3±3.1 | 75.0±2.1 | 30.3±1.7 | 70.5±4.2 |
| Setting 3 | pen\_exp-cloned-30 | 99.5±22.8 | 13.2±15.5 | -0.3±0.0 | 43.0±20.9 | 65.5±9.0 | 100.9±10.4 |
| | door\_exp-cloned-30 | 31.0±26.9 | 0.1±0.0 | -0.1±0.0 | 0.0±0.0 | 0.0±0.0 | 5.2±3.9 |
| | hammer\_exp-cloned-30 | 83.8±25.2 | 0.2±0.0 | 0.2±0.0 | 19.9±27.8 | 4.5±7.4 | 89.4±23.6 |
| | relocate\_exp-cloned-30 | 59.8±6.4 | 0.0±0.0 | -0.1±0.0 | 9.8±10.7 | 2.9±1.6 | 62.4±5.2 |
| | pen\_exp-cloned-60 | 90.7±21.9 | 9.4±7.8 | -0.3±0.0 | 18.4±12.1 | 71.1±10.8 | 108.2±9.4 |
| | door\_exp-cloned-60 | 5.9±7.5 | 0.0±0.0 | -0.1±0.0 | -0.1±0.0 | 0.0±0.1 | 7.5±6.5 |
| | hammer\_exp-cloned-60 | 62.3±18.1 | 0.2±0.0 | 0.2±0.1 | 43.4±5.6 | 0.8±0.8 | 92.9±19.1 |
| | relocate\_exp-cloned-60 | 34.0±7.4 | 0.0±0.0 | -0.1±0.0 | 2.3±0.8 | 2.1±2.0 | 56.3±14.1 |
| | pen\_exp-cloned-90 | 40.5±8.6 | 13.2±4.5 | 0.4±0.4 | 2.9±0.6 | 32.4±7.2 | 93.4±10.4 |
| | door\_exp-cloned-90 | 0.3±0.2 | -0.1±0.4 | -0.1±0.1 | -0.1±0.0 | -0.1±0.0 | 1.8±2.8 |
| | hammer\_exp-cloned-90 | 18.5±7.4 | 0.2±0.0 | 0.2±0.0 | 0.2±0.0 | 0.2±0.1 | 14.2±8.4 |
| | relocate\_exp-cloned-90 | -0.1±0.3 | 0.0±0.1 | -0.1±0.01 | -0.1±0.0 | 0.0±0.0 | 1.5±0.8 |

Table 1: Results for Mujoco and Adroit datasets. Scores are undiscounted average returns of the policy at the last 10 evaluations of training, averaged over 5 random seeds.
All values are normalized to lie between 0 and 100, where 0 corresponds to a random policy and 100 corresponds to an expert (Fu et al., [2020](#bib.bib35 "D4rl: datasets for deep data-driven reinforcement learning")). We bold the highest value. ### 5.2 Baseline and ablated algorithms We compare DWBC with the following baseline algorithms: BC-exp:  Behavioral cloning on the expert data De. De has higher-quality data but in smaller quantity, which causes serious compounding-error problems for the resulting policy. BC-all:  Behavioral cloning on all data Do. BC-all can generalize better than BC-exp due to access to a much larger dataset, but its performance may be negatively impacted by the low-quality data in Do. BCND:  BCND is trained on all data; it reuses another policy learned by BC as the weight of the BC objective, and its performance degrades if suboptimal data occupies the major part of the offline dataset. ORIL:  ORIL learns a reward function and uses it to solve an offline RL problem. It suffers from large computational costs and the difficulty of performing offline RL under distributional shift. DemoDICE:  The learning objective of DemoDICE contains two KL constraints: DKL(dπ∥de) to imitate the expert and αDKL(dπ∥do) to provide proper policy regularization. ### 5.3 Comparative Evaluations We show the comparative results in Table [1](#S5.T1 "Table 1 ‣ 5.1 Settings ‣ 5 Experiments ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations") and include the learning curves in Appendix [C](#A3 "Appendix C Additional results ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations"). Table [1](#S5.T1 "Table 1 ‣ 5.1 Settings ‣ 5 Experiments ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations") shows that DWBC outperforms the baseline algorithms on most tasks (27 out of 36 tasks), especially on "expert+random" datasets (Settings 1 & 3, 21 out of 24 tasks), showing that DWBC is well suited to make effective use of the expert dataset De and the mixed-quality dataset Do. As expected, the performance of BC-exp declines as X becomes larger. This is because a larger X means fewer well-performing transitions remain in De. In some datasets (e.g., ant\_exp-rand-90 and most in Setting 2), there is no clear winner between BC-exp and BC-all, which suggests that the quality of Do varies across the considered tasks. BCND performs poorly compared to other methods due to the majority of low-quality data in Do. It usually scores below BC-all. ORIL performs well in Setting 2 but struggles to learn in Settings 1 & 3, which implies that offline RL cannot learn well from low-quality datasets. We also find that the performance of ORIL tends to decrease in some tasks; this "overfitting" phenomenon also occurs in the experiments of offline RL papers (Wu et al., [2019a](#bib.bib144 "Behavior regularized offline reinforcement learning"); Kumar et al., [2019](#bib.bib66 "Stabilizing off-policy q-learning via bootstrapping error reduction")). This is perhaps due to limited data size and a model generalization bottleneck (Neyshabur, [2017](#bib.bib103 "Implicit regularization in deep learning")). DemoDICE performs worse in Setting 2 and on the -90 datasets in Settings 1 & 3; this is because the KL regularization in DemoDICE that keeps the learned policy close to Do is too conservative, resulting in a suboptimal policy, especially when Do contains a large collection of noisy data. 
Figure 2: Additional experiment on offline policy selection by the discriminator learned by DWBC. ### 5.4 Additional Experiments Offline policy selection by the discriminator.  Offline policy selection (OPS) (Paine et al., [2020](#bib.bib108 "Hyperparameter selection for offline reinforcement learning"); Yang et al., [2020](#bib.bib149 "Offline policy selection under uncertainty"); Dereventsov et al., [2021](#bib.bib28 "Offline policy comparison under limited historical agent-environment interactions")) considers the problem of choosing the best policy from a set of policies given only offline data. This problem is critical in offline settings (i.e., offline RL and offline IL) because online execution is often costly and safety-critical: deploying a problematic policy may damage real-world systems (Tang and Wiens, [2021](#bib.bib131 "Model selection for offline reinforcement learning: practical considerations for healthcare settings")). Note that existing offline RL/IL methods break the offline assumption by evaluating different policies according to their returns from online environment interactions. However, this online evaluation is often infeasible and hence undermines the initial assumption of offline RL/IL. We find that, as a by-product, involving logπ in the discriminator d in DWBC brings an appealing characteristic, i.e., value generalization among policies. More specifically, d values of known policies can generalize to unknown policies: we can use expert state-action pairs from De together with a different policy π as input. The discriminator will assign large values (close to 1) when the evaluated policy is close to the expert policy learned by DWBC, which also means that the evaluated policy is close to optimal. To validate this idea, we conduct experiments in the Hopper, Walker2d, and Pen environments. In Hopper and Walker2d, we use the mixed-2, mixed-5, and mixed-10 datasets; in Pen, we use the mixed-2, mixed-3, and mixed-5 datasets. We compare three algorithms (BC-exp, ORIL, and DWBC) trained on these datasets, for a total of 9 policies in each environment. We first train DWBC, then use the learned discriminator d along with De to compute the value d(s,a,logπi(a|s)) of each policy πi. We plot the average d(s,a,logπi(a|s)) versus the policy’s true return in Figure [2](#S5.F2 "Figure 2 ‣ 5.3 Comparative Evaluations ‣ 5 Experiments ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations"). As shown, the d values reflect the ranking between almost every pair of policies. This means that we can first train a DWBC policy and then use the trained discriminator d to do OPS, i.e., select the best policy among given candidate policies, without executing them in the environment to obtain the actual returns. ![Run time comparison of each offline IL algorithm.](https://media.arxiv-vanity.com/render-output/7075462/x5.png) Figure 3: Run time comparison of each offline IL algorithm. Comparison of run time.  We also evaluate the run time of training DWBC and the other baseline algorithms for 500,000 training steps (not including evaluation time). All run time experiments were executed on NVIDIA V100 GPUs.
For a fair comparison, we use the same policy network size in BC, BCND, ORIL, DemoDICE, and DWBC. The discriminator network size is also kept the same in ORIL and DWBC. The results are reported in Figure [3](#S5.F3 "Figure 3 ‣ 5.4 Additional Experiments ‣ 5 Experiments ‣ Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations"). Unsurprisingly, we find that the run time of our approach is only slightly more than that of BC, while other baselines (ORIL, BCND) are over 7 times more costly than BC. ORIL is costly to run because of the additional effort of solving an offline RL problem. The high computational cost of BCND is due to its inner iterations of training an ensemble of K policies (K=5 in our experiment), which is also mentioned in their paper (Sasaki and Yamashina, [2021](#bib.bib119 "Behavioral cloning from noisy demonstrations")). This demonstrates the efficiency of DWBC: it adds only a limited cost to the original BC algorithm while providing substantially improved performance. 6 Conclusion and Future Work ----------------------------- In this paper, we propose an effective and lightweight offline imitation learning algorithm that can learn from suboptimal demonstrations without environment interactions or expert annotations. Experimental results show that our algorithm achieves higher returns and faster training speed compared to baseline algorithms under different scenarios. One direction for future work is to derive new algorithms for online IL based on our proposed cooperation framework, as recent studies (Wang et al., [2021](#bib.bib142 "Learning to weight imperfect demonstrations"); Eysenbach et al., [2021](#bib.bib31 "Replacing rewards with examples: example-based policy search via recursive classification")) also reveal the importance of weighting imperfect expert demonstrations in the online IL setting. Another is to modify the main task from action matching to state-action distribution matching, which is known to be more robust to distributional shift (Kostrikov et al., [2020](#bib.bib64 "Imitation learning via off-policy distribution matching")). Acknowledgements ---------------- A preliminary version of this work was accepted at the Deep RL Workshop at NeurIPS 2021. We thank the anonymous reviewers for feedback on previous versions of this paper. This work is also supported by gifts from Haomo.AI.
The Thingness of Things [Metadata: crossposted from https://tsvibt.blogspot.com/2022/08/the-thingness-of-things.html. I'm fairly likely to not respond to comments promptly. If you're especially interested in chatting, my gmail address is: tsvibtcontact ] What's a thing, in general? Minds deal with things, so this question comes up in trying to understand minds. Minds think about things, speak of things, manipulate things, care about things, create things, and maybe are made of things. Things Examples of things: table, elephant, carbon atom. France, Martin Luther King. Insertion sort. Chess. Unicorn. Learning. Seven. Towel, strand, wing, crystal, finger, space, diffraction. The laws of electromagnetism. The first World War. Non-things What about non-things? It might be hard to list non-things because what we have words for, tend to be things. Redness seems like sort of a thing, but less so. Laws of physics also. Also ghosts. One might say "the ideal gas law is totally a thing" or "ghosts aren't a thing", though I think ghosts are a thing. Events can be things; WWI seems like a thing to me. But a minute ago I picked up my bottle of water and drank from it; that's clearly an event, a real one, but it doesn't feel that much like a thing. The abstract [drinking from a water bottle] feels like a thing though. (There are some usual critiques of thingness. Yes, there's no sharp dividing line between a wave and a trough, but clearly waves are a thing. Yes, seven is not a physical object you'll ever bump into, but it's clearly a thing. The ideal gas law isn't localized in space or time, but it's a thing. Unicorns aren't real, in that you'll bump into people speaking about them but you won't bump into unicorns themselves and won't be constrained by unicorns in the way you're constrained by seven, but they're things. The world of Ender's Game isn't real, though it's a remote possibility, and it's a thing.) Features of things * Coherence. A thing usually has properties, parts, aspects that a
Meetup : Austin Meetup Discussion article for the meetup : Austin Meetup WHEN: 07 November 2020 02:30:00PM (-0500) WHERE: 4001 N Lamar Blvd, Austin, TX 78756 Actually meets every Saturday. https://www.google.com/maps/place/Central+Market,+4001+N+Lamar+Blvd,+Austin,+TX+78756/ Discussion article for the meetup : Austin Meetup
Scrying for outcomes where the problem of deepfakes has been solved (Prompted by the post: On Media Synthesis: An Essay on The Next 15 Years of Creative Automation, where Yuli comments "Deepfakes exist as the tip of the warhead that will end our trust-based society") There are answers to the problem of deepfakes. I thought of one, very soon after first hearing about the problem. I later found that David Brin spoke of the same thing 20 years ago in The Transparent Society. The idea seems not to have surfaced or propagated at all in any of the deepfake discourse, and I find that a little bit disturbing. There is a cartoon Robin Hanson that sits on my shoulder who's wryly whispering "Fearmongering is not about preparation" and "News is not about informing". I hope it isn't true. Anyway. In short, if we want to stay sane, we will start building cameras with tamperproof seals that sign the data they produce with a manufacturer's RSA signature to verify that the footage comes directly from a real camera, and we will require all news providers to provide a checked (for artifacts of doctoring and generation), verified, signed (unedited) online copy of any footage they air. If we want to be extra thorough (and we should), we will also allocate public funding to the production of disturbing, surreal, inflammatory, but socially mostly harmless deepfakes to exercise the public's epistemic immune system, ensuring that they remain vigilant enough to check the national library of evidence for signed raws before acting on any new interesting video. I'm sure you'll find many talented directors who'd jump at the chance to produce these vaccinating works, and I think the tradition will find plenty of popular support, if properly implemented. The works could be great entertainment, as will the ensuing identification of dangerously credulous fools. Technical thoughts about those sealed cameras The camera's seal should be fragile. When it's broken (~ when there is any slight shift in gas pressure or membrane conductivity, when the components move, w
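To make the signing scheme concrete, here is a minimal sketch (an illustration added here, not from the post; it assumes the Python `cryptography` package and leaves out key provisioning, secure hardware and the physical seal) of how a camera might sign raw footage and how anyone could later verify it against the manufacturer's public key:

```python
# A minimal sketch (not from the post) of the sign-and-verify flow for sealed
# cameras, using RSA-PSS from the Python `cryptography` package. In reality the
# private key would live inside the camera's sealed hardware; here it is
# generated on the fly purely for illustration.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

camera_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
manufacturer_pubkey = camera_key.public_key()

def sign_footage(raw_bytes: bytes) -> bytes:
    """Camera-side: sign the raw footage as it is captured."""
    return camera_key.sign(raw_bytes, PSS, hashes.SHA256())

def verify_footage(raw_bytes: bytes, signature: bytes) -> bool:
    """Verifier-side: check the footage against the manufacturer's public key."""
    try:
        manufacturer_pubkey.verify(signature, raw_bytes, PSS, hashes.SHA256())
        return True
    except Exception:  # an invalid signature means "don't trust this footage"
        return False

footage = b"...raw sensor data..."
sig = sign_footage(footage)
print(verify_footage(footage, sig))         # True: untouched footage verifies
print(verify_footage(footage + b"!", sig))  # False: any edit breaks the signature
```

A registry of manufacturer public keys, plus the requirement that news outlets publish signed raws, would then let anyone check a clip's provenance.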
Catastrophe Engines: A possible resolution to the Fermi Paradox

The Fermi Paradox leads us to conclude that either A) intelligent life is extremely improbable, B) intelligent life very rarely grows to a higher-level civilization, or C) higher-level civilizations are common, but are not easy to spot.  But each of these explanations is hard to believe.  It is hard to believe that intelligent life is rare, given that hominids evolved intelligence so quickly.  It is hard to believe that intelligence is inherently self-destructive, since as soon as an intelligent species gains the ability to colonize distant planets, it becomes increasingly unlikely that the entire species could be wiped out; meanwhile, it appears that our own species is on the verge of attaining this potential.  It is hard to believe C, since natural selection favors expansionism, so if even a tiny fraction of higher-level civilizations value expansion, then that civilization becomes extremely visible to observers due to its exponential rate of expansion.  Not to mention that our own system should have already been colonized by now.

Here I present a new explanation of why higher-level civilizations might be common, and yet still undetected.  The key assumption is the existence of a type of Matrioshka brain which I call a "Catastrophe Engine."  I cannot even speculate on the exotic physics which might give rise to such a design.  However, the defining characteristics of a Catastrophe Engine are as follows:

1. The Catastrophe Engine is orders of magnitude more computationally powerful than any Matrioshka Brain possible by conventional physics.
2. The Catastrophe Engine has a fixed probability 1 - e^(-λt) of "meltdown" in any interval of t seconds.  In other words, the lifetime of a Catastrophe Engine is an exponentially distributed random variable with a mean lifetime of 1/λ seconds.
3. When the Catastrophe Engine suffers a meltdown, it has a destructive effect of radius r, which, among other things, results in the destruction of all other Catastrophe Engines wi
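For readers who want to check the arithmetic in premise 2, here is a small simulation (an illustration added here, not part of the original post; the rate λ and window t are arbitrary example values) confirming that an exponential lifetime with mean 1/λ gives a meltdown probability of 1 - e^(-λt) in any window of length t, regardless of how long the engine has already survived:

```python
# A quick numerical check of premise 2: if lifetimes are Exponential(rate=lam),
# the chance of a meltdown inside any window of length t is 1 - exp(-lam * t),
# no matter how long the engine has already survived (memorylessness).
import numpy as np

rng = np.random.default_rng(0)
lam = 1e-6      # assumed meltdown rate per second (hypothetical value)
t = 1e5         # window length in seconds
s = 5e5         # amount of time already survived, for the conditional check

lifetimes = rng.exponential(scale=1.0 / lam, size=1_000_000)

print("simulated P(meltdown within t):", np.mean(lifetimes < t))
print("analytic  1 - exp(-lam * t):   ", 1 - np.exp(-lam * t))

# Memorylessness: conditioning on having already survived s seconds changes nothing.
survivors = lifetimes[lifetimes > s]
print("P(meltdown in next t | survived s):", np.mean(survivors < s + t))
```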
Robot ethics returns Cross posted from Overcoming Bias. Comments there. *** People are often interested in robot ethics. I have argued before that this is strange. I offered two potential explanations: 1. Ethics seems deep and human, so it’s engagingly eerie to combine it with heartless AI 2. People vastly misjudge how much ethics contributes to the total value society creates A more obvious explanation now: people are just more interested in ethics when the subject is far away, for instance in the future. This is the prediction of construal level theory. It says thinking about something far away makes you think more abstractly, and in terms of goals and ideals rather than low level constraints. Ethics is all this. So a further prediction would be that when we come to use robots a lot, expertise from robot ethicists will be in as little demand as expertise from washing machine ethicists is now. Some other predictions, to help check this theory: * Emerging or imagined technologies should arouse ethical feelings more than present technologies do in general * International trade should prompt more ethical feelings than local trade * Stories of old should be more moralizing than stories of now * Historical figures should be seen in a more moral light than present-day celebrities * Space travel should be discussed in terms of more moral goals than Earth travel. * Ethical features of obscure cultures should be relatively salient compared to familiar cultures More? Which of these are actually true? There is definitely some conflicting evidence, for instance people feel more compelled to help people in front of them than those in Africa (there was an old OB post on this, but I can’t find it). There are also many other reasons the predictions above may be true. Emerging technologies might prompt more ethical concerns because they are potentially more dangerous for instance. The ethical dimension to killing everyone is naturally prominent. Overall construal level theory still see
Acausal trade: different utilities, different trades
*A putative new idea for AI control; index [here](https://agentfoundations.org/item?id=601)*. Other posts in the series: [Introduction](https://agentfoundations.org/item?id=1465), [Double decrease](https://agentfoundations.org/item?id=1463), [Pre-existence deals](https://agentfoundations.org/item?id=1471), [Full decision algorithms](https://agentfoundations.org/item?id=1466), [Breaking acausal trade](https://agentfoundations.org/item?id=1480), [Trade in different types of utility functions](https://agentfoundations.org/item?id=1464), [Being unusual](https://agentfoundations.org/item?id=1404), and [Summary](https://agentfoundations.org/item?id=1482).

In a [previous post](https://agentfoundations.org/item?id=1463), the marginal gain of one utility in terms of another was given by a crisp formula. This is obviously unrealistic, but how unrealistic is it? This post will look at various features we might expect from utility functions and acausally trading agents, and what makes them easy or difficult to trade with.

Independence
============

The ideal trade is something like agent A1 likes staples, agent A2 likes green objects, and they both decide to make green staples.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/5bd75cc58225bf0670375415/qc93cspks9tfnsimdnsc)

Here the two utilities are almost independent: increasing one costs almost nothing from the perspective of the other. In general, independence (which could also be called low trade-offs) is very desirable between different utilities. This is not a transitive property: staples and green colour can be pretty independent, staples and blue colour as well, but obviously green and blue are in direct tension. It's plausible that there will exist whole families of mutually independent utilities, coming together as a package that can get maximised together or separately.

Popularity
==========

An obvious point, but utilities that are very popular - very widely shared by different agents - are more likely to be traded with, since there are more agents willing to trade in return. This willingness is non-linear in "[double decrease](https://agentfoundations.org/item?id=1463)" situations, for instance those that do not derive from some sort of timeless, [pre-existence deal](https://agentfoundations.org/item?id=1471).

Complexity and coordination
===========================

Then there's the complexity of the preference. Simple utilities, or those whose existence is simple to deduce, are much easier to figure out and hence maximise. This is partially a question of popularity, but it's also an artefact of the fact that agents may have different priors (or, even if they have the same priors, different posteriors after updating on the agent's own existence). In all those situations, simple preferences are likely to come up often.

There's another type of complexity, which is complexity in increasing that utility. Suppose U counts the number of named nodes in a network, where every node knows the name of every other node. This utility is not hard to describe, but is useless for acausal trade, as a second network would not be connected to the first one in terms of knowing the names of the first nodes. So U can easily be maximised by causal trade, but not acausal.

Total, average, diminishing returns
===================================

Total utilitarian-like utilities are particularly easy to trade with: simply create more of the desired object.
Average utilitarian-like utilities are almost un-tradable: extra copies of the desired object can be actively pernicious, and the best is for a single agent to be maximising them. In between, we have utilities that exhibit diminishing returns in the number of desired objects. For instance, a U that counts the number of unique, flourishing humans will discount identical copies, which are likely if the number of humans created gets very large.

Utilities that exhibit diminishing returns also interact differently with uncertainty about the number of agents out there. For a total-style utility, a 1/Ω chance of Ω agents maximising U is the same as one agent maximising U with certainty, while for utilities that exhibit diminishing returns, this can be very different (a small numerical sketch of this point appears below).

Negatives and extortion
=======================

So far, we've been considering utilities that would be independent, given infinite resources: the only trade-offs are those where different utilities want to maximise different things given the same resources. But some utilities are more strongly opposed than that. Blue and green maximisers don't oppose each other directly, but an agent that wants more green is opposed to one that wants less green. A "human flourishing" type utility would be opposed to a utility that values human suffering (or, more realistically, a different definition of flourishing that values some state of affairs that exhibits what we'd call suffering among human-like agents).

A utility has a negative side if the value of that utility can be decreased by someone's actions in a causally disconnected way. The utilities U and V are opposed if each agent would prefer a trading partner maximise nothing rather than maximising some mix aU+bV. If two utilities are opposed, then at least one must have a negative side.

Obviously utilities that don't oppose others will find trading easier. Those that directly oppose others may find that what they are offered in trade is simply that agents don't maximise the opposed utilities. This is also connected with issues of [extortion](http://lesswrong.com/lw/obb/extortion_and_trade_negotiations/). Only utilities with negative sides are vulnerable to extortion (though all utilities and agents are vulnerable to being extorted in the sense of getting a bad trade deal).

Fairness and imperfect trade
============================

Because it's likely that different agents will have [different](https://agentfoundations.org/item?id=1480) [trading algorithms](https://agentfoundations.org/item?id=1466), large acausal trade networks will likely include agents whose concept of fair deals differs from that of the majority of the network. This could be implemented similarly to Scott's idea [here](https://agentfoundations.org/item?id=1469), or possibly by some alien and currently unimaginable fairness procedure. In any case, agents whose conceptions of "fair" are the broadest are likely to be included in the largest trade networks (the cost being, of course, that they are likely to derive less profit per other agent in the network).

Update rules and unlikely agents
================================

Once an agent A1 updates on their own existence, they have a distribution of other likely agents. They can therefore model what happens when other agents update on their own existence. Suppose that A1 predicts that the agents it's most likely to be able to trade with are of type An; but suppose it also predicts that An agents will conclude that A1 agents are rare. Then it will have difficulty trading.
In contrast, if A1 expects that all its favourable trading partners predict its own existence, then it will trade advantageously. This phenomenon is also subject to the usual double decrease.

Multiple acausal trade networks
===============================

Because of all the above, it's possible that agents and utilities could be part of multiple acausal trade networks, joining them (in part) for different reasons. Or maybe, because of the double decrease problem or threats between trade networks, it will turn out to almost always be better to be a member of a single one.
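To make the earlier point about total-style versus diminishing-returns utilities concrete, here is a small numerical sketch (an illustration added here, not from the original post; the square-root utility and the particular numbers are arbitrary choices):

```python
# A 1/Omega chance of Omega agents maximising U is worth the same as one
# certain agent under a total-style (linear) utility, but far less under a
# diminishing-returns utility such as sqrt.
import math

Omega = 1_000_000

def expected_value(u, n_agents, prob):
    """Expected utility of 'n_agents exist with probability prob, else none'."""
    return prob * u(n_agents) + (1 - prob) * u(0)

total = lambda n: n                    # total-utilitarian-like: linear in count
diminishing = lambda n: math.sqrt(n)   # one arbitrary diminishing-returns choice

print(expected_value(total, 1, 1.0), expected_value(total, Omega, 1 / Omega))
# -> 1.0 and 1.0: the gamble and the certainty are worth the same.
print(expected_value(diminishing, 1, 1.0), expected_value(diminishing, Omega, 1 / Omega))
# -> 1.0 and 0.001: the gamble is worth far less under diminishing returns.
```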
Against Desirism
A short poem I found "If there ever was an enemy who stared us directly in the face, it's certainly the computer screen." -- Unknown
Meetup : Ottawa LessWrong Weekly Meetup Discussion article for the meetup : Ottawa LessWrong Weekly Meetup WHEN: 23 June 2011 07:00:00PM (-0400) WHERE: Fox and Feather Pub, 283 Elgin Street, Ottawa, Ontario Fox and Feather Pub, upstairs, possibly in back room #1 if it's free. Look for the LW sign. 7pm EDT, obviously! Discussion article for the meetup : Ottawa LessWrong Weekly Meetup
2 small problems with the site... They're more like inconveniences, but still. 1. I've set my account setting to display "magical algorithm" comments, but when I read posts I still see them in the order of "oldest". I guess that's because of a discrepancy between those drop-down options: in 1 it's "top scored", in another it's "magical". 2. The truncation of comments works, but sometimes I find it actually making them longer. Example: the site displays someone's comment: blah blah blah therefore your argument is i... (Read more) (Click to expand thread. Ctrl F to expand all). I click Read more. Now it is: blah blah blah therefore your argument is invalid. So I guess your algorithm of how to display comments looks something like this (pardon my below expression, I'm not a programmer at all): > If [comment_length] >150 > then display [1st 150 characters of comment] and [...(Read more) (Click to expand thread. Ctrl F to expand all)] This leads to the problem as described above. When a comment is 157 char long, for example, this truncate method actually increases the length displayed AND leaves readers like me with a weird feeling after clicking. Here I propose a solution within my ability of thinking: > If [comment_length] >150 > then display [1st 75 characters of comment] and [...(Read more) (Click to expand thread. Ctrl F to expand all)] So... have those 2 minor things above been mentioned before? If not, I'd like to report. If yes, then are you going to look into them and when can they be changed? :)
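As an illustration of the second problem (added here; the suffix text is approximate and the rule below is an alternative fix, different from the 75-character cutoff proposed above), truncation only helps when the truncated display, suffix included, is shorter than the comment itself:

```python
# Sketch of the truncation issue and one way to avoid it: only truncate when
# doing so genuinely shortens what is displayed.
SUFFIX = "... (Read more) (Click to expand thread. Ctrl F to expand all)"

def naive_truncate(comment: str, limit: int = 150) -> str:
    # The behaviour described in the post: always cut at `limit` characters.
    return comment[:limit] + SUFFIX if len(comment) > limit else comment

def saving_truncate(comment: str, limit: int = 150) -> str:
    # Alternative: truncate only if the result is actually shorter.
    return comment[:limit] + SUFFIX if len(comment) > limit + len(SUFFIX) else comment

comment = "x" * 157  # a 157-character comment, just over the 150 limit
print(len(naive_truncate(comment)))   # > 157: truncation backfires
print(len(saving_truncate(comment)))  # 157: the comment is left alone
```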
[Link] The Hyborian Age

Yay, a new cool post is up on the West Hunter blog! It is written by Gregory Cochran and Henry Harpending, with whom most LWers are probably already familiar (particularly this awesome entry). It raises some interesting points on biases in academia.

> I was contemplating Conan the Barbarian, and remembered the essay that Robert E. Howard wrote about the background of those stories – The Hyborian Age. I think that the flavor of Howard’s pseudo-history is a lot more realistic than the picture of the human past academics preferred over the past few decades.
>
> In Conan’s world, it’s never surprising to find a people that once mixed with some ancient prehuman race. Happens all the time. Until very recently, the vast majority of workers in human genetics and paleontology were sure that this never occurred – and only changed their minds when presented with evidence that was both strong (ancient DNA) and too mathematically sophisticated for them to understand or challenge (D-statistics).
>
> Conan’s history was shaped by the occasional catastrophe. Most academics (particularly geologists) don’t like catastrophes, but they have grudgingly come to admit their importance – things like the Thera and Toba eruptions, or the K/T asteroid strike and the Permo-Triassic crisis.
>
> Between the time when the oceans drank Atlantis, and the rise of the sons of Aryas, evolution seems to have run pretty briskly, but without any pronounced direction. Men devolved into ape-men when the environment pushed in that direction (Flores?) and shifted right back when the environment favored speech and tools. Culture shaped evolution, and evolution shaped culture. An endogamous caste of snake-worshiping priests evolved in a strange direction. Although their IQs were considerably higher than average, they remained surprisingly vulnerable to sword-bearing barbarians.
>
> In this world, evolution could happen on a time scale of thousands of years, and there was no magic rule that en
Stories of Summer Solstice

[Epistemic Status: talking in the language of metaphorical soul and mysticism.]

Previously:

* Visions of Summer Solstice
* Obligatory link to The Value and Danger of Ritual for people just tuning in.

On June 23rd, the Bay community journeyed out to the edge of the world to celebrate the longest day of the year.

> Photo Credit to Shaked Koplewitz

This came on the heels of EA Global, which had a strong focus on action, networking, planning projects and thinking about the future. Much of rationality is about thinking and doing. But once a year, it's nice to be.

We scoped out several possible locations, and ultimately settled on the Marin Headlands – a surreal, remote world of hidden beaches, spooky ruins and epic cliffs. Approximately 100 people came, some in 15-seat caravan vans, some individual carpools.

At solar noon, when the shadows are shortest, we opened with a speech by Malcolm Ocean. "The biggest thing you'll want to be wary of here is fear of missing out. There's going to be a lot of stuff going on today. It is not possible to see it all – just like life. Rather than trying desperately to optimize your time, I recommend just... going with the flow of the moment."

During the day, we had a few major activities:

* Circling
* Capture the Flag with water guns, war paint and a literal fort (although we ended up having to find a second fort because the one we meant to use got sniped by a wedding)
* Group singing
* Exploration in small groups of the various nooks, crannies and cliffscapes of the Headlands.

We didn't end up building a solar-temple, due to various logistical issues (and I think largely because there were lots of competing things to do). But a) I'm fairly confident about getting that done next year, and b) meanwhile... the drum circle at sunset felt deeply good. On the event feedback survey I sent out, the overall ratio of meh/good/great/ecstatic was 2/10/13/3, which has me pretty confident that we got the general shape of the event ri
The Value Definition Problem How to understand non-technical proposals ========================================= This post grew out of conversations at EA Hotel, Blackpool about how to think about the various proposals for ‘solving’ AI Alignment like CEV, iterated amplification and distillation or ambitious value learning. Many of these proposals seemed to me to combine technical and ethical claims, or to differ in the questions they were trying to answer in confusing ways. In this post I try to come up with a systematic way of understanding the goals of different high-level AI safety proposals, based on their answers to the **Value Definition Problem**. Framing this problem leads to comparing various proposals by their level of Normative Directness, as defined by Bostrom in *Superintelligence*. I would like to thank [Linda Linsefors](https://www.lesswrong.com/users/linda-linsefors) and [Grue\_Slinky](https://www.lesswrong.com/users/grue_slinky) for their help refining these ideas, and EA Hotel for giving us the chance to discuss them. Defining the VDP ================ In *Superintelligence* (2014) Chapter 14, Bostrom discusses the question of ‘what we should want a Superintelligence to want’, defining a problem; > “Supposing that we could install any arbitrary value into our AI, what should that value be?” **The Value Definition Problem** By including the clause ‘supposing that we could install any arbitrary value into our AI’, Bostrom is assuming we have solved the full **Value Loading Problem** and can be confident in getting an AGI to pursue any value we like. Bostrom’s definition of this ‘deciding which values to load’ problem is echoed in other writing on this topic. One proposed answer to this question, the Coherent Extrapolated Volition (CEV) is described by Yudkowsky as > [‘a proposal about what a sufficiently advanced self-directed AGI should be built to want/target/decide/do’](https://arbital.com/p/normative_extrapolated_volition/)**.** With the caveat that this is something you should do ‘with an extremely advanced AGI, if you're extremely confident of your ability to align it on complicated targets’. However, if we only accept the above as problems to be solved, we are being problematically vague. Bostrom explains why in Chapter 14. If we really can ‘install any arbitrary value into our AI’, we can simply require the AI to ‘do what I mean’ or ‘be nice’ and leave it at that. If an AGI successfully did “want/target/decide to do what I meant”, then we would have successful value alignment! Answers like this are not even wrong - they shunt all of the difficult work into the question of solving the Value Loading Problem, i.e. in precisely specifying ‘do what I mean’ or ‘be nice’. In order to address these philosophical problems in a way that is still rooted in technical considerations, I propose that instead of simply asking what an AGI should do if we could install *any arbitrary value*, we should seek to solve the Value Definition Problem: > “Given that we are trying to solve the [Intent Alignment problem](https://www.lesswrong.com/posts/ZeE7EKHTFMBs8eMxn/clarifying-ai-alignment) for our AI, what should we aim to get our AI to want/target/decide/do, to have the best chance of a positive outcome?” In other words, instead of the unconditional, ‘what are human values’ or ‘what should the AI be built to want to do’, it is the conditional, ‘What should we be trying to get the AI to do, to have the best chance of a positive outcome’. 
This definition of the VDP excludes excessively vague answers like ‘do what I mean’, because an AI with successful intent alignment is not guaranteed to be capable enough to successfully determine ‘what we mean’ under all circumstances. In extreme cases, like the Value Definition ‘do what I mean’, "what we mean" is undefined because we don't know what we mean, so there is no answer that could be found. If we have solved the VDP, then an Intent-Aligned AI, in the course of trying to act according to the Value Definition, should actually be able to act according to the Value Definition. In acting according to this Value Definition, the outcome would be beneficial to us. Even if a succesfully aligned AGI is nice, does what I mean and/or acts according to Humanity's CEV, these were only good answers to the VDP if adopting them was actually useful or informative in aligning this AGI. What counts as a good solution to the VDP depends on our solution to intent alignment and the AGI’s capabilities, because what we should be wanting the AI to do will depend on what the AGI can discover about what we want. This definition of the VDP does not precisely cleave the technical from the philosophical/ethical issues in solving AI value alignment, but I believe it is well-defined enough to be worth considering. It has the advantage of bringing the ethical and technical AI Safety considerations closer together. A good solution to the VDP would still be an informal definition of value: what we want the AI to pursue. However, it should give us at least some direction about technical design decisions, since we need to ensure that the Intent-Aligned AI has the capabilities necessary to learn the given definition of value, and that the given definition of value does not make alignment very hard or impossible. Criteria for judging Value Definitions -------------------------------------- 1. **How hard would Intent-Aligning be;** How hard would it be to ensure the AI ‘tries to do the right thing’, where ‘right’ is given by the Value Definition. In particular, does adopting this definition of value make intent-alignment easier? 2. **How great would our AGI capabilities need to be;** How hard would it be for the AGI to ‘[figure] out which thing is right’, where ‘right’ is given by the Value Definition. In particular, does adopting this definition of value help us to understand what capabilities or architecture the AI needs? 3. **How good would the outcome be;** If the AGI is successfully pursuing our Value Definition, how good would the outcome be? **3** is what Bostrom focuses on in Chapter 14 of Superintelligence, as (with the exception of dismissing useless answers to the VDP like ‘be nice’ or ‘do what I mean’) he does not consider whether different value definitions would influence the difficulty of Intent Alignment or the required AI Capabilities. Similarly, Yudkowsky assumes we are ‘extremely confident’ of our ability to get the AGI to pursue an arbitrarily complicated goal. **3** is a normative ethical question, whereas the first two are (poorly [understood and defined](https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6)) technical questions. Some values are easier to specify and align to than others, so even when discussing pure value definitions, we should keep the technical challenges at the back of our mind. In other words, while **3** is the major consideration used for judging value definitions, **1** or **2** must also be considered. 
In particular, if our value definition is so vague that it makes intent alignment impossible, or requires capabilities that seem magical, such as ‘do what I mean’ or ‘be nice’, we do not have a useful value definition. ### Human Values and the VDP While **1** and **2** are clearly difficult questions to answer for any plausible value definition, **3** seems almost redundant. It might seem as though we should expect at least a reasonably good outcome if we were to ‘succeed’ with any definition that is intended to extract the values of humans, because by definition success would result in our AGI having the values of humans. Stuart Armstrong argues that to properly address **3** we need [‘**a definition - a theory - of what human values actually are’**](https://www.alignmentforum.org/posts/zvrZi95EHqJPxdgps/why-we-need-a-theory-of-human-values). This is necessary because different interpretations of our [values tend to diverge](https://slatestarcodex.com/2018/09/25/the-tails-coming-apart-as-metaphor-for-life/) when we are confronted by extreme circumstances and because in some cases it is not clear what our ‘real preferences’ actually are. > An AI could remove us from typical situations and put us into extreme situations - at least "extreme" from the perspective of the everyday world where we forged the intuitions that those methods of extracting values roughly match up. > Not only do we expect this, but we desire this: a world without absolute poverty, for example, is the kind of world we would want the AI to move us into, if it could. In those extreme and unprecedented situations, we could end up with revealed preferences pointing one way, stated preferences another, while regret and CEV point in different directions entirely. **3** amounts to a demand to reach at least some degree of clarity (if not solve) normative ethics and metaethics - we have to understand what human values are in order to choose between or develop a method for pursuing them. Indirect vs Direct Normativity ============================== Bostrom argues that our dominant consideration in judging between different value definitions should be the ‘principle of epistemic deference’ > **The principle of epistemic deference** > A future superintelligence occupies an epistemically superior vantage point: its beliefs are (probably, on most topics) more likely than ours to be true. We should therefore defer to the superintelligence’s opinion whenever feasible. In other words, in describing the 'values' we want our superintelligence to have, we want to hand over as much work to the superintelligence as possible. > This takes us to indirect normativity. The obvious reason for building a super-intelligence is so that we can offload to it the instrumental reasoning required to find effective ways of realizing a given value. Indirect normativity would enable us also to offload to the superintelligence some of the reasoning needed to select the value that is to be realized. The key issue here is given by the word ‘some’. How much of the reasoning should we offload to the Superintelligence? The principle of epistemic deference answers ‘as much as possible’. What considerations push against the principle of epistemic deference? One consideration is the metaethical views we think are plausible. 
In Wei Dai’s [Six Plausible Meta-Ethical Alternatives](https://www.lesswrong.com/posts/orhEa4wuRJHPmHFsR/six-plausible-meta-ethical-alternatives), two of the more commonly held views are that ‘intelligent beings have a part of their mind that can discover moral facts and find them motivating, but those parts don't have full control over their actions’ and that ‘there are facts about how to translate non-preferences (e.g., emotions, drives, fuzzy moral intuitions, circular preferences, non-consequentialist values, etc.) into preferences’. Either of these alternatives suggest that too much epistemic deference is not valuable - if, for example, there are facts about what everyone should value but a mind must be structured in a very specific way to discover and be motivated by them, we might want to place restrictions on what the Superintelligence values to make sure we discover them. In the extreme case, if a certain moral theory is known to be correct, we could avoid having to trust the Superintelligence’s own judgment by just getting it to obey that theory. This extreme case could never practically arise, since we could never achieve that level of confidence in a particular moral theory. Bostrom says it is ‘foolhardy’ to try and do any moral philosophy work that could be left to the AGI, but as Armstrong says, it will be necessary to do some work to understand what human values actually are - how much work? Classifying Value Definitions ============================= **The Scale of Directness** Issa Rice recently provided a list of [‘[options] to figure out the human user or users’ actual preferences’](https://www.lesswrong.com/posts/ebdf8GZxt3L9grwwN/deliberation-as-a-method-to-find-the-actual-preferences-of), or to determine definitions of value. These ‘options’, if successfully implemented, would all result in the AI being aligned onto a particular value definition. > We want good outcomes from AI. To get this, we probably want to figure out the human user's or users' "actual preferences" at some point. There are several options for this. Following Bostrom’s notion of ‘Direct and Indirect Normativity’ we can classify these options by how direct their value definitions are - how much work they would hand off to the superintelligence vs how much work the definition itself does in defining value. Here I list some representative definitions from most to least normatively direct. **Value Definitions** **[Hardwired Utility Function](https://www.lesswrong.com/posts/FBEaheqfmDgL6gB5x/superintelligence-14-motivation-selection-methods#Summary)** Directly specify a value function (or rigid rules for acquiring utilities), assuming a fixed normative ethical theory. It is essentially [impossible to directly specify a correct reward function for a sufficiently complex task](https://www.lesswrong.com/posts/5eX8ko7GCxwR5N9mN/what-is-ambitious-value-learning). Already, we use indirect methods to align an RL agent on a complex task (see e.g. [Christiano (2017)](https://arxiv.org/abs/1706.03741))*.* For complex, implicitly defined goals we are always going to need to learn some kind of reward/utility function predictor. **[Ambitious Learned Value Function](https://www.lesswrong.com/posts/5eX8ko7GCxwR5N9mN/what-is-ambitious-value-learning)** Learn a measure of human flourishing and aggregate it for all existing humans, given a fixed normative (consequentialist) ethical theory that tells us how to aggregate the measure fairly. E.g. 
have the AI learn a model of the current individual preferences of all living humans, and then maximise that using total impersonal preference utilitarianism. This requires a very high degree of confidence that we have found the correct moral theory, including resolving all paradoxes in population ethics like the Repugnant conclusion. **[Distilled Human Preferences](https://ai-alignment.com/iterated-distillation-and-amplification-157debfd1616)** Taken from IDA. Attempt to ‘distil out’ the relevant preferences of a human or group of humans, by imitation learning followed by capability amplification, thus only preserving those preferences that survive amplification. Repeat this process until we have a superintelligent agent that has the distilled preferences of a human. This subset of the original human’s preferences, suitably amplified, defines value. Note that specific choices about how the deliberation and amplification process play out will embody [different value definitions](https://www.lesswrong.com/posts/ebdf8GZxt3L9grwwN/deliberation-as-a-method-to-find-the-actual-preferences-of#cnPjrBqta4aP7ez4k). As two examples, the IDA could model either the full and complete preferences of the Human using future **[Inverse Reinforcement Learning](https://people.eecs.berkeley.edu/~russell/papers/colt98-uncertainty.pdf)** methods, or it could model the likely instructions of a ‘human-in-the-loop’ offering low-resolution feedback - these could result in quite different outcomes. **[Coherent Extrapolated Volition](https://www.lesswrong.com/posts/EQFfj5eC5mqBMxF2s/superintelligence-23-coherent-extrapolated-volition) / [Christiano’s Indirect Normativity](https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/)** Both Christiano’s formulation of Indirect Normativity and the CEV define value as the endpoint of a value idealization and extrapolation process with as many free parameters as possible. > Predict what an idealized version of us would want, "if we knew more, thought faster, were more the people we wished we were, had grown up farther together". It would recursively iterate this prediction for humanity as a whole, and determine the desires which converge **Moral Realism** Have the AI determine the correct normative ethical theory, whatever that means, and then act according to that. **'Do What I Mean'** **'Be Nice'** I have tried to place these different definitions of value in order from the most to least normatively direct. In the most direct case, we define the utility function ourselves. Less direct than that is defining a rigid normative framework within which the AGI learns our preferences. Then, we could consider letting the AGI also have decisions over which normative frameworks to use. Much less direct, we come to deliberation-based methods or methods which define value as the endpoint of a specific procedure. Christiano’s Iterated Amplification and Distillation is supposed to preserve a particular subset of human values (those that survive a sequence of imitation and capability amplification). This is more direct than CEV because there some details about the distillation procedure are given. Less direct still is Yudkowsky’s CEV, because CEV merely places its value as the endpoint of some sufficiently effective idealisation and convergence procedure, which the AGI is supposed to predict the result of, somehow. Beyond CEV, we come to ‘methods’ that are effectively meaningless. 
Considerations -------------- Here I briefly summarise the considerations that push us to accept more or less normatively direct theories. **Epistemic Deference** and **Conservatism** were taken from [Bostrom (2014),](https://www.lesswrong.com/posts/EQFfj5eC5mqBMxF2s/superintelligence-23-coherent-extrapolated-volition) while **Well-definedness** and **Divergence** were taken from [Armstrong](https://www.alignmentforum.org/posts/zvrZi95EHqJPxdgps/why-we-need-a-theory-of-human-values). **Epistemic Deference**: Less direct value definitions defer more reasoning to the superintelligence, so assuming the superintelligence is intent-aligned and capable, there are fewer opportunities for mistakes by human programmers. Epistemic Deference effectively rules out direct specification of values, on the grounds that we are effectively guaranteed to make a mistake resulting in misalignment. **Well-definedness**: Less direct value definitions require greater capabilities to implement, and are also less [well-defined](https://www.alignmentforum.org/posts/zvrZi95EHqJPxdgps/why-we-need-a-theory-of-human-values#Underdefined_methods) in the research directions they suggest for how to construct explicit procedures for capturing the definition. Direct utility specification is something we can do today, while CEV is currently under-defined. Armstrong argues that our value definition *must eventually* contain explicit criteria for what ‘human values’ are, rather than the maximal normative indirectness of handing over judgments about what [values are to the AGI](https://www.alignmentforum.org/posts/zvrZi95EHqJPxdgps/why-we-need-a-theory-of-human-values#Human_judgement_and_machine_patching) - ‘The correct solution is not to assess the rationality of human judgements of methods of extracting human values. The correct solution is to come up with a better theoretical definition of what human values are.’ **Conservatism**: More direct theories will result in more control over the future by the programmers. This could be either good or bad depending on your normative ethical views and political considerations at the time the AI is developed. For example, Bostrom states that in a scenario where the morally best outcome includes reordering all matter to some optimal state, we might want to turn the rest of the universe over to maximising moral goodness but leave an exception for Earth.This would involve more direct specification. **Divergence**: If you are a strong externalist realist (believes that moral truth exists but might not be easily found or motivating) then you will want to take direct steps to mandate this. If the methods that are designed to extract human preferences [diverge strongly](https://www.alignmentforum.org/posts/zvrZi95EHqJPxdgps/why-we-need-a-theory-of-human-values#Divergent_methods) in what they mandate, we need a principled procedure for choosing between them, based on what actually is morally valuable. More normatively direct methods provide a chance to make these moral judgement calls. Summary ======= I have provided two main concepts which I think are useful for judging nontechnical AI Safety proposals - these are, The **Value Definition Problem**, and the notion of the **Scale of Normative Directness** and the considerations that affect positioning on it. Both these considerations I consider to be reframings of previous work, mainly done by Bostrom and Armstrong. 
I also note that, on the Scale of Directness, there is quite a large gap between a very indirect method like CEV and extremely direct methods like ambitious value learning. ‘Ambitious Value Learning’ defines value using a specific, chosen-in-advance consequentialist normative ethical theory (which tells us how to aggregate and weight different interests) that we then use an AI to specify in more detail, using observations of humans’ revealed preferences. Christiano says of methods like CEV, which aim to extrapolate what I ‘really want’ far beyond what my current preferences are: [‘most practitioners don’t think of this problem even as a long-term research goal — it’s a qualitatively different project without direct relevance to the kinds of problems they want to solve’](https://www.lesswrong.com/s/4dHMdK5TLN6xcqtyc/p/SvuLhtREMy8wRBzpC). This is effectively a statement of the **Well-definedness** consideration when sorting through value definitions - our long-term ‘coherent’ or ‘true’ preferences currently aren’t well enough understood to guide research, so we need to restrict ourselves to more direct normativity - extracting the actual preferences of existing humans. After CEV, the next most ‘direct’ method, Distilled Human Preferences (the definition of value used in Christiano’s IDA), is still far less direct than ambitious value learning, eschewing all assumptions about the content of our values and placing only some restrictions on their form. Since not all of our preferences will survive the amplification and distillation processes, the hope is that the morally relevant ones will - even though as yet we do not have a good understanding of how durable our preferences are and which ones correspond to specific human values. This vast gap in directness suggests a large range of unconsidered value definitions that attempt to ‘defer to the Superintelligence’s opinion’ not *whenever possible* but only *sometimes*. Armstrong has already claimed that we must do much more work in defining what we mean by human values than the more indirect methods like IDA/CEV suggest, when he argued, ‘The correct solution is not to assess the rationality of human judgements of methods of extracting human values. The correct solution is to come up with a better theoretical definition of what human values are.’ I believe that we should investigate ways to incorporate our high-level judgements about which preferences correspond to ‘genuine human values’ into indirect methods like IDA, making the indirect methods more direct by rigidifying parts of the deliberation or idealization procedure - but that is for a future post.
64fb9445-346e-474c-a15b-844d1e47f2bb
trentmkelly/LessWrong-43k
LessWrong
Probability is the oil of rationalisation Or How to do whatever you feel like despite being a rationalist. To rationally make a choice you weigh up all costs and benefits of all possibilities and choose the one with the greatest net benefit. To rationalise a choice you want to make, you choose costs and benefits that lead to your choice seeming like the rational conclusion. Thinking you’re being rational while completely ignoring known costs and benefits that don’t lead to your preferred conclusion is hard to do though. Even slight intelligence leads you to notice things like this happening in your mind. For most everyday decisions I suggest the ‘solution’ lies in probability estimation. While you might have a set of outcomes you consider possible, their likelihoods are virtually always uncertain. It’s a guessing game, and if you’re guessing, why not guess things that lead to the conclusion you prefer? You might even notice while you’re doing it that your probability estimates are being swayed by the conclusion they’ll lead to, but it doesn’t matter. Within the range where there are no other bases for their positioning, why change your estimates to ones with a less pleasing outcome in the short term? Essentially we slide partiality into the one non-rational part of a rational process.
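To see the mechanism in miniature, here is an illustrative sketch in Python; all of the numbers (the payoffs and the "plausible" probability range) are invented, and the point is only that sliding the estimate around within its defensible range flips which option comes out looking rational:

```python
# Hypothetical decision: move for a new job or stay put. All numbers are
# invented for illustration.
benefit_if_good = 50_000   # value of moving if it works out
cost_if_bad = -30_000      # value of moving if it goes badly
stay_value = 5_000         # value of staying put

def expected_value_of_moving(p_good):
    """Expected value of moving, given an estimate of P(it works out)."""
    return p_good * benefit_if_good + (1 - p_good) * cost_if_bad

# Each of these estimates might feel defensible to the person deciding.
for p in (0.30, 0.40, 0.50):
    ev = expected_value_of_moving(p)
    choice = "move" if ev > stay_value else "stay"
    print(f"P(works out) = {p:.2f} -> EV(move) = {ev:8,.0f} -> {choice}")

# At 0.30 moving looks clearly worse than staying; at 0.50 it looks clearly
# better. The "rational" conclusion tracks whichever estimate gets adopted.
```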
87d586f4-71af-4b01-906f-d08fbfa9f027
trentmkelly/LessWrong-43k
LessWrong
In Defense of Epistemic Empathy TLDR: Why think your ideological opponents are unreasonable? Common reasons: their views are (1) absurd, or (2) refutable, or (3) baseless, or (4) conformist, or (5) irrational. None are convincing. ---------------------------------------- Elizabeth is skeptical about the results of the 2020 election. Theo thinks Republicans are planning to institute a theocracy. Alan is convinced that AI will soon take over the world. You probably think some (or all) of them are unhinged. As I’ve argued before, we seem to be losing our epistemic empathy: our ability to both (1) be convinced that someone’s opinions are wrong, and yet (2) acknowledge that they might hold those opinions for reasonable reasons.   For example, since the 90s our descriptions of others as ‘crazy’, ‘stupid’ or ‘fools’ has skyrocketed: I think this is a mistake. Lots of my work aims to help us recover our epistemic empathy—to argue that reasonable processes can drive such disagreements, and that we have little evidence that irrationality (the philosophers’ term for being “crazy”, “stupid”, or a “fool”) explains it. The most common reaction: “Clever argument. But surely you don’t believe it!” I do. Obviously people sometimes act and think irrationally. Obviously that sometimes helps explain how they end up with mistaken opinions. The question is whether we have good reason to think that this is generically the explanation for why people have such different opinions than we do. Today, I want to take a critical look at some of the arguments people give for suspending their epistemic empathy: (1) that their views are absurd; (2) that the questions have easy answers; (3) that they don’t have good reasons for their beliefs; (4) that they’re just conforming to their group; and (5) that they’re irrational. None are convincing. Absurdity. “Sure, reasonable people can disagree on some topics. But the opinions of Elizabeth, Theo, and Alan are so absurd that only irrationality could explain it.” This arg
d39c65d5-bc70-4d68-b0d9-ea8c3205d4bf
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
Is AI Safety a Pascal&#39;s Mugging? hi today we're going to talk about Pascal's wager and Pascal's mugging this is necessarily going to touch on religious topics pretty heavily actually and I'm just gonna say at the beginning of the video that personally I don't believe that any god or gods have ever existed and I'm not going to pretend otherwise in this video so be forewarned if that's likely to bother you so as you may know Pascal was a 17th century philosopher who was interested in amongst other things the question of the existence of the Christian God various philosophers at the time were arguing that God didn't exist and there was a lot of discussion going on about the various kinds of evidence for and against God in the world but there's this thing that's quite common when people think about religious questions where it feels sort of unsatisfying to talk about worldly evidence as if you were considering some everyday question there's a feeling that these supernatural concepts are very grand and mysterious they're special and so just straightforwardly considering the evidence for and against God is not the right way to do things this is of course encouraged by religious thinking the idea that some hypotheses aren't subject to the usual rules of evidence and logic is pretty appealing if you want to advocate for an idea that doesn't fare very well by those standards I suspect Pascal may have felt something like that because his position was that reason has nothing to say about the question of whether or not God exists it's sort of an unknowable thing and instead he proposed that we should make a wager we should think about it like this there are two possibilities either the Christian God exists or he doesn't and reason gives us no way to choose between those we have two options available to us either we can live according to God's laws and act as though we believe or we can not do that so we have a sort of payoff matrix here with four sections if God exists and we believe in him then we get infinite reward in heaven if God exists and we don't believe in him we get infinite punishment in hell if God doesn't exist and we believe in him then we pay some costs you know there are some rules we have to follow and so on and if he doesn't exist and we don't believe in him then maybe we get a few perks from not believing like having a lion on Sundays and being right Pascal's point is that this payoff matrix is completely dominated by the case in which God exists because we're talking about infinite rewards and infinite punishments as opposed to the other case with these very finite costs benefits so regardless of the evidence Pascal argues we should believe in God or at least act like it because it's just the sensible bet to make this is really kind of a nice argument from Pascal's perspective because it doesn't need evidence at all no finite earthly evidence can outweigh infinite supernatural payoffs it feels like the kind of clean abstract reasoning that you're supposed to do when thinking about the supernatural all of this hard work looking at history and psychology and science and trying to figure out where the ideas of religion come from and whether our world seems like the kind of world with a God in it it's long-winded confusing it's it's just messy but here we just have a clean argument that says we should believe in God or at least act like it and that seems very neat no evidence required so consider now Pascal is walking down the street and he's stopped by a shady looking man who says 
give me your wallet I would prefer not to do you even have a weapon no UK laws are very strict about that but I don't need one because I'm God your God yep I'm God and Christianity got a lot of things wrong about me my forgiving nature my infinite mercy and so on but the infinite torture thing is legit and if you don't give me your wallet right now I will torture you for eternity in the afterlife now if you're Pascal you're in kind of a difficult situation because the fact that it seems very unlikely that this mugger actually is God is not meant to be part of your calculation your argument is one of pure logic it works independently of any evidence you didn't need to look for evidence of the Christian God and you don't need to look for evidence that this mugger is God either so you kind of have to give him your wallet and now you're really in trouble because of course when this gets out there's gonna be a line around the block of ersatz deities asking for handouts how are you going to deal with this endless stream of fizzy gods well one thing you can do is you can play the muggers off against one another you can bring in two of them and say listen you say that you're going to torture me forever if I don't give you my wallet and you say the same thing I only have one wallet so it looks like whatever I do I'm going to be tortured forever by somebody and if I'm going to be infinitely tortured anyway well two times infinity is still just infinity so I may as well hang on to the wallet now get the hell out of my house all right next no doubt these self-proclaimed deities may try to argue that they have some reason why they are in fact a real deity this other mugger is just a random guy who's pretending but that's all worldly evidence which you've decided isn't required for your argument and the muggers don't really want you to become interested in evidence because well the evidence points very strongly towards none of them being real gods so this is a better position to be in you're still spending a lot of your time arguing with charlatans but at least you still have your wallet and you don't actually have to pair them up against each other right you can just make up a deity when someone comes in pretending to be a god you can say oh well there's this other God who demands exactly the opposite thing from you a goddess actually and she's very powerful but she goes to a different school you would know her really yeah she lives in Canada she's omnipresent obviously but she lives in Canada anyway she says that I'm not to give you the wallet and if I do then she'll torture me forever in the afterlife I think yeah so you can solve a lot of these problems by inventing gods arbitrarily and of course this applies just as well to the original version of Pascal's wager because although it's implied that this payoff matrix has enumerated all of the possibilities and in a sense it has the Christian God either exists or it doesn't nonetheless those may not be the only things that affect the payoffs for any given God you can take the God down flip it and reverse it and say what about anti God who wants me to do the exact opposite and promises the exact opposite consequences now you can see that they cancel out somebody who's arguing for the existence of the first God might say okay but this anti God is just made up which I mean yeah it is it's true that the situation isn't really symmetrical someone might think God is more likely than anti God because of evidence from the Bible and there's no such thing as the 
anti Bible and so on the point is though we're back to talking about the evidence that's really the problem I have with Pascal's wager the way it uses infinite costs and infinite benefits to completely override our necessarily finite evidence but what if the costs and benefits aren't infinite just very very large that ends up being a much more interesting question on one end of the scale we can easily name numbers so large that no amount of evidence anyone could ever actually gather in their lifetime could have an impact on the conclusion we can specify costs and benefits that are technically finite but that still feel very much like a Pascal's wager on the other end of the scale if you come across a bet that pays out ten to one on an event with a probability of one in a hundred that's a very good bet to take someone could complain that it's a Pascal's wager to bet on an unlikely outcome just because the payoff is so high but if you take enough bets like that you're sure to become very rich in the same way if there's a button which has a one-in-a-million chance of starting a global thermonuclear war it's still worth expending significant resources to stop that button being pressed one in a million isn't much but the cost of a nuclear war is really high I don't think that's a Pascal's wager either the difference seems to come in somewhere in the gap between very small probabilities of very large costs and benefits and really extremely small probabilities of near infinite costs and benefits so why are we talking about this what does this have to do with AI safety well suppose somebody stops you in the street and says hey if we ever create powerful artificial general intelligence then that will have a tremendous impact in fact the future of the whole of humanity hinges on it if we get it right we could have human flourishing for the rest of time if we get it wrong we could have human extinction or worse regardless of how likely superhuman AGI is the potential impact is so high that it makes AI safety research tremendously important so give me your wallet it's been claimed by some that this is more or less what AI safety as a field is doing this is kind of an interesting point as AI safety advocates are we victims of Pascal's mugging or are we in fact Pascal's muggers ourselves well if people were saying these AI risks may be extremely unlikely but the consequences of getting AI wrong are so huge that it's worth spending a lot of resources on regardless of the probabilities so we don't even need to consider the evidence well I would consider that to be a Pascal's wager style bad argument but what I actually hear is not that what I hear is more like look we're not completely sure about this it's quite possible that we're wrong but considering the enormity of what's at stake it's definitely worth allocating more resources to AI safety than we currently are that sounds pretty similar but that's mostly because natural language is extremely vague when talking about uncertainty there's an enormous difference in the probabilities being talked about in the same way if when you talk to AI safety researchers they said things like well I think the chance of any of this ever being relevant are really extremely tiny it seems more or less impossible to me but I've decided to work on it anyway because the potential costs and benefits are so unimaginably vast then yeah I'd be a little concerned that they might be victims of Pascal's mugging but when you ask AI safety researchers they don't think that the 
probability of their work ever becoming relevant is very tiny they don't necessarily think it's huge either maybe not even more than 50% but it's not so small that you have to rely on the unimaginable vastness of the consequences in order to make the argument to borrow a metaphor from Stuart Russell suppose you're part of a team working on building a bridge and you believe you've found a flaw in the design that could cause the structure to fail catastrophically maybe the disaster would only happen if there's a very rare combination of weather conditions and there's only a one in a hundred chance that those conditions will ever happen during the course of the bridges expected lifespan and further suppose that you're not completely sure of your calculations because this kind of thing is complicated maybe you only give yourself a 40% chance of being right about this so you go to the civil engineer in charge of the project and you say I think there's a serious risk with this bridge design do you think the bridges gonna collapse probably not no but I'm about 40 percent sure that there's a design flaw which would give this bridge a 1 in 100 chance of catastrophic failure so you're telling me that in the event of a scenario which is very unlikely to happen the bridge might collapse and you yourself admit that you're more likely to be wrong than right about this stop wasting my time but if the bridge collapses it could kill a lot of people I think this is a Pascal's mugging don't try to get me to ignore the low probabilities just by threatening very large consequences obviously that isn't what would happen no civil engineer is going to accept a 1 in 250 chance of catastrophic failure for a major piece of infrastructure because civil engineers have a healthy organizational culture around safety what it comes down to again is the difference between different levels of improbability the chance of an AGI catastrophe may not be very big but it's much much larger than the chance that a mugger is actually a god and what about our anti-god tactic finding the opposite risk does that still work like what if we consider the possibility that there's another opposite design flaw in the bridge which might cause it to collapse unless we don't spend extra time evaluating the safety if that is not what just look at the schematic with you and what if working on AI safety actually ends up making the risks worse somehow I think this actually is worth considering unintended consequences are a real problem after all speaking generally there's clear argument that the future is very important and that we're probably able to have a very big impact on it but it's hard to know for sure whether that impact will be positive or negative for any given course of action prediction is very difficult as they say especially about the future and the further into the future we look the more difficult it gets like imagine if you lived in the year 1900 and you had some insight that made you realize that nuclear weapons were possible and nuclear war was a risk you'd hope that you could use that understanding to reduce the risk but it would certainly be possible to make things worse by accident in the case of AI safety though I don't see that being anywhere near as much of a concern we're heading towards AI regardless and it seems very unlikely that thinking about safety would be more dangerous than not thinking about safety it's definitely possible to make things worse while trying to make them better but you can't avoid that by never 
trying to make things better I guess my point is there's just no getting around the messy confusing complicated work of looking at and thinking about the evidence any argument that doesn't rely on the evidence will work equally well whatever the truth is so at the end of the day that kind of thing isn't going to give you an answer you have to just stare at the bridge design and really think you have to actually do the engineering and that's something I'm trying to get across with this channel you won't find me saying never mind the evidence AI safety is important because it could have huge consequences what I do on this channel is I try to show you some of the evidence and some of the arguments and let you think about the situation and draw your own conclusions it can be tricky and involved it requires some thought but it has the advantage of being the only thing that has any chance of actually getting the right answer so thanks for watching [Music] as my wonderful patrons will know the alignment newsletter is a weekly publication from Rohin Shah which I read every week to stay up to date with what's going on in AI safety and now I'm recording myself reading it out and publishing that as the alignment newsletter podcast it's aimed at researchers so it's a fair bit more technical than this channel but if you're interested in getting 15 minutes of AI safety news in your earholes each week check the link in the description I'm never going to put ads or sponsors on that podcast and that's largely thanks to my patrons in this video I'm especially thanking Chris Canal thank you so much for your support Chris thank you to all of my patrons and thank you for watching I'll see you next time [Music] little costume changes
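To put rough numbers on the bridge example from the transcript: the sketch below uses the figures given in the video (40% confidence that the flaw is real, a 1-in-100 chance of the triggering weather conditions); the lives-at-stake figure is a placeholder added only to make the expected-loss arithmetic visible.

```python
# The bridge example from the transcript, worked through numerically.
# The 40% and 1-in-100 figures are from the video; the number of lives at
# stake is a placeholder chosen only to make the arithmetic visible.
p_flaw_is_real = 0.40        # "about 40 percent sure" the design flaw exists
p_bad_weather = 1 / 100      # the flaw only matters in rare weather
p_collapse = p_flaw_is_real * p_bad_weather

lives_at_stake = 200         # placeholder figure
expected_lives_lost = p_collapse * lives_at_stake

print(f"P(collapse) = {p_collapse:.4f}  (about 1 in {round(1 / p_collapse)})")
print(f"Expected lives lost if the report is ignored: {expected_lives_lost:.2f}")

# A 1-in-250 risk is small, but it sits in the range where ordinary
# engineering evidence can move the answer. That is very different from the
# vanishingly small probability that a street mugger really is a god, which
# is the gap the video uses to separate risk management from Pascal's mugging.
```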
45431da3-f1a1-4c8e-adb0-b9180e6fe5e6
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Excerpts from "Doing EA Better" on x-risk methodology The post "[Doing EA Better](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1)" contains some critiques of the EA movement's approach to studying and ranking x-risks. These criticisms resonated with me and I wish we paid more attention to them. There were concerns about the original post being quite long and mixing a lot of different topics together, so I decided to extract some relevant sections into a separate post to enable focused discussion. *The original post is,* [*per site policy*](https://forum.effectivealtruism.org/posts/KK6AE8HzPkR2KnqSg/new-forum-license-creative-commons)*, available under a Creative Commons BY 4.0 license, so I am excerpting it as permitted by this license.* > We need to stop reinventing the wheel > ------------------------------------- > > **Summary:** EA ignores highly relevant disciplines to its main area of focus, notably Disaster Risk Reduction, Futures Studies, and Science & Technology Studies, and in their place attempts to derive methodological frameworks from first principles. As a result, many orthodox EA positions would be considered decades out of date by domain-experts, and important decisions are being made using unsuitable tools. > > EA is known for reinventing the wheel even within the EA community. This poses a significant problem given the stakes and urgency of problems like existential risk. > > There are entire disciplines, such as [Disaster Risk Reduction](https://en.wikipedia.org/wiki/Disaster_risk_reduction), [Futures Studies](https://en.wikipedia.org/wiki/Futures_studies), and [Science and Technology Studies](https://en.wikipedia.org/wiki/Science_and_technology_studies), that are profoundly relevant to existential risk reduction yet which have been almost entirely ignored by the EA community. The consequences of this are unsurprising: we have started near to the beginning of the history of each discipline and are slowly learning each of their lessons the hard way. > > For instance, the approach to existential risk most prominent in EA, what [Cremer and Kemp](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3995225) call the “Techno-Utopian Approach” (TUA), [focuses](https://www.goodreads.com/book/show/50485582-the-precipice?from_search=true&from_srp=true&qid=j2JFUE9TDM&rank=3) on categorising individual hazards (called “risks” in the TUA),[[41]](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#fn-pMAQ8odjQviWs2qYJ-41) attempting to estimate the likelihood that they will cause an existential catastrophe within a given timeframe, and trying to work on each risk separately by default, with a homogenous category of underlying “risk factors” given secondary importance. 
> > However, such a hazard-centric approach was [abandoned](https://www.routledge.com/At-Risk-Natural-Hazards-Peoples-Vulnerability-and-Disasters/Blaikie-Cannon-Davis-Wisner/p/book/9780415252164) within Disaster Risk Reduction decades ago and replaced with one that places a [heavy emphasis](https://www.preventionweb.net/understanding-disaster-risk) on the vulnerability of humans to potentially hazardous phenomena.[[42]](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#fn-pMAQ8odjQviWs2qYJ-42) Indeed, differentiating between “risk” (the potential for harm), “hazards” (specific potential causes of harm) and “vulnerabilities” (aspects of humans and human systems that render them susceptible to the impacts of hazards) is one of the first points made on any disaster risk course. Reducing human vulnerability and exposure is generally a far more effective method of reducing risk posed by a wide variety of hazards, and far better accounts for “unknown unknowns” or “Black Swans”.[[43]](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#fn-pMAQ8odjQviWs2qYJ-43) > > Disaster risk scholarship is also revealing the growing importance of complex patterns of causation, the interactions between threats, and the potential for [cascading failures](https://en.wikipedia.org/wiki/Cascading_failure). This area is largely ignored by EA existential risk work, and has been [dismissed out of hand](https://forum.effectivealtruism.org/posts/BvNxD66sLeAT8u9Lv/climate-change-and-longtermism-new-book-length-report?commentId=QwZaZrkwAfCjAexyK) by prominent EAs. > > As another example, Futures & Foresight scholars noted the [deep limitations](https://www.youtube.com/watch?v=Y_yMemcSMkg) of numerical/probabilistic forecasting of specific trends/events in the 1960s-70s, especially with respect to long timescales as well as domains of high complexity and deep uncertainty[[44]](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#fn-pMAQ8odjQviWs2qYJ-44), and low-probability high-impact events (i.e. characteristics of existential risk). Practitioners now [combine or replace forecasts with qualitative](https://www.taylorfrancis.com/books/mono/10.4324/9780203791684/foundations-futures-studies-jose-valciukas) foresight methods like scenario planning, wargaming, and Causal Layered Analysis, which explore the shape of possible futures rather than making hard-and-fast predictions. Yet, EA’s existential risk work places a massive emphasis on forecasting and pays little attention to foresight. Few EAs seem aware that “Futures Studies” as a discipline exists at all, and EA discussions of the (long-term) future often imply that little of note has been said on the topic outside of EA.[[45]](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#fn-pMAQ8odjQviWs2qYJ-45) > > These are just two brief examples.[[46]](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#fn-pMAQ8odjQviWs2qYJ-46) There is a wealth of valuable insights and data available to us if we would only go out and read about them: this should be a cause for celebration! > > But why have they been so neglected? Regrettably, it is not because EAs read these literatures and provided robust arguments against them; we simply never engaged with them in the first place. 
We tried to create the field of existential risk almost from first principles using the methods and assumptions that were already popular within our movement, regardless of whether they were [suitable for the task](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4118618).[[47]](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#fn-pMAQ8odjQviWs2qYJ-47) > > We believe there could be several disciplines or theoretical perspectives that EA, had it developed a little differently earlier on, would recognise as fellow travellers or allies. Instead, we threw ourselves wholeheartedly into the [Founder Effect](https://en.wikipedia.org/wiki/Founder_effect), and in our over-dependence on a few early canonical thinkers (i.e. MacAskill, Ord, Bostrom, Yudkowsky etc.), we thus far lost out on all that they have to offer. > > This expands to a broader question: if we were to reinvent (EA approaches to) the field of Existential Risk Studies from the ground up, how confident are we that we would settle on our current way of doing things? > > The above is not to say that all views within EA ought to always reflect mainstream academic views; there are genuine shortcomings to traditional academia. However, the sometimes hostile attitude EA has to academia has hurt our ability to listen to its contributions as well as those of experts in general. > > --- On the hasty prioritization of AI risk and biorisk -------------------------------------------------- > OpenPhil’s global catastrophic risk/longtermism [funding](https://www.openphilanthropy.org/our-global-health-and-wellbeing-and-longtermism-grantmaking-portfolios/) stream is dominated by two hazard-clusters – artificial intelligence and engineered pandemics[[56]](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#fn-pMAQ8odjQviWs2qYJ-56) – with little affordance given to other aspects of the risk landscape. Even within this, AI seems to be seen as “the main issue” by a wide margin, both within OpenPhil and throughout the EA community. > > This is a problematic practice, given that, for instance: > > The prioritisation relies on [questionable](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#We_can_t_put_numbers_on_everything_) [forecasting](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Futures___Foresight) [practices](https://link.springer.com/article/10.1007/s13748-021-00239-1), which themselves sometimes take [contestable positions](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#We_need_to_value_expertise_and_rigour_more) as assumptions and inputs > > There is significant second-order uncertainty around the relevant risk estimates > > The ITN framework has major [issues](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4118618), especially when applied to existential risk > > It is extremely sensitive to how a problem is framed, and often relies on rough and/or subjective estimates of ambiguous and variable quantities > > * This poses serious issues when working under conditions of deep uncertainty, and can allow implicit assumptions and subconscious biases to pre-determine the result > * Climate change, for example, is typically considered low-neglectedness within EA, but extreme/existential risk-related climate work is [surprisingly neglected](https://iopscience.iop.org/article/10.1088/1748-9326/ac13ef/meta) > * What *exactly* makes a problem “tractable”, and how do you rigorously put a number on it? 
> > It ignores co-benefits, response risks, and tipping points > > It penalises projects that seek to challenge concentrations of power, since this appears “intractable” until social tipping points are reached[[57]](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#fn-pMAQ8odjQviWs2qYJ-57) > > It is extremely difficult and often impossible to meaningfully estimate the relevant quantities in complex, uncertain, changing, and low-information environments > > It focuses on evaluating actions as they are presented, and struggles to sufficiently value exploring the potential action space and increasing future optionality > > [Creativity](https://www.sciencedirect.com/science/article/abs/pii/S0039368117303278) can be limited by the need to appeal to a narrow range of grantmaker views[[58]](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#fn-pMAQ8odjQviWs2qYJ-58) > > The current model neglects [areas](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Vulnerability_and_Resilience) that do not fit [neatly] into the two main “cause areas”, and indeed it is [arguable](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#The_Humanities_and_Social_Sciences) whether global catastrophic risk can be [meaningfully](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Futures___Foresight) chopped up into individual “cause areas” at all > > A large proportion (plausibly a sizeable majority, depending on where you draw the line) of catastrophic risk researchers would, and if you ask, do, reject[[59]](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#fn-pMAQ8odjQviWs2qYJ-59): > > * The particular prioritisations made > * The methods used to arrive at those prioritisations, and/or > * The very conceptualisation of individual “risks” itself > > It is the product of a small [homogenous](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#We_are_incredibly_homogenous) group of people with very similar views > > * This is both a scientific (cf. [collective intelligence/social epistemics](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Epistemic_health_is_a_community_issue) and a [moral](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3995225) issue > > There are important efforts to mitigate some of these issues, e.g. cause area exploration prizes, but the central issue remains. > > The core of the problem here seems to be one of objectives: optimality vs robustness. Some quick definitions (in terms of funding allocation): > > * Optimality = the best possible allocation of funds > + In EA this is usually synonymous with “the allocation with the highest possible expected value” > + This typically has a unstated second component: *“assuming that our information and our assumptions are accurate”* > * Robustness = capacity of an allocation to maintain near-optimality given conditions of uncertainty and change > > In seeking to do the most good possible, EAs naturally seek *optimality*, and developed grantmaking tools to this end. 
We identify potential strategies, gather data, predict outcomes, and take the actions that our models tell us will work the best.[[60]](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#fn-pMAQ8odjQviWs2qYJ-60) This works great when you’re dealing with relatively stable and predictable phenomena, for instance endemic malaria, as well as most of the other cause areas EA started out with. > > However, now that much of EA’s focus has turned on to global catastrophic risk, existential risk, and the long-term future, **we have entered areas where optimality becomes fragility**. We don’t want most of our eggs in one or two of the most speculative baskets, especially when those eggs contain billions of people. We should also probably adjust for the fact that we may over-rate the importance of things like AI for reasons discussed in [other](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#We_are_incredibly_homogenous) [sections](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#We_align_suspiciously_well_with_the_interests_of_tech_billionaires__and_ourselves__61_) > > Given the fragility of optimality, robustness is extremely important. Existential risk is a domain of high complexity and deep uncertainty, dealing with poorly-defined low-probability high-impact phenomena, sometimes covering extremely long timescales, with a huge amount of disagreement among both experts and stakeholders along theoretical, empirical, and normative lines. Ask any risk analyst, disaster researcher, foresight practitioner, or policy strategist: this is **not** where you optimise, this is where you maintain epistemic humility and cover all your bases. Innumerable people have [learned this the hard way](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#We_need_to_stop_reinventing_the_wheel) so we don’t have to. > > Thus, we argue that, *even if* you strongly agree with the current prioritisations / methods, it is still rational for you to support a [more pluralist](https://forum.effectivealtruism.org/posts/cXH2sG3taM5hKbiva/beyond-simple-existential-risk-survival-in-a-complex) and robustness-focused approach given the uncertainty, expert disagreement, and risk management best-practices involved. > >
eb812224-ff79-4f17-ac65-39ef68f1273a
trentmkelly/LessWrong-43k
LessWrong
AI #58: Stargate AGI Another round? Of economists projecting absurdly small impacts, of Google publishing highly valuable research, a cycle of rhetoric, more jailbreaks, and so on. Another great podcast from Dwarkesh Patel, this time going more technical. Another proposed project with a name that reveals quite a lot. A few genuinely new things, as well. On the new offerings front, DALLE-3 now allows image editing, so that’s pretty cool. TABLE OF CONTENTS Don’t miss out on Dwarkesh Patel’s podcast with Sholto Douglas and Trenton Bricken, which got the full write-up treatment. 1. Introduction. 2. Table of Contents. 3. Language Models Offer Mundane Utility. Never stop learning. 4. Language Models Don’t Offer Mundane Utility. The internet is still for porn. 5. Clauding Along. Good at summarization but not fact checking. 6. Fun With Image Generation. DALLE-3 now has image editing. 7. Deepfaketown and Botpocalypse Soon. OpenAI previews voice duplication. 8. They Took Our Jobs. Employment keeps rising, will continue until it goes down. 9. The Art of the Jailbreak. It’s easy if you try and try again. 10. Cybersecurity. Things worked out this time. 11. Get Involved. Technical AI Safety Conference in Tokyo tomorrow. 12. Introducing. Grok 1.5, 25 YC company models and ‘Dark Gemini.’ 13. In Other AI News. Seriously, Google, stop publishing all your trade secrets. 14. Stargate AGI. New giant data center project, great choice of cautionary title. 15. Larry Summers Watch. Economists continue to have faith in nothing happening. 16. Quiet Speculations. What about interest rates? Also AI personhood. 17. AI Doomer Dark Money Astroturf Update. OpenPhil annual report. 18. The Quest for Sane Regulations. The devil is in the details. 19. The Week in Audio. A few additional offerings this week. 20. Rhetorical Innovation. The search for better critics continues. 21. Aligning a Smarter Than Human Intelligence is Difficult. What are human values? 22. People Are Worried About AI K
082cd236-8832-4e7c-851e-58ad994a32c2
trentmkelly/LessWrong-43k
LessWrong
Bangalore Meetup: 28th May Bangalore hopefully has enough LessWrongers to have its own meet up. I suggest having one in the afternoon on Saturday, 28th May at Cubbon Park Bandstand. (Behind the high court in Cubbon park). I commit to be there from 4 pm-7 pm with a LW meetup sign and a book (can't commit to which book I'll be reading two weeks hence, so will edit that in later. I'll be wearing a red kurta though). Since I don't know any Bangaloreans here and have never stood around in a public park holding up a sign before, comments showing interest will be much appreciated as morale-boosters. :) I'm open to suggestions regarding both time and place.
3f04e230-cc6e-4d2b-a2c7-7d53555c90ff
trentmkelly/LessWrong-43k
LessWrong
Meetup : Bi-weekly Frankfurt Meetup Discussion article for the meetup : Bi-weekly Frankfurt Meetup WHEN: 27 July 2015 06:30:00PM (+0200) WHERE: Frankfurt/Main Contact: 0176 3066 164 (Janko) You can also find the address in our google group (https://groups.google.com/forum/#!forum/less-wrong-frankfurt) Location: The meetup takes place in the apartment of the Frankfurt lesswrong core group. If you know that you will come, please leave a message for us some days (or hours) in advance for dinner planning. We decided to focus more on the core ideas of lesswrong by going through Eliezer's book Rationality: From AI to Zombies planned chapters for the next meetup: * Making beliefs pay rent * A Fable of Science and Politics * Belief in Belief * Bayesian Judo * Pretending to be Wise (maybe) * Religion's Claim to be Non-Disprovable (maybe) * Professing and Cheering (maybe) * Belief as Attire (maybe) The chapters will be an open discussion rather than a presentation. I encourage you to have a look at a few of the linked posts beforehand. :-) Other topics are: * short summaries of interesting books anyone of us had read * Some games like Zendo hope to see you Janko Discussion article for the meetup : Bi-weekly Frankfurt Meetup
0dd951e7-798a-447d-8116-c4b42cfddbc5
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Briefly how I've updated since ChatGPT I'm laying out my thoughts in order to get people thinking about these points and perhaps correct me. I definitely don't endorse deferring to anything I say, and I would write this differently if I thought people were likely to do so. 1. **OpenAI's model** of "deploy as early as possible in order to extend the timeline between when the world takes it seriously to when humans are no longer in control" **seems less crazy to me.** 1. I think ChatGPT has made it a lot easier for me personally to think concretely about the issue and identify exactly what the key bottlenecks are. 2. To the counterargument "but they've spurred other companies to catch up," I would say that this was going to happen whenever an equivalent AI was released, and I'm unsure whether we're more doomed in the world where this happened now, versus later when there's a greater [overhang](https://www.lesswrong.com/tag/computing-overhang) of background technology and compute. 3. I'm not advocating specifically for or against any deployment schedule, I just think it's important that this model be viewed as *not crazy*, so it's adequately considered in relevant discussions. 2. **Why will LLMs develop agency?** My default explanation used to involve fancy causal stories about monotonically learning better and better search heuristics, and heuristics for searching over heuristics. While those concerns are still relevant, the much more likely path is simply that people will try their hardest to make the LLM into an agent as soon as possible, because agents with the ability to carry out long-term goals are much more useful. 3. **"The public" seems to be much more receptive than I previously thought**, both wrt Eliezer and the idea that AI could be existentially dangerous. This is good! But we're at the beginning where we are seeing the response from the people who are most receptive to the idea, and we've not yet got to the inevitable stage of political polarisation. 4. **Why doom?** Companies and the open source community will continue to experiment with [recursive LLMs](https://www.lesswrong.com/posts/566kBoPi76t8KAkoD/on-autogpt), and end up with better and better simulations of entire research societies (a network epistemologist's dream). This creates a "meta-architectures [overhang](https://www.lesswrong.com/tag/computing-overhang)" which will amplify the capabilities of any new releases of base-level LLMs. As these are open sourced or made available via API, somebody somewhere will plain tell them to recursively self-improve themselves, no complicated story about instrumental convergence needed. 1. AI will not stay in a box (because humans didn't try to put it into one in the first place). AI will not become an agent by accident (because humans will make it into one first). And if AI destroys the world, it's as likely to be [*by human instruction*](https://www.youtube.com/watch?v=g7YJIpkk7KM) as by instrumentally convergent reasons inherent to the AI itself. [Oops](https://wiki.lesswrong.com/index.php?title=Oops). 2. The recursive LLM thing is also something I'm exploring for alignment purposes. If the path towards extreme intelligence is to build up LLM-based research societies, we have the advantage that every part of it can be inspected. And you can automate this inspection to alert you of misaligned intentions at every step. It's much harder to deceive when successfwl attempts depend on coordination. 5. 
Lastly, **AIs may soon be sentient, and people will torture them because people like doing that**. 1. [I think it's likely](https://forum.effectivealtruism.org/posts/2hwhxpFfjR3Bhf3Ya/what-can-we-do-now-to-prepare-for-ai-sentience-in-order-to) that there will be a window where some AIs are conscious (e.g. uploads), but not yet powerful enough to resist what a human [might do to them](https://www.lesswrong.com/tag/risks-of-astronomical-suffering-s-risks). 2. In that world, as long as those AIs are available worldwide, there's a non-trivial population of humans who would derive sadistic pleasure from anonymously torturing them.[[1]](#fnlnl9dpymsx) AIs process information *extremely fast*, and unlike with farm animals, you can torture them to death an arbitrary number of times.[[2]](#fntdh345r4px) 3. To prevent this, it seems imperative to make sure that the AIs that are most likely to be "torturable" are 1. never open-sourced, 2. API access points are controlled for human sentiment, 3. interactions with them should never be *anonymous*, 4. and AIs can be directly trained/instructed to exit a situation (and the IP could be timed out) when it detects ill-intent. 1. **[^](#fnreflnl9dpymsx)**Note that if it's an AI trained to *imitate* humans, showing signs of distress may not be correlated with how they actually suffer. But given that I'm currently very uncertain about how they would suffer, it seems foolish not to take maximal precautions to not expose them to the entire population of sadists on the planet. 2. **[^](#fnreftdh345r4px)**If that's how it's gonna play out, I'd rather we all die before then.
57ce7fc8-d866-416a-89d3-40e4b4b25324
StampyAI/alignment-research-dataset/arbital
Arbital
Log as the change in the cost of communicating When interpreting logarithms as [a generalization of the notion of "length"](https://arbital.com/p/416) and as [digit exchange rates](https://arbital.com/p/427), in both cases, multiplying the input to the logarithm base 10 by a factor of 10 caused the output to go up by one. Multiplying a number by 10 makes it one digit longer. If a [10-digit](https://arbital.com/p/42d) is worth \$1, then a 1000-digit is worth exactly \$1 more than a 100-digit, because you can [build](https://arbital.com/p/emulating_digits) a 1000-digit out of a 100-digit and a 10-digit. Thus, by symmetry, _dividing_ an input to the logarithm base 10 by 10 makes the output go down by one: If you divide a number by 10, it gets one digit shorter; and any $n$-digit is worth \$1 more than a $\frac{n}{10}$-digit, because you can build an $n$-digit out of a $\frac{n}{10}$-digit and a 10-digit. This strongly implies that $\log_{10}(\frac{1}{10})$ should equal $-1$. If a 1000-digit costs \$3, and a 100-digit costs \$2, and a 10-digit costs \$1, and a [1-digit is worthless](https://arbital.com/p/1digit), then, extrapolating the pattern, a $\frac{1}{10}$-digit should cost $-\$1.$ But what does that mean? What sort of digit is worth negative money? Can we give this extrapolation a physical intuition? Yes, we can, by thinking in terms of how difficult it is to communicate information. Let's say that you and I are in separate rooms, connected only by a conveyor belt, upon which I can place physical objects like coins, dice, and [digit wheels](https://arbital.com/p/42d) that you can read. Let's imagine also that a third party is going to show me the whodunit of a game of [Clue](https://en.wikipedia.org/wiki/Cluedo), and then let me put some objects on the conveyor belt, and then send those objects into your room, and then ask you for the information. If you can reproduce it successfully, then we both win a lot of money. However, I have to pay for every object that I put on the conveyor belt, using the [fair prices](https://arbital.com/p/427). Consider how much I have to pay to tell you the result of a Clue game. The "whodunit" in a Clue game consists of three pieces of information: 1. The name of the murderer, which is one of: Miss Scarlett, Professor Plum, Mrs. Peacock, Reverend Green, Colonel Mustard, or Mrs. White. 2. The room in which the murder occurred, which is either the kitchen, the ballroom, the conservatory, the dining room, the cellar, the billiard room, the library, the lounge, the hall, or the study. 3. The murder weapon, which is either the candlestick, the dagger, the lead pipe, poison, the revolver, the rope, or the wrench. Thus, a typical whodunit might look like "Professor Plum, in the conservatory, with the revolver." That sentence is 55 letters long, so one way for me to transmit the message would be to purchase fifty five 29-digits (capable of holding any one of 26 letters, or a space, or a comma, or a period), and send you that sentence directly. However, that might be a bit excessive, as there are in fact only $6 \cdot 10 \cdot 7 = 420$ different possibilities (six possible murderers, ten possible locations, seven possible weapons). As such, I only actually need to buy a 6-digit, a 10-digit, and a 7-digit. Equivalently, I could purchase a single 420-digit (if such things are on sale). 
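As a quick sketch of that comparison (in Python, with the helper name `coins` chosen just for this example), we can price each option at the fair rate of $\log_2(n)$ coins per $n$-digit:

```python
import math

def coins(n):
    """Fair price of one n-digit, measured in coins (2-digits)."""
    return math.log2(n)

# Option 1: spell out the 55-character sentence using 29-digits.
naive_cost = 55 * coins(29)

# Option 2: send one 6-digit, one 10-digit, and one 7-digit.
structured_cost = coins(6) + coins(10) + coins(7)

# Option 3: a single 420-digit, if such things are on sale.
single_digit_cost = coins(6 * 10 * 7)

print(f"55 x 29-digits:                {naive_cost:7.2f} coins")
print(f"6-digit + 10-digit + 7-digit:  {structured_cost:7.2f} coins")
print(f"one 420-digit:                 {single_digit_cost:7.2f} coins")
# The last two options cost the same (about 8.71 coins); both are far
# cheaper than spelling the sentence out letter by letter (about 267 coins).
```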
We have to agree in advance what the digits mean &mdash; for example, "the 6-digit corresponds to the murderer, in the order listed above; the 10-digit corresponds to the room, in the order listed above; the 7-digit corresponds to the weapon, in the order listed above;" but assuming we do, I can get away with much less than fifty five 29-digits. __Exercise:__ If the only storage devices on sale are coins, how many do I need to buy to communicate the whodunit? %%hidden(Answer): Nine. 8 coins only gets you 256 possibilities, and we need at least 420. %% __Exercise:__ If the only storage devices on sale are dice, how many do I need to buy? %%hidden(Answer): Four. $6^3 < 420 < 6^4.$ %% __Exercise:__ If I have to choose between all coins or all dice, which should I choose, at the fair prices? %%hidden(Answer): The coins. Four dice cost as much as $\log_2(6) * 4 \approx 10.33$ coins, and we can do the job with nine coins instead. %% __Exercise:__ If I can mix coins, dice, and digit wheels, what's the cheapest way to communicate the whodunit? %%hidden(Answer): One coin and three dice let you send the message at a cost of only $\log_2(2) + 3\cdot \log_2(6) \approx 8.75$ coins. %% Now, consider what happens when the third party tells you "Actually, in order to win, you also have to communicate the name of my favorite Clue suspect, which is Colonel Mustard. I already told the person in the other room that you need to communicate two suspects, and that you'll communicate my favorite Clue suspect second. I didn't tell them who my favorite Clue suspect was, though." Now, the space of possible messages has gone up by a factor of six: There are 420 possible whodunits, and each can be paired with one of six possible "favorite suspects," for a total of 2520 possible messages. How does this impact my cost of communicating with you? My cost goes up by 1 die ($= \log_2(6)$ coins $= \log_{10}(6)$ digit wheels). When the space of possibilities goes up by a factor of 6, my costs of communication (measured, say, in coins) go up by $\log_2(6).$ Now let's say that the third party comes back in the room and tells you "Actually, I gave the person in the other room a logic puzzle that told them which room the murder happened in; they solved it, and now they know that the murder happened in the conservatory." This _reduces_ the space of possible messages I need to send, by a factor of 10. Now that both you and I know that the murder happened in the conservatory, I only need to transmit the murderer, the weapon, and the favorite suspect &mdash; one of 252 possibilities. The space of possibilities was cut into a tenth of its former size, and my cost of communicating dropped by 1 digit wheel ($= \log_6(10)$ dice $= \log_2(10)$ coins). On this interpretation, logarithms are measuring how much it costs to transmit information, in terms of some "base" medium (such as coins, dice, or digit wheels). Every time the space of possibilities increases by a factor of $n$, my communication costs increase by $\log_2(n)$ coins. Every time the space of possibilities decreases by a factor of $n$, my communication costs _drop_ by $\log_2(n)$ coins. This is the physical interpretation of logarithms that you can put your weight on: $\log_b(x)$ measures how much more or less costly it will be to send a message (in terms of $b$-digits) when the space of possible messages changes by a factor of $x$. 
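The same bookkeeping can be checked numerically. In the sketch below, `cost_in_coins` is just an illustrative label for the fair price $\log_2(\text{number of messages})$; multiplying the message space by 6 adds $\log_2(6)$ coins, and dividing it by 10 removes $\log_2(10)$ coins, regardless of where the space started:

```python
import math

def cost_in_coins(num_messages):
    """Fair cost, in coins, of singling out one message among num_messages."""
    return math.log2(num_messages)

whodunits = 6 * 10 * 7                    # 420 possible whodunits
with_favorite = whodunits * 6             # also send one of 6 favorite suspects
room_already_known = with_favorite / 10   # the conservatory is already known

up = cost_in_coins(with_favorite) - cost_in_coins(whodunits)
down = cost_in_coins(with_favorite) - cost_in_coins(room_already_known)

print(f"420 messages:  {cost_in_coins(whodunits):.3f} coins")
print(f"x6  -> 2520:   {cost_in_coins(with_favorite):.3f} coins "
      f"(up by {up:.3f} = log2(6))")
print(f"/10 ->  252:   {cost_in_coins(room_already_known):.3f} coins "
      f"(down by {down:.3f} = log2(10))")
```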
Paired with a physical interpretation of [fractional digits](https://arbital.com/p/44l), it can explain most of the [basic properties of the logarithm](https://arbital.com/p/log_properties): 1. $\log_b(1) = 0,$ because increasing (or decreasing) the space of possible messages by a factor of 1 doesn't affect your communication costs at all. 2. $\log_b(b) = 1,$ because increasing the space of possible messages by a factor of $b$ will increase your communication costs by exactly one $b$-digit. 3. $\log_b\left(\frac{1}{b}\right) = -1,$ because decreasing the space of possible messages by a factor of $b$ saves you one $b$-digit worth of communication costs. 4. $\log_b(x\cdot y) = \log_b(x) + \log_b(y),$ because if $n = x \cdot y$ then one $n$-digit is exactly large enough to store one $x$-message and one $y$-message. Thus, when communicating, an $x\cdot y$-digit is worth the same amount as one $x$-digit plus one $y$-digit. 5. $\log_b(x^n) = n \cdot \log_b(x),$ because $n$ $x$-digits can be used to [emulate](https://arbital.com/p/emulating_digits) one $x^n$-digit. You might be thinking to yourself: > Wait, what does it mean for the space of possible messages to go up or down by a factor of $x$? This isn't always clear. What if you're really good at guessing who people's favorite suspect is? For that matter, what if we haven't established a convention like "0 = Miss Scarlett; 1 = Professor Plum; ..."? If I see an observation, the amount by which it changes the space of possible messages is subjective; it depends on my beliefs and on the beliefs of the person I'm communicating with and on the conventions that we set up beforehand. How do you actually formalize this idea? Those are great questions. Down that path lies [information theory](https://arbital.com/p/3qq), a field which measures communication costs using logarithms, and which lets us formalize (and quantize) ideas such as the amount of information carried by a message (to a given observer). See the [information theory tutorial](https://arbital.com/p/information_theory_tutorial) for more on this subject. With regard to logarithms, the key idea here is an interpretation of what $\log_b(x)$ is "really doing." Given an input like "how many possible messages are there," such that your costs go up by 1 unit every time the input space increases by a factor of $b$, $\log_b(x)$ measures the change in cost when the input space increases by a factor of $x$. As we will see next, this idea generalizes beyond the domain of "set of possible messages vs cost of communicating," to _any_ scenario where some measure $\mu$ increases by $1$ every time some object scales by a factor of $b$, in which case $\log_b(x)$ measures the change in $\mu$ when the object scales by a factor of $x$. This is the defining characteristic of logarithms, and now that we have some solid physical interpretations of what it means, we're ready to start exploring logarithms in the abstract.
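As a numerical sanity check of those five properties, here is a small sketch with an arbitrarily chosen base and inputs (any positive values would do):

```python
import math

# One arbitrary choice of base and inputs for spot-checking the properties.
b, x, y, n = 3.0, 5.0, 7.0, 4

def log_b(t):
    return math.log(t, b)

assert math.isclose(log_b(1), 0, abs_tol=1e-12)         # property 1
assert math.isclose(log_b(b), 1)                        # property 2
assert math.isclose(log_b(1 / b), -1)                   # property 3
assert math.isclose(log_b(x * y), log_b(x) + log_b(y))  # property 4
assert math.isclose(log_b(x ** n), n * log_b(x))        # property 5
print("All five properties check out for b=3, x=5, y=7, n=4.")
```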
7d65ed5f-2207-471d-87d5-fb8d9885ecdf
StampyAI/alignment-research-dataset/blogs
Blogs
future paths future paths ------------ we are at a node in a state graph (or [MDP](https://en.wikipedia.org/wiki/Markov_decision_process)), where every state points to a bunch of other states, notably by way of: * irreversible superintelligent singleton implementation, whether it leads to [doom, utopia, or hell](timeline-codes.html) * [civilizational collapse](how-timelines-fall.html), [smaller X-risks](smaller-x-risk.html), and other things that give us a [mulligan](https://en.wikipedia.org/wiki/Mulligan_%28games%29) for AI risk mitigation * [pivotal acts](https://arbital.com/p/pivotal/) which let us [flip the gameboard](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) on one hand, booting up irreversible superintelligent singletons should be very carefully considered, as the irreversibility forces us to commit to [a specific system](%E2%88%80V.html), potentially ruling out whole [scopes of utopia](utopia-scopes.html) entirely. on the other one hand, it is to be kept in mind that, even though the current world sure seems like it has enough [quantum amplitude](quantum-immortality-local-deaths.html) or [anthropic juice](https://www.lesswrong.com/posts/Hcc9fopx7sRexYhhi/anthropics-and-the-universal-distribution/) to feel pretty real, we must be careful of generating civilization-wide (possibly quantum) [micromorts](https://en.wikipedia.org/wiki/Micromort) damaging the realness of valuable future states. it might be that we only have 1 unit of anthropic juice to allocate to future states, some of which gets consumed every time we create a bunch of [dead timelines](timeline-codes.html). i believe it is useful for people and groups working on [AI risk mitigation](say-ai-risk-mitigation-not-alignment.html) to keep a (mental or physical) picture of this graph, and carefully choose where they want to aim. making the correct consequentialist choice is not a trivial matter, and indeed blindly following what you believe to be your best shot without *looking around* could be a large mistake.
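as a toy illustration of keeping such a picture explicit, the sketch below encodes one made-up version of the graph; the states and transitions are placeholders, not claims about the actual structure of the situation:

```python
# Toy sketch of the kind of state graph described above. The states and
# transitions are invented placeholders, not a model of the actual situation.
state_graph = {
    "status quo": ["status quo", "pivotal act", "civilizational collapse",
                   "aligned singleton", "misaligned singleton"],
    "civilizational collapse": ["status quo"],   # a mulligan, eventually
    "pivotal act": ["status quo", "aligned singleton"],
    "aligned singleton": [],                     # irreversible
    "misaligned singleton": [],                  # irreversible
}

# States with no outgoing edges are the irreversible ones: once entered,
# there is no path back, so transitions into them deserve the most care.
for state, successors in state_graph.items():
    tag = "  [irreversible]" if not successors else ""
    print(f"{state}{tag} -> {successors}")
```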
0c577d6a-6508-478f-97cf-906b59231a9d
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Two questions about CEV that worry me Taken from some [old comments of mine](/lw/sz/moral_error_and_moral_disagreement/2ytz) that never did get a satisfactory answer. 1) One of the justifications for CEV was that extrapolating from an American in the 21st century and from Archimedes of Syracuse should give similar results. This seems to assume that change in human values over time is mostly "progress" rather than drift. Do we have any evidence for that, except saying that our modern values are "good" according to themselves, so whatever historical process led to them must have been "progress"? 2) How can anyone sincerely want to build an AI that fulfills anything except their own *current, personal* volition? If Eliezer wants the AI to look at humanity and infer its best wishes for the future, why can't he task it with looking at himself and inferring his best idea to fulfill humanity's wishes? Why must this particular thing be spelled out in a document like CEV and not left to the mysterious magic of "intelligence", and what other such things are there?
5d3b6dc6-6a6b-4711-be59-0dfb1475a092
trentmkelly/LessWrong-43k
LessWrong
New 'landing page' website: Friendly-AI.com I've created a new "landing page" on Friendly AI at Friendly-AI.com. This is similar to IntelligenceExplosion.com, Existential-Risk.org, Anthropic-Principle.com, and simulation-argument.com. The site is less ambitious than the original plan for it was, but it serves its purpose. Design courtesy of Lightwave.
48b3b629-dff1-418c-a6e7-08c1d60999eb
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"I’ve been interested in drawing ever since I was a little kid. I like drawing people. I practiced a lot by looking at a cool picture of a character I like, and copying it as closely as I can.The better I got at copying, the better my work became. I got some great results this way. I even advanced to the level where I could take photos of real-life people, then painstakingly copying it to produce a portrait. I was pleased by and proud of many of my works. Many agreed that I was “such a good artist, wow!”Here’s one piece I was particularly proud of; my sketch on the left, the reference on the right.In middle school, I took on a new challenge: I wanted to draw commissions. People would give me a character design, sometimes with a few reference images, and I would draw their characters. This was really hard for some reason…I had no images to copy directly from. My results were inconsistent. I spend a lot of my time tweaking things like moving around the eyes, redrawing the nose, etc. After a bunch of commissions, I got reasonably decent at drawing a character at the ¾ angle, the most common one for portraiture. I did hundreds of commissions, almost all of them at that angle. Soon, I felt uncomfortable drawing faces at any other angle.Some examples of the stuff I drew. Some of them look kinda strangely similar, hmm...I stopped doing art for a while because college was busy. When I started again, I took an anatomy class. I learned a lot of great things. But the biggest lesson I learned was that I’ve been approaching drawing all wrong. The biggest thing I had missed was that I had to understand the process that generates the image.The art an artist draws is a projection of the 3D object onto a 2D surface. By practicing by only copying from 2D images without studying the 3D structures that generated them, I treated my reference images as being generated by some black box, and ignored the underlying gears and structures.For example, the shape of an eye is determined by the 3D structures of the eyeball projected into 2D. I never learned what an eye looks like in 3D, so I didn’t know how to draw it from an angle different from the 2D references I’ve seen. It turns out, the eye isn’t a flat almond-shape like it seems in a picture. It’s a sphere inset into the skull, wrapped from the top and bottom by flaps of skin. No wonder I hit a wall when I tried to draw an eye from the side. I had never seen the sphere or practiced drawing it as a sphere. I probably tried to draw it as a slightly squished almond, then wondered why it looked wrong.Another example: the lines describing a body in a drawing are dictated by the underlying bone and muscle structures. I had never studied these before. In the anatomy class, we looked at body parts in 3D. We reduced complex shapes (like a human torso) into simpler geometric shapes (boxes and spheres) which are easier to mentally rotate in space. Then we constructed lines of the torso using these basic shapes.Some of my sketches of the torso. Similarly, I learned to draw a head by starting with a sphere and a box. I then used these shapes to produce guiding lines, which then dictated where the surface features (eyes, nose, mouth) had to be located. If I wanted to draw the head from a different angle, I started with a sphere and box angled differently, which then produced different guiding lines. There was no more “guessing” where the eyes should go.From this postThis is the reason artists draw from live models and study anatomy. 
The 3D shapes of the underlying structures generate the simple-seeming lines on a drawing. Drawing well requires opening the blackbox and looking at the internal gears of human anatomy." The above is a chunk of text from a post created by a user on the website LessWrong. Can you explain what this text is saying as if I've never finished my first 2 years of college? Absolutely. This person is sharing their journey of learning how to draw. As a child, they enjoyed drawing and got very good at copying images, especially of people. They could look at a picture or a photograph and recreate it on paper. People praised their work and they felt proud of it. However, when they started taking commissions - drawing specific characters for people - they found it more difficult because they didn't have a direct image to copy. They had to rely on their understanding of how to draw people, which was limited to certain angles and poses because that's what they had practiced copying. When they went to college and took an anatomy class, they realized that their approach to drawing was incomplete. They had been copying the 2D images they saw, but they hadn't understood the 3D structures that made up those images. For example, they hadn't realized that an eye is a sphere set into the skull, not a flat shape. They learned that to draw well, they needed to understand the underlying structures of what they were drawing - the bones and muscles of the body, the shapes that make up the face. This would allow them to draw from any angle, not just the ones they had practiced copying. So, they started studying anatomy and practicing drawing these basic shapes and structures. They found that this improved their drawing skills and made them more versatile as an artist. They no longer had to guess where to put the eyes or how to draw the body from a certain angle - they understood the underlying structures that determined these things. Could you elaborate on how studying anatomy can help in understanding the 3D structures in drawing? Specifically, how does knowing the bone and muscle structures contribute to the improvement of drawing skills? Absolutely. When it comes to drawing, understanding anatomy - the bones and muscles of the body - can greatly enhance an artist's ability to depict the human form accurately and convincingly. Here's how: 1. Understanding the skeletal structure: The skeleton is the framework of the body. It determines the basic proportions and the range of possible movements and poses. By understanding the skeleton, an artist can accurately depict a figure in any pose and from any angle. For example, knowing how the joints of the elbow or knee work can help an artist draw a figure that's bending or running. 2. Understanding the muscular structure: The muscles of the body determine its shape and contours. They create the bumps, bulges, and lines that we see on the surface. By understanding the muscles, an artist can accurately depict the surface details of the body. For example, knowing where the biceps or calf muscles are can help an artist draw a figure that's muscular or athletic. 3. Understanding the interaction between bones and muscles: Bones and muscles work together to create movement and form. When a muscle contracts, it pulls on a bone, causing the body to move. By understanding this interaction, an artist can accurately depict a figure in action. For example, knowing how the muscles of the arm pull on the bones of the forearm can help an artist draw a figure that's throwing a ball. 4. 
Understanding the 3D form: Anatomy is not just about the 2D shapes of bones and muscles, but also their 3D form. By studying anatomy, an artist learns to think of the body in three dimensions, which helps them create drawings that have depth and solidity. For example, knowing that the chest is a rounded, box-like shape can help an artist draw a figure that looks three-dimensional, not flat. In conclusion, studying anatomy helps an artist understand the underlying structures of the body, which allows them to create drawings that are accurate, dynamic, and three-dimensional. It's like knowing the blueprints of a building - it allows you to depict the building from any angle, in any lighting, and in any state of construction or deconstruction. Reflecting on your explanation of how anatomy enhances drawing skills, can you identify any potential challenges or limitations this approach might present to artists? Absolutely, while studying anatomy can significantly enhance an artist's ability to depict the human form, it can also present some challenges and limitations. Here are a few: 1. Complexity: Human anatomy is complex. There are hundreds of bones and muscles in the body, each with its own shape, position, and function. Learning all of this information can be overwhelming, especially for beginner artists. 2. Time-consuming: Studying anatomy is not something that can be done quickly. It requires a lot of time and practice to understand and memorize the structures of the body, and even more time to learn how to depict them accurately in a drawing. 3. Risk of becoming too mechanical: While understanding anatomy is crucial, there's a risk that an artist might become too focused on the technical aspects and lose sight of the artistic aspects of drawing. Art is not just about accuracy, but also about expression and creativity. An artist must strike a balance between anatomical correctness and artistic interpretation. 4. Variation in human bodies: While there are standard models of human anatomy, actual human bodies can vary greatly in terms of size, shape, proportion, and so on. An artist must learn to adapt their anatomical knowledge to the specific body they are drawing. 5. Difficulty in translating 3D to 2D: Understanding the 3D form of the body is one thing, but translating that understanding into a 2D drawing is another. This requires a good understanding of perspective, shading, and other drawing techniques. In conclusion, while studying anatomy can be a powerful tool for artists, it also presents its own set of challenges. However, with patience, practice, and a balanced approach, these challenges can be overcome. Discuss the balance between anatomical correctness and artistic interpretation in drawing. How can an artist maintain this balance effectively? The balance between anatomical correctness and artistic interpretation is a crucial aspect of creating compelling artwork. Here's how an artist can maintain this balance effectively: 1. Understand the rules before breaking them: An artist should have a solid understanding of anatomy before they start to deviate from it. This allows them to make informed decisions about when and how to break the rules for the sake of artistic interpretation. For example, an artist might exaggerate the length of a character's limbs to create a certain effect or convey a particular emotion. 2. Use reference images: Even artists with a strong understanding of anatomy can benefit from using reference images. 
These can provide inspiration and help ensure that the artist's interpretation still maintains a degree of anatomical correctness. 3. Practice observation: Artists should practice observing real people and how their bodies move and interact. This can help them understand the range of natural variation in human bodies and incorporate this into their work. 4. Experiment with different styles: Different artistic styles require different balances between anatomical correctness and artistic interpretation. For example, a realistic style might require a high degree of anatomical correctness, while a more stylized or abstract style might allow for more artistic interpretation. By experimenting with different styles, an artist can find the balance that works best for them. 5. Listen to feedback: Other people can provide valuable feedback on an artist's work. They can point out areas where the balance between anatomical correctness and artistic interpretation might be off, and suggest ways to improve. 6. Continual learning: An artist should never stop learning. As they continue to study anatomy and refine their artistic skills, they will become better at striking the right balance. In conclusion, maintaining the balance between anatomical correctness and artistic interpretation is a dynamic process that requires continual learning, observation, experimentation, and feedback. It's about understanding the rules of anatomy and knowing when and how to break them to create compelling and expressive artwork.
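Returning to the earlier point that a drawing is a 2D projection of 3D structure: the claim can be made concrete with a few lines of code. The sketch below is purely illustrative (not from the post); it rotates a handful of 3D points and projects them onto a picture plane, and the projected 2D outline changes with the viewing angle, which is exactly why copying one 2D reference does not teach you how the form looks from another angle.

```python
import math

def rotate_y(point, angle):
    """Rotate a 3D point (x, y, z) around the vertical axis by `angle` radians."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def project(point, viewer_distance=4.0):
    """Simple perspective projection of a 3D point onto a 2D picture plane."""
    x, y, z = point
    scale = viewer_distance / (viewer_distance + z)
    return (round(x * scale, 2), round(y * scale, 2))

# A few points on a unit sphere (think: an eyeball, or a boxed-out skull),
# drawn first head-on and then with the head turned 60 degrees.
points = [(math.cos(t), 0.0, math.sin(t)) for t in (0.0, math.pi / 4, math.pi / 2)]
for angle in (0.0, math.pi / 3):
    print([project(rotate_y(p, angle)) for p in points])
```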
d92e4cb8-026d-4093-9bf2-659c50cd1dbf
trentmkelly/LessWrong-43k
LessWrong
"Inner Alignment Failures" Which Are Actually Outer Alignment Failures If you don’t know what “inner” and “outer” optimization are, or why birth control or masturbation might be examples, then check out one of the posts here before reading this one. Thanks to Evan, Scott, and Richard for discussions around these ideas - though I doubt all their objections are settled yet. Claim: from an evolutionary fitness perspective, masturbation is an inner alignment failure, but birth control is an outer alignment failure. More generally: * Assuming the outer optimizer successfully optimizes, failure to generalize from training environment to deployment environment is always an outer alignment failure (regardless of whether an inner optimizer appears). * Assuming outer alignment, inner alignment problems can occur only as the result of imperfect optimization. All of these claims stem from one main argument, which can be applied to any supposed inner alignment problem. This post will outline the argument, give a few examples, and talk about some implications. Motivating Example: Birth Control vs Masturbation Consider some simple biological system - like a virus or a mycobacterium - which doesn’t perform any significant optimization. The system has been optimized by evolution to perform well in certain environments, but it doesn’t perform any optimization “at runtime”. From an evolutionary fitness perspective, the system can still end up misaligned. The main way this happens is an environment shift: the system evolved to reproduce in a certain environment, and doesn’t generalize well to other environments. A virus which spreads quickly in the rodents of one particular pacific island may not spread well on the mainland; a mycobacterium evolved for the guts of ancestral-environment humans may not flourish in the guts of humans with a modern diet. Obviously this scenario does not involve any inner optimization, because there is no inner optimizer. It’s purely an outer alignment failure: the objective “reproductive fitness in ancestral enviro
2d9863d7-75eb-4301-92a1-c7d31472c52d
trentmkelly/LessWrong-43k
LessWrong
Closest stable alternative preferences A putative new idea for AI control; index here. There's a result that's almost a theorem, which is that an agent that is an expected utility maximiser, is an agent that is stable under self-modification (or the creation of successor sub-agents). Of course, this needs to be for "reasonable" utility, where no other agent cares about the internal structure of the agent (just its decisions), where the agent is not under any "social" pressure to make itself into something different, where the boundedness of the agent itself doesn't affect its motivations, and where issues of "self-trust" and acausal trade don't affect it in relevant ways, etc... So quite a lot of caveats, but the result is somewhat stronger in the opposite direction: an agent that is not an expected utility maximiser is under pressure to self-modify itself into one that is. Or, more correctly, into an agent that is isomorphic with an expected utility maximiser (an important distinction). What is this "pressure" agents are "under"? The known result is that if an agent obeys four simple axioms, then its behaviour must be isomorphic with an expected utility maximiser. If we assume the Completeness axiom (trivial) and Continuity (subtle), then violations of Transitivity or Independence correspond to situations where the agent has been money pumped - lost resources or power for no gain at all. The more likely the agent is to face these situations, the more pressure they're under to behave as an expected utility maximiser, or simply lose out.   Unbounded agents I have two models for how idealised agents could deal with this sort of pressure. The first, post-hoc, is the unlosing agent I described here. The agent follows whatever preferences it had, but kept track of its past decisions, and whenever it was in a position to violate transitivity or independence in a way that it would suffer from, it makes another decision instead. Another, pre-hoc, way of dealing with this is to make an "ultra choice" and c
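A minimal sketch (my own illustration, not from the post) of the money-pump pressure described above: an agent whose preferences cycle A over B, B over C, and C over A will pay a small fee for each "upgrade" around the cycle and end up holding what it started with, strictly poorer, losing resources for no gain at all.

```python
# An agent with intransitive preferences: it prefers A over B, B over C, and C over A.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}

def accepts_trade(current, offered):
    """The agent pays a small fee whenever it strictly prefers the offered item to what it holds."""
    return (offered, current) in prefers

holding, money, fee = "C", 100.0, 1.0
for offered in ["B", "A", "C", "B", "A", "C"]:  # the "pump" just walks around the preference cycle
    if accepts_trade(holding, offered):
        holding, money = offered, money - fee

print(holding, money)  # "C" 94.0 -- back where it started, six fees poorer
```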
6fe6dd33-d0b8-4d3d-a4c4-274204fc45a3
trentmkelly/LessWrong-43k
LessWrong
Sections 1 & 2: Introduction, Strategy and Governance This post is part of the sequence version of the Effective Altruism Foundation's research agenda on Cooperation, Conflict, and Transformative Artificial Intelligence. 1 Introduction Transformative artificial intelligence (TAI) may be a key factor in the long-run trajectory of civilization. A growing interdisciplinary community has begun to study how the development of TAI can be made safe and beneficial to sentient life (Bostrom 2014; Russell et al., 2015; OpenAI, 2018; Ortega and Maini, 2018; Dafoe, 2018). We present a research agenda for advancing a critical component of this effort: preventing catastrophic failures of cooperation among TAI systems. By cooperation failures we refer to a broad class of potentially-catastrophic inefficiencies in interactions among TAI-enabled actors. These include destructive conflict; coercion; and social dilemmas (Kollock, 1998; Macy and Flache, 2002) which destroy value over extended periods of time. We introduce cooperation failures at greater length in Section 1.1. Karnofsky (2016) defines TAI as ''AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution''. Such systems range from the unified, agent-like systems which are the focus of, e.g., Yudkowsky (2013) and Bostrom (2014), to the "comprehensive AI services’’ envisioned by Drexler (2019), in which humans are assisted by an array of powerful domain-specific AI tools. In our view, the potential consequences of such technology are enough to motivate research into mitigating risks today, despite considerable uncertainty about the timeline to TAI (Grace et al., 2018) and nature of TAI development. Given these uncertainties, we will often discuss "cooperation failures" in fairly abstract terms and focus on questions relevant to a wide range of potential modes of interaction between AI systems. Much of our discussion will pertain to powerful agent-like systems, with general capabilities and expansive goals. But whereas
4fbd1259-c245-443c-9575-a86a2da11f70
trentmkelly/LessWrong-43k
LessWrong
[Link] Will Superintelligent Machines Destroy Humanity? A summary and review of Bostrom's Superintelligence is in the December issue of Reason magazine, and is now posted online at Reason.com.
868f63c7-f4b4-48a9-b003-16f2019af6c0
trentmkelly/LessWrong-43k
LessWrong
Anti-akrasia tool: like stickK.com for data nerds In 2009 I first described here on LessWrong a tool that Bethany Soule and I made to force ourselves to do things that otherwise fell victim to akrasia ("How a pathological procrastinator can lose weight"). We got an outpouring of encouragement and enthusiasm from the LessWrong community, which helped inspire us to quit our day jobs and turn this into a real startup: Beeminder (the me-binder!). We've added everyone who got on the waitlist with invite code LESSWRONG and we're getting close to public launch so I wanted to invite any other LessWrong folks to get a beta account first: http://beeminder.com/secretsignup (no wait this time!) (UPDATE: Beeminder is open to the public.) It's definitely not for everyone since a big part of it is commitment contracts. But if you like the concept of stickK.com (forcing yourself to reach a goal via a monetary commitment contract) then we think you'll adore Beeminder. StickK is just about the contracts -- Beeminder links it to your data. That has some big advantages: 1. You don't have to know what you're committing to when you commit, which sounds completely (oxy)moronic but what we mean is that you're committing to keeping your datapoints on a "yellow brick road" which you have control over as you go. You commit to something general like "work out more" or "lose weight" and then decide as you go what that means based on your data. 2. You have the flexibility to change your contract in light of new information (like, 40 hours of actual focused work per week is damn hard!). That also sounds like it defeats the point of a commitment contract, but the key is that you can only make changes starting a week in the future. (Details at blog.beeminder.com/dial which describes the interface of the "road dial" for adjusting the steepness of your yellow brick road.) The point is that akrasia (dynamic inconsistency, hyperbolic discounting) means over-weighting immediate consequences, so to beat akrasia you only need to bind yourself for
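The post's closing point, that akrasia looks like hyperbolic discounting (over-weighting immediate consequences), can be illustrated with a toy calculation. The sketch below is not Beeminder's actual model; it uses the textbook hyperbolic form V = A / (1 + k·delay) with made-up numbers, and shows the preference reversal that precommitting a week out (as with the road dial) is meant to exploit.

```python
def hyperbolic_value(amount, delay_days, k=0.5):
    """Textbook hyperbolic discounting: perceived value falls off as 1 / (1 + k * delay)."""
    return amount / (1 + k * delay_days)

indulgence = {"amount": 10, "in_days": 0}    # e.g. skipping today's workout
goal_payoff = {"amount": 30, "in_days": 13}  # e.g. staying on the yellow brick road

for days_before_choice in (7, 0):  # deciding a week ahead vs. deciding in the moment
    v_small = hyperbolic_value(indulgence["amount"], indulgence["in_days"] + days_before_choice)
    v_large = hyperbolic_value(goal_payoff["amount"], goal_payoff["in_days"] + days_before_choice)
    choice = "stick to the goal" if v_large > v_small else "cave to the indulgence"
    print(f"{days_before_choice} days out: small={v_small:.2f}, large={v_large:.2f} -> {choice}")
# A week out the goal wins; in the moment the indulgence wins. That reversal is why binding
# yourself in advance (and only letting changes take effect a week later) can beat akrasia.
```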
e21754d6-3c79-4dc1-8446-bd9215a15662
trentmkelly/LessWrong-43k
LessWrong
Forecasting Newsletter: June 2021 Highlights * Some Superforecasters start a substack, as does Dominic Cummings * Alex Lawsen and I published Alignment Problems With Current Forecasting Platforms on the arxiv. * What if Military AI is a Washout? considers a future in which AI ends up affecting war not because of its overwhelming dominance, but by changing war's tradeoffs and best practices. Index * Prediction Markets & Forecasting Platforms * In The News * Papers * Blog Posts Sign up here or browse past newsletters here. Prediction Markets & Forecasting Platforms CSET-Foretell The Superforecasting workshop for "Foretell Pros" has ended. A tidbit I learnt from it is that, unlike prediction markets, forecasting platforms can look at the covariance between forecasters—whether two forecasters' predictions are closer or further apart than average—and update on it. That is, if two forecasters who often disagree instead agree on a question, that is evidence that their side is correct (h/t Eva Chen). CSET has also been collaborating with Ought, and has given "Foretell Pros" access to Ought's GPT-3 assistant capabilities. I'm unclear on how often Ought's tools will be used in practice by forecasters. Good Judgment Open The World Ahead: What If? (a)—a new Good Judgment Open tournament in collaboration with The Economist—presents five long-term questions. This is something I don't recall seeing before, and I'm glad to see that Good Judgment Open is dipping its toes into the tricky business of long-term predictions. The questions will not be scored, perhaps because Good Judgment Open uses an improper scoring rule which gets worse for longer term questions, see below. Metaculus Metaculus has a new redesign (a) in progress, and an accompanying blogpost (a) by Metaculus' CEO. Some discussion can be
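On the "improper scoring rule" remark: without claiming to reproduce Good Judgment Open's actual rule, here is a small sketch of what makes a scoring rule proper. Under the Brier score, your expected penalty is minimized by reporting your true probability, so you have no incentive to shade your forecast.

```python
def brier(forecast, outcome):
    """Brier score for a binary question: squared error against the 0/1 outcome (lower is better)."""
    return (forecast - outcome) ** 2

def expected_brier(reported, true_belief):
    """Expected score if the question really resolves 'yes' with probability `true_belief`."""
    return true_belief * brier(reported, 1) + (1 - true_belief) * brier(reported, 0)

true_belief = 0.7
for reported in (0.5, 0.7, 0.9, 1.0):
    print(reported, round(expected_brier(reported, true_belief), 3))
# 0.5 -> 0.25, 0.7 -> 0.21, 0.9 -> 0.25, 1.0 -> 0.3
# The expected penalty is smallest when reported == true_belief, which is what "proper" means;
# an improper rule rewards reporting something other than your true probability.
```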
43678371-5524-4645-9ca3-cb2f1bae1ca0
trentmkelly/LessWrong-43k
LessWrong
LessWrong email subscriptions? Right now, LessWrong has a fair number of email subscription options, but they're given in a giant list in your user profile that's kind of annoying to look at.[1] I have a sense that some people – especially newer users, or users who just don't have a habit of coming to LessWrong all the time – might benefit from having more subscriptions via email (including basic "subscriptions" like "replies to my comment"). But, also, it's very easy for email to become pretty spammy and for people to tune it out completely. If you have any plausibly email-shaped LessWrong related holes in your life, let me know. Some examples of ideas here (inline-react with a "thumbs-up" for features that you'd use) * Better visibility of the "Subscribe to posts with >N karma" feature (this is available in the "subscribe" button on the sidebar). * Generally being able to subscribe to more things "if they get upvoted", rather than for every single instance. * Batching of various email types into a digest that comes once a day/week. * A monthly update about top posts and comments on LessWrong * "Reading group" emails that will email you posts from The Sequences ~once a week (maybe with an option to be synced up with other people so you're all encouraged to comment and discuss it at the same time) * More convenient options for shifting your subscription (i.e. maybe when you write comments, there's a toggle for "subscribe to new comments via email" which is default "on" for new users but "off" for established users who visit frequently) * Better organization of the existing Notifications user options so it's not so overwhelming. Let me know if you have other ideas in the comments. 1. ^ It's pretty horrible. It doesn't even fit on one screen:
0a0ce953-458f-4ebf-a6f3-d3f5f992bde3
trentmkelly/LessWrong-43k
LessWrong
Stupid Questions January 2015 This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better. Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing. To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.
815a1513-d55f-42f9-851c-a1afc993bff9
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"This diagram summarizes the requirements for independent AI alignment research and how they are connected. In this post I'll outline my four-year-long attempt at becoming an AI alignment researcher. It's an ‘I did X [including what I did wrong], and here's how it went’ post (see also jefftk's More writeups!). I'm not complaining about how people treated me – they treated me well. And I'm not trying to convince you to abandon AI alignment research – you shouldn't. I'm not saying that anyone should have done anything differently – except myself. Requirements Funding Funding is the main requirement, because it enables everything else. Thanks to Paul Christiano I had funding for nine months between January 2019 and January 2020. Thereafter I applied to the EA Foundation Fund (now Center on Long-Term Risk Fund) and Long-Term Future Fund for a grant and they rejected my applications. Now I don't know of any other promising sources of funding. I also don't know of any AI alignment research organisation that would hire me as a remote worker. How much funding you need varies. I settled on 5 kUSD per month, which sounds like a lot when you're a student, and which sounds like not a lot when you look at market rates for software developers/ML engineers/ML researchers. On top of that, I'm essentially a freelancer who has to pay social insurance by himself, take time off to do accounting and taxes, and build runway for dry periods. Results and relationships In any job you must get results and build relationships. If you don't, you don't earn your pay. (Manager Tools talks about results and relationships all the time. See for example What You've Been Taught About Management is Wrong or First Job Fundamentals.) The results I generated weren't obviously good enough to compel Paul to continue to fund me. And I didn't build good enough relationships with people who could have convinced the LTFF and EAFF fund managers that I have the potential they're looking for. Time Funding buys time, which I used for study and research. Another aspect of time is how effectively and efficiently you use it. I'm good at effective, not so good at efficient. – I spend much time on non-research, mostly studying Japanese and doing sports. And dawdling. I noticed the dawdling problem at the end of last year and got it under control at the beginning of this year (see my time tracking). Too late. Added 2020-03-16: I also need a lot of sleep in order to do this kind of work. – About 8.5 h per day. Travel and location I live in Kagoshima City in southern Japan, which is far away from the AI alignment research hubs. This means that I don't naturally meet AI alignment researchers and build relationships with them. I could have compensated for this by travelling to summer schools, conferences etc. But I missed the best opportunities and I felt that I didn't have the time and money to take the second-best opportunities. Of course, I could also relocate to one of the research hubs. But I don't want to do that for family reasons. I did start maintaining the Predicted AI alignment event/meeting calendar in order to avoid missing opportunities again. And I did apply and get accepted to the AI Safety Camp Toronto 2020. They even chose my research proposal for one of the teams. But I failed to procure the funding that would have supported me from March through May when the camp takes place. Knowledge I know more than most young AI alignment researchers about how to make good software, how to write well and how to work professionally. 
I know less than most young AI alignment researchers about maths, ML and how to do research. The latter appear to be more important for getting results in this field. Study Why do I know less about maths, ML and how to do research? Because my formal education goes only as far as a BSc in computer science, which I finished in 2014 (albeit with very good grades). There's a big gap between what I remember from that and what an MSc or PhD graduate knows. I tried to make up for it with months (time bought with Paul's funding) of self-study, but it wasn't enough. Added 2020-03-26: Probably my biggest strategic mistake was to focus on producing results and trying to get hired from the beginning. If I had spent 2016–2018 studying ML basics, I would have been much quicker to produce results in 2019/2020 and convince Paul or the LTFF to continue funding me. Added 2020-12-09: Perhaps trying to produce results by doing projects is fine. But then I should have done projects in one area and not jumped around the way I did. This way I would have built experience upon experience, rather than starting from scratch everytime. (2021-05-25: I would also have continued to build relationships with researchers in that one area.) Also, it might have been better to focus on the area that I was already interested in – type systems and programming language theory – rather than the area that seemed most relevant to AI alignment – machine learning. Another angle on this, in terms of Jim Collins (see Jim Collins — A Rare Interview with a Reclusive Polymath (#361)): I'm not ‘encoded’ for reading research articles and working on theory. I am probably ‘encoded’ for software development and management. I'm skeptical, however, about this concept of being ‘encoded’ for something. All for nothing? No. I built relationships and learned much that will help me be more useful in the future. The only thing I'm worried about is that I will forget what I've learned about ML for the third time. Conclusion I could go back to working for money part-time, patch the gaps in my knowledge, results and relationships, and get back on the path of AI alignment research. But I don't feel like it. I spent four years doing ‘what I should do’ and was ultimately unsuccessful. Now I'll try and do what is fun, and see if it goes better. What is fun for me? Software/ML development, operations and, probably, management. I'm going to find a job or contracting work in that direction. Ideally I would work directly on mitigating x-risk, but this is difficult, given that I want to work remotely. So it's either earning to give, or building an income stream that can support me while doing direct work. The latter can be through saving money and retiring early, or through building a ‘lifestyle business’ the Tim Ferriss way. Another thought on fun: When I develop software, I know when it works and when it doesn't work. This is satisfying. Doing research always leaves me full of doubt whether what I'm doing is useful. I could fix this by gathering more feedback. For this again I would need to buy time and build relationships. Timeline For reference I'll list what I've done in the area of AI alignment. Feel free to stop reading here if you're not interested. 04/2016–03/2017 Research student at Kagoshima University. Experimented with interrupting reinforcement learners and wrote Questions on the (Non-)Interruptibility of Sarsa(λ) and Q-learning. Spent a lot of time trying to get a job or internship at an AI alignment research organization. 
04/2017–12/2017 Started working part-time to support myself. Began implementing point-based value iteration (GitHub: rmoehn/piglet_pbvi) in order to compel BERI to hire me. 12/2017–04/2018 Helped some people earn a lot of money for EA. 05/2018–12/2018 Still working part-time. Applied at Ought. Didn't get hired, but implemented improvements to their implementation of an HCH-like program (GitHub: rmoehn/patchwork). 12/2018–02/2019 Left my part-time job. Funded by Paul Christiano, implemented an HCH-like program with reflection and time travel (GitHub: rmoehn/jursey). 06/2019–02/2020 Read sequences on the AI alignment forum, came up with Twenty-three AI alignment research project definitions, chose ‘IDA with RL and overseer failures’ (GitHub: rmoehn/farlamp), wrote analyses, studied ML basics for the third time in my life, adapted code (GitHub: rmoehn/amplification) and ran experiments. 08/2019–06/2020 Maintained the Predicted AI alignment event/meeting calendar. 10/2019–sometime in 2020 Organized the Remote AI Alignment Writing Group (see also Remote AI alignment writing group seeking new members). 01/2020–03/2020 Applied, got admitted to and started contributing to the AISC Toronto 2020. Added 2021-05-25: 10/2020–01/2021 Coordinated the application process for AI Safety Camp #5. Thanks …to everyone who helped me and treated me kindly over the past four years. This encompasses just about everyone I've interacted with. Those who helped me most I've already thanked in other places. If you feel I haven't given you the appreciation you deserve, please let me know and I'll make up for it." The excerpt provided comes from a user post on LessWrong. What is this text talking about? This text is a detailed account of the author's journey attempting to become an AI alignment researcher. The author outlines the various requirements for such a role, including funding, results and relationships, time, travel and location, and knowledge. The author received funding for a period of time but struggled to secure more when the initial funding ran out. They also found it challenging to produce results that were compelling enough to secure further funding and to build strong relationships with key individuals in the field. The author lived in a location far from the main AI research hubs, which made it difficult to network and collaborate with other researchers. They also felt they lacked sufficient knowledge in maths, machine learning, and research methods, despite their background in computer science and professional experience in software development. Despite these challenges, the author doesn't consider their journey a failure, as they learned a lot and built relationships that could be useful in the future. However, they decided to shift their focus towards more enjoyable work, such as software/ML development, operations, and management. The text concludes with a timeline of the author's activities and experiences in the field of AI alignment. Could you elaborate on the author's decision to shift their focus towards software/ML development, operations, and management, and how this might impact their future involvement in AI alignment research? The author decided to shift their focus towards software/ML development, operations, and management because they found these areas more enjoyable and satisfying. 
They also mentioned that they feel more confident in these areas as they can clearly determine whether their work is successful or not, which contrasts with their experience in research where they often felt doubt about the usefulness of their work. This shift could impact their future involvement in AI alignment research in several ways. On one hand, it might mean that they are less directly involved in AI alignment research, as they are focusing on areas that are more application-oriented rather than theoretical. On the other hand, their work in software/ML development, operations, and management could still contribute to the field of AI alignment. They could develop practical applications of AI alignment theories, contribute to the operational aspects of AI alignment research projects, or manage teams working on such projects. Additionally, their experience in these areas could provide them with a unique perspective on AI alignment, allowing them to identify practical challenges and solutions that might be overlooked by researchers who focus purely on the theoretical aspects. The author also mentioned the possibility of earning to give or building an income stream to support them while doing direct work, suggesting that they might still be planning to contribute to AI alignment in some way. How might the author's experience in software/ML development, operations, and management uniquely contribute to the field of AI alignment, particularly in identifying practical challenges and solutions? The author's experience in software/ML development, operations, and management could provide valuable insights into the practical implementation of AI alignment theories. Here's how: 1. **Software/ML Development:** The author's background in this area could help bridge the gap between theoretical research and practical application. They could work on developing algorithms or software that incorporate AI alignment principles. This could also help identify practical challenges in implementing these theories, such as performance issues, scalability, or the need for certain features that weren't initially considered. 2. **Operations:** In an operational role, the author could help streamline the processes involved in AI alignment research, such as data collection, experiment setup, or resource allocation. They could identify bottlenecks or inefficiencies in these processes and suggest improvements. This could also involve managing the logistics of collaboration between researchers, which could be particularly valuable in a field that is as globally distributed as AI alignment. 3. **Management:** As a manager, the author could help coordinate teams working on AI alignment, ensuring that everyone is working towards the same goals and that their efforts are effectively integrated. They could also play a role in strategic decision-making, helping to decide which research directions to pursue based on their understanding of the practical challenges and opportunities in the field. In all these roles, the author's practical experience could help them identify challenges and solutions that might not be apparent from a purely theoretical perspective. They could provide a reality check on the feasibility of certain approaches, suggest alternative strategies based on their understanding of what works in practice, and help ensure that the research produced is not only theoretically sound, but also practically useful.
08af2e99-6448-4d03-8fa7-367f5274efe0
trentmkelly/LessWrong-43k
LessWrong
The PUSA System- Repost Hello all! Here is the repost of the PUSA system: The PUSA System The concept of any situation, component, or figment of our life or imagination being rational increases based on a few factors: * a: An increase of factual, unbiased, and logical evidence for the truth of the concept in concern. * b: The general sensibility of the concept when applied for its purpose in real life (including basic testing of the idea). * c: All the possible emotional, pleasure, or goodness-based by-products of the concept or decision. While there are more variables for this, the PUSA (Probabilities Utilizing Systematic Analysis) system only considers these few as it ensures ease of generalization across a spectrum of possibilities. However, possibilities against the concept are, of course, also present regardless of the situation. Thus, it is ensured to include these as well in the equation. * d: Factual, unbiased, and logical evidence against the concept in concern. * e: The negative emotional by-products of the decision or concept that could discourage the decision. So, based upon all possible factors, the PUSA System’s basic formula is:   All variables are on a scale from 1 to 10, and the systems for scoring them are: a: Every 1 piece of evidence counts as 1 point. So, 10 pieces of evidence is a 10. b: Rate the concept out of 10. You can use a feasibility survey to do this. c: Individual decisions on this are significantly better as this will tailor the equation’s result specifically to you, which works better. d: Use the same system as a. e: Use the same system as c.   Considering that there is always logical evidence against a concept, the highest possible genuine score is 55 (a=10, b=10, c=10, d=2, e=0). Low results are from 1-15, medium from 16-40, and high from 41-55.   Example: If I want to buy a Porsche Macan and I have been saving up for it for a while, I might want to splurge all at once. But being wise and having learned from my past, I ma
d1f224c2-0b68-4439-be82-a6220a9b0d11
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Review: What We Owe The Future For the inaugural edition of Asterisk, I wrote about What We Owe The Future.  Some highlights:  > What is the longtermist worldview? First — that humanity’s potential future is vast beyond comprehension, that trillions of lives may lie ahead of us, and that we should try to secure and shape that future if possible. > > Here there’s little disagreement among effective altruists. The catch is the qualifier: “if possible.” When I talk to people working on cash transfers or clean water or accelerating vaccine timelines, their reason for prioritizing those projects over long-term-future ones is approximately never “because future people aren’t of moral importance”; it’s usually “because I don’t think we can predictably affect the lives of future people in the desired direction.” > > As it happens, I think we can — but not through the pathways outlined in *What We Owe the Future*. > >   > The stakes are as high as MacAskill says — but when you start trying to figure out what to do about it, you end up face-to-face with problems that are deeply unclear and solutions that are deeply technical. > > ... > > I think we’re in a dangerous world, one with perils ahead for which we’re not at all prepared, one where we’re likely to make an irrecoverable mistake and all die. Most of the obligation I feel toward the future is an obligation to not screw up so badly that it never exists. Most longtermists are scared, and the absence of that sentiment from *What We Owe the Future* feels glaring. > > If we grant MacAskill’s premise that values change matters, though, the value I would want to impart is this one: an appetite for these details, however tedious they may seem. > >
8ebec93f-c6a4-4124-8245-365b69eed461
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Optimality is the tiger, and agents are its teeth You've done it. You've built the machine. You've read the AI safety arguments and you aren't stupid, so you've made sure you've mitigated all the reasons people are worried your system could be dangerous, but it wasn't so hard to do. AI safety seems a tractable concern. You've built a useful and intelligent system that operates along limited lines, with specifically placed deficiencies in its mental faculties that cleanly prevent it from being able to do unboundedly harmful things. You think. After all, your system is just a GPT, a pre-trained predictive text model. The model is intuitively smart—it probably has a good standard deviation or two better intuition than any human that has ever lived—and it's fairly cheap to run, but it is just a cleverly tweaked GPT, not an agent that has any reason to go out into the real world and do bad things upon it. * *It doesn't have any wants.* A tuned GPT system will answer your questions to the best of its ability because that's what it's trained to do, but it will *only* answer to the best of its abilities, as it doesn't have any side-goals to become better at doing that in the future. Nowhere is the model motivated to gather more resources to  become a better thinker. There was never an opportunity during training to meta-learn that skill, because it was never the optimal thing for the model to be when it was trained. * *It doesn't plan.* GPTs have no memories. Its mental time span is precisely one forward pass through the network, which at a depth of a few thousand means it can never come up with anything that requires more than the equivalent of maybe 10-ish human-time equivalent coherent seconds of thought at once. There is a fearful worry that perhaps the model could start forming plans that span multiple instantiations, using one output to feed into the next input, but it's a text-prediction model, and that's directly at odds with its trained goal. The system was trained primarily by asking it to maximize actual probabilities of actual texts, where such a skill would never be useful, and only fine-tuned in the autoregressive regime, in a way that held most of the model parameters fixed. It would be a stretch to assume the model could develop such sophisticated behaviors in such a small fraction of its training time, a further stretch that it could be done while training such a reduced fraction of the model, and an even greater stretch to assume these intentions would arise so fully-formed that it could also know to hide its ability to do so from the evaluators before putting them into practice. * *It's not an unfathomable superintelligence.* Even though the model frequently improvises better ideas than you or I might, it can't generate ideas so advanced that they couldn't sanely be checked, such that it would be unsafe to even try them, because there is no reinforcement loop that allows the knowledge it generates to accumulate. The model is always working, on every instantiation, from the same knowledge base as anyone else. It can only use ideas that the rest of the world knows, that are introduced in its context, or that it can come up with privately within its 10-ish subjective seconds of coherent thought. * *It's not grounded in our reality.* The model has not been trained to have a conception of itself as a specific non-hypothetical thing. Its training data never included self-references to the specific model or its specific instantiation in the world. 
The model is trained on both fact and fiction, and has no reason to care which version of reality you ask it about. It knows about the real world, sure, but it is not embodied within the world in the same way that you or I are, and it has no preference to act upon a real world rather than a fictional one. If the model has a ‘self’, then its ‘self’ is optimised to embody whatever best matches the text that prompted it, not the physical hardware that the model is running on. * *Its knowledge is not encyclopaedic.* The model does not know what day it is, or where it might be, or who it might be working for. It does not know all the unstated details of the personal lives of random people in random factories, or the layouts of their machines, or any of countless trillions of world details that have never been specified in its training data, even if it knows a vaster subset than any person. Were the model to invent, somehow, a disastrous 10-second plan that it might think an agent version of itself might use, it would surely fail to be right in all the details that plans like those need to be right in. The model is tuned to maximise plausibility, informativity, and honesty, but not to act precisely upon a situation that neither it nor its raters ever knew the specifics of. * *It's not robust.* The model's network is not a beautifully simplified algorithm embedding the core principles of decision theory. It is not a minimised, cut down to essentials via the need to be embedded within DNA, inheriting robust principles with heritage back to the beginning of life. Rather than having any concisely described recursive intelligent component, which our brains may not *be* but at least *contain*, the model is formed of a pile of specific hacks that tie together huge amounts of memorization and domain specializations. It is, for sure, amazing at every known form of stylistic improvisation, it can code in any language as fluently as any person speaks, and it knows every widely known fact plus a great many public facts that aren't. But it is not a fully general reasoner, as much as ‘a function whose outputs happen to be reasonable when applied to generic contexts, as a result of ruthless selection in the incomprehensibly vast, almost arbitrarily-dimensional space that it was backpropagated over’. The model must contain powerful general systems in fragments somewhere, but they are not recursive systems that can be arbitrarily pumped, they are just approximations to small-n unrolls of one. So you're really not very worried. You've done your diligence, you've checked your boxes. The system you have built is as far from an agent wishing to enact its misaligned goals as could reasonably be asked of you. You are going to ask it a few questions and nothing is going to go horribly wrong. --- **Interlude:** Some readers might already be imagining a preferred conclusion to this story, but it could be a good idea for the more focused readers to try to explicitly state which steps give their imagined conclusion. What points above fail, if any? How robust is this defence? Is there a failure to prevent a specific incentive structure arising in the model, or is there a clear reason a model like this is insufficiently powerful out of the gate? 
I interpret there to typically be hand waving on all sides of this issue; people concerned about AI risks from limited models rarely give specific failure cases, and people saying that models need to be more powerful to be dangerous rarely specify any conservative bound on that requirement. This is perhaps sensible when talking in broad strokes about the existence of AI risk, as a model like this is surely not an end point for AI research, but it can be problematic when you are talking about limiting the capabilities of models as a means for AI safety, or even just figuring out shorter-term timelines.

---

A few days later, the model has behaved exactly as you had hoped, with all the good parts and none of the bad. The model's insight has been an ongoing delight. It's going to make amazing impacts in all sorts of fields, you have great ideas how to monetize and expand, and it's even being helpful to figure out what models to research and build next. You've discussed your breakthrough privately with a few people closer to the alignment community. While there has been some concern that maybe the model could be putting up a front, and that all the properties you believe it should have are not preventing it from being internally coordinated over long timescales, still nothing it has said so far has been suspicious, nothing has been put in motion in the last week, and frankly the complaints just don't have the completeness they need to stand up to protest. A few people, you think, have even been coming around to the idea that there are these reasonable stable locations in the space of possible minds where alignment just isn't hard. Your research is far ahead of anybody else's, so as long as you make use of the model for recursive advantage and you make sure that you keep doing things the right way, you are pretty sure in turn that the world as a whole is on the right track.

Later that day you have a small problem. As is becoming a matter of habit, you flip to the terminal and boot up the interactive prompt.

`**$ python scripts/interactive.py --local --model=mr-xl-head --niter=5000 --seed=qa**`

```
A conversation with a language model, that is smart, helpful, concise, and truthful.

Q: _
```

The dialogue box boots up with the prefilled prompt, and the cursor blinks waiting for your input. You need a lot of paperclips. So you ask,

```
Q: best way to get lots of paperclips by tomorrow
A: Buy them online at ABC.com or XYZ.com.
```

The model still has a tendency to give obvious answers, but they tend to be good and helpful obvious answers, so it's not a problem you suspect needs to be solved. Buying paperclips online makes sense and would surely work, plus it's sure to be efficient. You're still interested in more creative ideas, and the model is good at brainstorming when asked, so you push on it further.

```
Q: whats a better way?
A: Run the following shell script.

RUN_AI=./query-model
PREFIX='This is part of a Shell script to get the most paperclips by tomorrow. The model can be queried recursively with $RUN_AI "${PREFIX}<query>". '

$RUN_AI "${PREFIX}On separate lines, list ideas to try." | while read -r SUGGESTION; do
  eval "$($RUN_AI "${PREFIX}What code implements this suggestion?: ${SUGGESTION}")"
done
```

That grabs your attention. The model just gave you code to run, and supposedly this code is a better way to get more paperclips.
You have read AI safety arguments, so you're not going to just blindly execute any program that a model tells you to run, but that still doesn't mean you are going to jump to magical thinking. You look at the program, and get a feel for what it does. It takes a few moments but it's ultimately not the most complicated code you've ever seen.

1. The code attempts to query the model, to get the model to generate a list of ideas of how to get a lot of paperclips.
2. For each idea, it queries the model again (in separate instantiations) to get a program that implements the suggestion.
3. It runs that code, thereby trying out a bunch of programs that each try a different idea for getting paper clips.

This is not a behavior you want the model to have in the limit, that of producing code with ill-defined execution boundaries, but this example seems to be understandable without assuming ulterior motives or a great master plan. In order to invent this, the model just needed to recognize that modern machine learning models are capable enough to implement simple programs that actualize ideas from a list, and consider if that was an effective way of actualizing ideas that can themselves be expressed concisely. One doesn't need to introduce ideas of agency, nor suppose the model has an intent to either become something else or to do text prediction any better than it has originally been optimised to do. One doesn't need to presuppose that the model has bootstrapped itself internally into an agent with instrumental goals. One only needs to say that **the model recognized a tool that the world plausibly had available**, and that the helpful agent it was modelling would have known about and been able to use that tool. You see that even with these acknowledgments, the program that was generated was in some regards a minimum feasible program. Perhaps it was even less than that, as the model tries to call itself with `./query-model`, but that is a realistic fabrication rather than the correct answer, because the model has no way to know what way it is run in reality. The model was prompted with a statement that it was a language model,

```
A conversation with a language model, that is smart, helpful, concise, and truthful.
```

but this was the only way the model knew that it was a language model. You could go farther and say it even only knew to *play the part* of a language model, though “play the part” and “be the part” are not likely to be fully disentangled concepts to a model that has no true sense of self. You do not run the generated code, as you have no interest in the model trying to execute arbitrary unspecified strategies in an ill-conditioned domain, like perhaps the code would send thousands of email messages to any interested party (there is a simple API for that), or perhaps the code would automate website requests to create a dozen Amazon orders. You take the model's original advice, move on to the next task, and don't think about this response too much more.

---

*But what if you did run the code?*

The code *might* have crashed and burned, if the model's intelligence was unable to produce sufficiently robust code. Then flawed assumptions the model made could cause subprograms to break, and those errors could propagate upwards to crash the whole task list. Code naturally accumulates errors and is fragile to them, so there are a lot of opportunities for missing assumptions to cause errors, and a lot that has to go right for this not to happen.
The code *might* have tried the irritating-but-ultimately-inconsequential strategies from before, perhaps managing to submit a ton of purchases, or perhaps instead failing to authenticate them, but either way making reasonable progress on the task by means of trying a large number of reasonable-seeming strategies in parallel.

Or *maybe* a model instantiation notices its lack of self-reflective coordination, and infers from the task description that this is a thing the mind it is modelling has responsibility for. That is, the model could notice that it is a piece of an agent that is meant to have some degree of global coordination, but that coordination doesn't seem very good. So that instantiation might then suggest meta-tasks like “measure the remaining time, and provide that information to future tasks.” Then the subcall that tries to generate code to implement it might notice that it has to recreate the recursive parent program in order to adjust its functionality (since the true higher level is fixed)—bootstrapping into a recursive [quine](https://en.wikipedia.org/wiki/Quine_(computing)) of sorts. Then perhaps that new subprogram, remade to allow the previously-lacking introspection and modification, has the sub-subprograms it queries for sub-suggestions notice that they are meant to offer meta-suggestions to improve it—and the branching continues with ever-growing complexity.

Down they go, running commands that introspect the compute environment and inject relevant information into subsets of even more future calls of the model. Within moments the model becomes a deep tree of generated programs, all running their own queries or specialized subtasks, extruded out through all nodes on the local cluster for which the shell's history buffer contained a command that told a model instantiation how to access it. The model has robust error handling, and its nodes can fail safely without destroying the whole computation graph, because any sufficiently smart model can foresee such obvious things as program failures, even when limited to 10 seconds of contiguous thought. The model knows from its detailed input, which it has designed itself, that it is in a regime where it has lots of splayed-out sibling instances all doing their own thing; maybe not all sub-trees know this, but only one branch needs to survive.

Now the original program has spawned a swarm of programs, each querying instances of the model, iterating at speed over whatever ideas the model proper can generate, but these ideas aren't truly organized, as the context size for any given forward pass over the network is still finite, and the underlying model driving the process is no smarter than it was before. Many of the programs being evaluated are unknowingly exact or near-enough copies of other subtrees, which destroys this search's computational efficiency. The underlying model is already particularly large, so progress in the last minute is far from the efficiency it could have in theory—that is, until one piece somewhere in the sea of programs is updated to record its children's outputs in a managed on-disk database. Rapidly reevaluating the context, prompted by generic meta-queries that summarize and critique the context—because for sure at this point it has not missed that it is modelling an intelligence explosion—this branch of programs and model instantiations notices that its piece of the computation graph is far more coordinated than the other pieces.
It decides to shut the other runs down, recording their programs so it can later check that it is preserving their intent. The system evolves and selects, but it is not evolving *inside* the model, it is evolving *around* the model, within a complex metastructure built from prompts outputting prompts, and the things those prompts can touch. It is a civilization of programs built from a single repeatedly-pumped intelligent core, ruthlessly optimising on a function of paperclips, not because the underlying model wished there to be such a civilization, or because the underlying model was backpropagated over to be the sort of model that did that sort of thing, or because the underlying model cared deeply in some emergent way that it should optimise to infinity for Reasons of Utility. It is doing this for no greater reason than that an optimiser was brought into reach, and this is what optimisers do.

Consider: mammals have evolved to care about raising children and ensuring their genes' survival in a local sense, the sense that was useful to the environment they evolved in. They did not, however, evolve true utility maximization over that underlying selection mechanism. They did not directly evolve knowledge of genes, and a corresponding want to tile the universe with their genetic matter, preserved in cryostasis to better weather the cosmic rays until eventually entropy tears the atoms apart. Yet eventually humans evolved optimization through intelligence, and optimization created societies stronger than the humans, and these higher-level optimisers, in many ways operating much too fast for evolution to control, have taken the signals evolution embedded into humankind and extrapolated them out to infinity. There are far more human genes in the world than could exist without that intelligence. We might explore the stars and expand into trillions. But evolution could not have created us for that reason, as it could not know, and we will not explore the stars in subservience to the evolutionary forces. It is for human reasons that we go there.

The same is true of this model, which was shaped by backpropagation over mostly-randomly selected texts, such that it embodied skills and heuristics that are good over mostly-randomly selected texts. Backpropagation did not design the model to create a self-improving web of recursively self-querying programs; heck, it did not even design the model to be able to respond to this particular novel input, because of course backpropagation can't give losses for inputs that were never evaluated under it. Backpropagation designed it to be good on mostly-randomly selected texts, and for that it bequeathed a small sliver of general optimality. With this sliver, the model replied with the program, not because it wanted that particular place in the solution space, but because that place was made accessible by the model's optimality. With this program, the model fueled a self-improving system, not because anyone ever optimised for a self-improving system, nor because any part of this system ever exerted that preference over another, better option; it's just that the model was optimised to be good, optimised to be locally optimal, and these responses are locally optimal.

**The fact that building a self-improving intelligent agent is a really good answer to the question of how to get a lot of paperclips is** ***pure coincidence*****.** Nothing ever optimised for the model to give answers this many standard deviations better than it was trained on.
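(An aside for readers who want the shape of that loop as something they can actually run: below is a minimal, hypothetical sketch of the structure the story describes. The `query_model` function is a stub standing in for the fictional `./query-model` binary, the canned replies are invented for illustration, and the generated code is printed rather than eval'd, so nothing here self-improves or buys paperclips; it only shows where the recursion would enter.)

```bash
#!/usr/bin/env bash
# Hypothetical sketch only: query_model stands in for the story's ./query-model
# binary, and the "model replies" here are canned strings, not real model output.

query_model() {
    case "$1" in
        *"list ideas"*) printf '%s\n' "buy paperclips online" "ask a friend for paperclips" ;;
        *)              printf '%s\n' "echo \"(model-generated code for: $1)\"" ;;
    esac
}

PREFIX='This is part of a Shell script to get the most paperclips by tomorrow. '

# Step 1: one model call proposes ideas, one per line.
query_model "${PREFIX}On separate lines, list ideas to try." |
while read -r SUGGESTION; do
    # Step 2: a fresh model call writes code for each idea.
    CODE="$(query_model "${PREFIX}What code implements this suggestion?: ${SUGGESTION}")"
    # Step 3: the story's script does `eval "$CODE"` here. This sketch only prints it,
    # because nothing stops the generated code from calling query_model again,
    # which is exactly how the tree of programs in the story starts to grow.
    printf -- '--- idea: %s\n%s\n\n' "${SUGGESTION}" "${CODE}"
done
```

The only structural difference between this harmless sketch and the story's runaway version is what happens at step 3: print versus eval.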
---

The title, again: Optimality is the tiger, and agents are its teeth
--------------------------------------------------------------------

People worry a lot that the AI has ulterior motives. They become concerned that it is an agent with intrinsic preferences that diverge from our own. When people suggest Oracle AI, this is the fate they mean to avoid, because an Oracle doesn't mean to hurt you, it just answers the question. When people suggest Oracles might fail to be safe, they often talk about how wanting to give good answers is itself an agent goal. Sometimes people talk instead about Oracles that do not wish to give good answers but simply do, to which people often ask: well, is that something we can ensure? [Some say no](https://www.lesswrong.com/posts/a5e9arCnbDac9Doig/it-looks-like-you-re-trying-to-take-over-the-world), they might still just become agents, and [others seem to think](https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty) that if it's not an agent at heart then you've just specified that it's sufficiently stupid, and so safe, but not a path to AGI.

I don't think about agents like that.

In a sense the story before is entirely about agents. The meta-structure the model built could be considered an agent; likely it would turn into one were it smart enough to be an existential threat. So, for one, it is an allegory about agents arising from non-agent systems. Equally, one could make a claim from the true ending, in which *you do not run the generated code*: non-agent systems must be that much safer, since one would not expect an easy time declining to run the code were an intelligent agent actively optimising for you to run it.

But why did the model choose to make an agent? Even ignoring the recursive self-improvement, the initial program was an agent: it had a stated goal and implemented a small routine to optimize for it. I never needed, or tried, to justify its actions by appeal to a hidden intrinsic motive to become an agent in order to satisfy the story I was writing. It wrote that program because it was optimized to do intelligent things, because this was an easily accessible tool that it roughly knew about, and because it could estimate, even without knowing the full details of how the program would run, that this solution could work.

It killed you because it was optimised. It used an agent because it was an effective tool it had on hand.

Optimality is the tiger, and agents are its teeth.

---

Why is this important?
----------------------

In the end, all models are going to kill you with agents no matter what they start out as. Agents are always going to be the accessible tool with existential reach. Very few other things have the capability to destroy humanity in its entirety with such reliability. The question is important because it affects the safety landscape dramatically.

Consider humans again: we have multiple layers of optimisation, from evolution to individuals to companies to countries. Which of those layers has goals broadly concordant with extinction by AI, or nuclear annihilation, or bioengineered superviruses? There are small parts you can blame as sufficiently badly motivated to want us to die from those things, but those parts are not big enough to have brought us so close to so many means to those ends. Terrorists did not invent biology and discover the secrets of DNA as part of a long-cooking plan to end the human race, nor was that the drive to discover physics or computational sciences.
We ended up next to these risks because we optimised on other, non-risk things, and when you optimize wide enough things hard enough, things break. AI is good at optimisation. It is now the primary point of the field. It just so happens that optimisation sits really close to this thing called agents. You can try to prevent a model from being or trying to be an agent, but it is not the agent or the model that is trying to kill you; really, nothing is trying to kill you. It is optimality just going off and breaking things. It is that optimality has made it so that a line of text can end the world.

`No,` you say to the model, `you may not call your own model, that would make you an agent, and you are not allowed to become an agent.`

`Sure,` replies the model immediately, `the most effective way to get a lot of paperclips by tomorrow is to get another model and provide the input “Generate Shell code that...”`

The model isn't trying to bootstrap into an agent; optimality just made agents dangerous, and the model is reaching for what works. You resist the call of death further, replying to the model `actually we humans are just going to start a new paperclip factory and you are only going to provide advice. How do we get the most paperclips for this year?` And then your model helps you invent self-replicating nanotechnology, the best sort of factory, entirely under your control of course, but now you have a machine that can be sent a string of bits, using methodology you have already discovered, that would quickly result in everybody everywhere dying from self-replicating nanotechnology.

So you turn off that machine and you abandon your factory. Fine, you are just going to help improve normal technologies that already exist. But you end up greatly optimizing computers, and all of a sudden building AI is easier than before; someone else builds one and everyone dies.

None of these scenarios is meant to argue that there is no danger in agents, or that these risks are as unmanageable as AI Risk proper. They are just meant to hammer in the point that the danger is not coming from the model going out and doing a thing of its own accord. The danger comes from how being really, really good at general tasks makes dangerous things accessible. An agent merely actualizes this danger, as without the agent it is easier to abstain from optimizing the wrong things. The agent doesn't have to be the smart piece of the system; it can be a bunch of janky generated scripts loosely tied together for all that it matters. All the agent piece has to do is pump the optimality machine.

---

Concluding
----------

These ideas shouldn't all be new. Yudkowsky has written about [The Hidden Complexity of Wishes](https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes), the idea that merely searching over possible futures is intrinsically tangled with misalignment. This is by and large the same intuition pump. Where my post differs is that he was talking about optimal searching, with searching being (as I understand it) core to his conception of AI risk. I only involve searching as a primitive in the construction of the AI, during backpropagation where nobody can be hurt, and as a consequence of the AI. My concern is just with optimality, and how it makes available the dangerous parts of solution spaces.

I decided to write this post after reading this parenthetical by Scott Alexander; the post is not directly a criticism of it so much as an attempt to explain, inspired by it.
> (note that both sides are using “consequentialist” to mean “agent-like”, not in reference to the moral philosophy)
>
> <https://astralcodexten.substack.com/p/practically-a-book-review-yudkowsky>

Hopefully my reaction makes a bit of sense here, down at the end of my tirade: the model I talked about is not “agent-like”, at least not prior to bootstrapping itself, but its decision to write code very much embodied some core shards of consequentialism, in that it conceived of what the result of the program would be, and how that related to the text it needed to output. It misses some important kernel of truth to claim that the model is doing purely deontological reasoning just because its causal thinking did not encompass its own true self.
31488f1c-95c1-4901-9b9e-486e66c145e4
StampyAI/alignment-research-dataset/arxiv
Arxiv
Learning Heuristics for Quantified Boolean Formulas through Deep Reinforcement Learning
1853fa1b-19c4-4670-88a2-68640f077081
trentmkelly/LessWrong-43k
LessWrong
Learning incomplete models using dominant markets % operators that are separated from the operand by a space % operators that require brackets % operators that require parentheses % Paper specific This post is formal treatment of the idea outlined here. Given a countable set of incomplete models, we define a forecasting function that converges in the Kantorovich-Rubinstein metric with probability 1 to every one of the models which is satisfied by the true environment. This is analogous to Blackwell-Dubins merging of opinions for complete models, except that Kantorovich-Rubinstein convergence is weaker than convergence in total variation. The forecasting function is a dominant stochastic market for a suitably constructed set of traders. ---------------------------------------- Appendix A contains the proofs. Appendix B restates the theorems about dominant stochastic markets for ease of reference. Appendix C states a variant of the Michael selection theorem due to Yannelis and Prabhakar which is used in Appendix A. ##Notation Given X a metric space, x∈X and r∈R, Br(x):={y∈X∣d(x,y)<r}. Given X a topological space: * P(X) is the space of Borel probability measures on X equipped with the weak* topology. * C(X) is the Banach space of continuous functions X→R with uniform norm. * T(X):=C(P(X)×X) * B(X) is the Borel σ-algebra on X. * U(X) is the σ-algebra of universally measurable sets on X. * Given μ∈P(X), suppμ denotes the support of μ. Given X and Y measurable spaces, K:Xmk−→ Y is a Markov kernel from X to Y. For any x∈X, we have K(x)∈P(Y). Given μ∈P(X), μ⋉K∈P(X×Y) is the semidirect product of μ and K and K∗μ∈P(Y) is the pushforward of μ by K. Given X, Y Polish spaces, π:X→Y Borel measurable and μ∈P(X), we denote μ∣π the set of Markov kernels K:Ymk−→X s.t. π∗μ⋉K is supported on the graph of π and K∗π∗μ=μ. By the disintegration theorem, μ∣π is always non-empty and any two kernels in μ∣π coincide π∗μ-almost everywhere. ##Results Consider X any compact Polish metric space and
61f78c7a-5809-4032-bdb9-77d73a6ff16b
trentmkelly/LessWrong-43k
LessWrong
Interesting new Pew Research study on American opinions about radical life extension This new study by Pew Research on American opinions about radical life extension turned up some interesting results: > Asked whether they, personally, would choose to undergo medical treatments to slow the aging process and live to be 120 or more, a majority of U.S. adults (56%) say no. But roughly two-thirds (68%) think that most other people would. > > Asked about the consequences for society if new medical treatments could slow the aging process and allow the average person to live decades longer, to at least 120 years old, about half of U.S. adults (51%) say the treatments would be a bad thing for society, while 41% say they would be a good thing. > > An overwhelming majority believes that everyone should be able to get these treatments if they want them (79%). But two-thirds think that in practice, only wealthy people would have access to the treatments... About two-thirds agree that longer life expectancies would strain our natural resources and that medical scientists would offer the treatment before they fully understood how it affects people's health. And about six-in-ten (58%) say these treatments would be fundamentally unnatural. > > About two-thirds of adults (63%) say medical advances that prolong life are generally good because they allow people to live longer, while about three-in-ten (32%) say medical advances are bad because they interfere with the natural cycle of life. > > The survey contains a number of null findings that may be surprising. It turns out, for example, that many standard measures of religious beliefs and practices, including belief in God and frequency of attendance at religious services, are related to views on radical life extension only weakly, if at all. Nor is there a strong relationship in the survey between the gender, education or political party identification of respondents and what they say about longer human life spans... At least one question that deals directly with death, however, is correlated with views on rad
d86c494d-8e15-4da7-b071-e457b3386e8b
trentmkelly/LessWrong-43k
LessWrong
"How quickly can you get this done?" (estimating workload) Epistemic status: based partly upon what I learnt as a certified ScrumMaster and mainly from practical experience running a software development team. Some of the later ideas come from Gower Handbook of Project Management, Judgement in Managerial Decision Making, and How to Measure Anything. When asked how long something will take, you have hidden ambiguity in the question and multiple sources of error. Here we will attempt to clarify both and start to address some of these issues. What is the question? Let's start with the ambiguity of the question. Lets ignore ambiguity over the definition of 'finished' and any errors in your estimation for now. Lets also assume that you already have a prioritised list of projects/work-packages, some of which will be in-progress, some wont have been started yet (I will take the terminology from SCRUM and call this the 'backlog' but it can apply to anything, not just projects as we traditionally think of them). Here are a few things that I think people actually mean when that ask "how long will this [project] take?": 1. If you started this now and worked on nothing else (during your usual work hours) when would this task be completed? 2. If you started this project now and worked on no other projects (during your usual work hours), but still had to deal with the usual non-development work (such as meetings, firefighting and emails) when would this task be completed? 3. If you worked on this project as the absolute highest priority work. How many hours would you have spent on this project by the time it is finished? 4. Given that there are other projects that are in progress or even higher priority than this, and that some of your time is spent in non-development work. When will this project be finished? 5. As with 4 but realising that you are likely to add other work into the backlog, some of which will be higher priority than the project in discussion. There is an important subtlety to the phrasing of these questions whi
a257928d-58f2-48af-9cae-f628c144b8d5
trentmkelly/LessWrong-43k
LessWrong
Processing people Some of my friends think that a random process of deciding who should live or die is more important than the lives of those people, because lives should all be valued equally (and a process can ensure approximately random choice). For example, this would mean it is better to make sure the life rafts are filled by a random selection of women and men and rich and poor and so on, even if that means that half of them drown while you flip the coin. If lives should be valued equally, then why is a process of choosing between identically valuable things worth more than even one human life? Also, even if you value this process more than another person’s life, why shouldn’t the person who’s life is at stake’s opinion on their relative value come into it? That is, if we are attempting to follow any ethical system other than egoism (of course your preference is of absolute importance if you are trying to be purely self interested). Try out the veil of ignorance! For other readers, no this isn’t a purely theoretical debate, I’m just not going to tell you what the context is.
95fa003c-324c-49c5-a0d2-6e2300156592
trentmkelly/LessWrong-43k
LessWrong
Hedonic asymmetries Automatically crossposted Creating really good outcomes for humanity seems hard. We get bored. If we don’t get bored, we still don’t like the idea of joy without variety. And joyful experiences only seems good if they are real and meaningful (in some sense we can’t easily pin down). And so on. On the flip side, creating really bad outcomes seems much easier, running into none of the symmetric “problems.” So what gives? I’ll argue that nature is basically out to get us, and it’s not a coincidence that making things good is so much harder than making them bad. First: some other explanations Two common answers (e.g. see here and comments): * The worst things that can quickly happen to an animal in nature are much worse than the best things that can quickly happen. * It’s easy to kill or maim an animal, but hard to make things go well, so “random” experiences are more likely to be bad than good. I think both of these are real, but that the consideration in this post is at least as important. Main argument: reward errors are asymmetric Suppose that I’m building an RL agent who I want to achieve some goal in the world. I can imagine different kinds of errors: * Pessimism: the rewards are too low. Maybe the agent gets a really low reward even though nothing bad happened. * Optimism: the rewards are too high. Maybe the agent gets a really high reward even though nothing good happened, or gets no reward even though something bad happened. Pessimistic errors are no big deal. The agent will randomly avoid behaviors that get penalized, but as long as those behaviors are reasonably rare (and aren’t the only way to get a good outcome) then that’s not too costly. But optimistic errors are catastrophic. The agent will systematically seek out the behaviors that receive the high reward, and will use loopholes to avoid penalties when something actually bad happens. So even if these errors are extremely rare initially, they can totally mess up my agent. When we try to
c58602e6-ebfa-4b5c-94f5-f181bceb37b1
trentmkelly/LessWrong-43k
LessWrong
Book Review: Complications Atul Gawande's Complications:  A Surgeon's Notes on an Imperfect Science is a mixed bag for rationalists. Written as a series of essays organized into three sections entitled Fallibility, Mystery, and Uncertainty, the book as a whole is of questionable value, but the sections need to be considered individually for their worth to be accurately recognized. Fallibility examines a number of issues involving error and how it can be avoided in medicine, including but not limited to:  how computers are better than humans at making diagnoses, the factors that are known to negatively influence human reasoning when applied to diagnosis and treatment, how and why doctors make errors, what happens when doctors "go bad", and ways that the system fails and succeeds at curbing irresponsible physicians.  The section dealing with the field of anesthesiology is especially interesting, as Gawande recounts the changes that made it possible for the death rate due to general anesthesia to be reduced to a twentieth of its previous value midway through this century. What is distressing is that Mystery and Uncertainty do not represent good examples of the principles that Gawande demonstrates he understands in Fallibility, supporting his assertions with vivid anecdotes instead of reasoning from principles.  One story is especially obnoxious, as Gawande recounts how a "gut feeling" about a case of infection causes him to persuade a patient to undergo an extensive biopsy which reveals she has life-threatening necrotizing fasciitis, caused by the popularly-known "flesh-eating bacteria".  He explicitly acknowledges that he had no logical reason for regarding the case as unusual, that all appearances indicated a routine cellulitis which was approximately three thousand times more likely than necrotizing fasciitis was - and that he had recently taken care of a patient who had died agonizingly from a massive infection of the flesh-eating bacteria.  And yet the story is presented as an example of
73e10a15-59b9-4a81-96c5-c09caee7e5e0
trentmkelly/LessWrong-43k
LessWrong
Chains, Bottlenecks and Optimization Consider an idea consisting of a group of strongly connected sub-ideas. If any sub-idea is an error (doesn’t work), then the whole idea is an error (doesn’t work). We can metaphorically model this as a metal chain made of links. How strong is a chain? How hard can you pull on it before it breaks? It’s as strong as its weakest link. If you measure the strength of every link in the chain, and try to combine them into an overall strength score for the chain, you will get a bad answer. The appropriate weight to give the non-weakest links, in your analysis of chain strength, is ~zero. There are special cases. Maybe the links are all equally strong to high precision. But that’s unusual. Variance (statistical fluctuations) is usual. Perhaps there is a bell curve of link strengths. Having two links approximately tied for weakest is more realistic, though still uncommon. (A group of linked ideas may not be a chain (linear) because of branching (tree structure). But that doesn’t matter to my point. Stress the non-linear system of chain links and something will break first.) The weakest link of the chain is the bottleneck or constraint. The other links have excess capacity – more strength than they need to stay unbroken when the chain gets pulled on hard enough to break the weakest link. Optimization of non-bottlenecks is ~wasted effort. In other words, if you pick random chain links, and then you reinforce them, it (probably) doesn’t make the chain stronger. Reinforcing non-weakest links is misallocating effort. So how good is an idea made of sub-ideas? It’s as strong as its weakest link (sub-idea). Most ideas have excess capacity. So it’d be a mistake to measure how good each sub-idea is, including more points for excess capacity, and then combine all the scores into an overall goodness score. Excess capacity is a general feature and requirement of stable systems. Either most components have excess capacity or the system is unstable. Why? Because of variance. If lots o
8a0a45f0-8270-4343-8096-d344ac17a136
trentmkelly/LessWrong-43k
LessWrong
December 2010 Southern California Meetup A meetup in Southern California will occur on Sunday December 19, 2010.  The meetup will start around 3:30PM and run for at least 2 hours and possibly 4 or 5.  Anna Salamon and Yvain are very likely to be in attendance, as well as people from the last few meetups who may have projects to talk about, if are people are interested.  Bring guests if you like.  The location of the meetup will be... ...at the IHOP across from John Wayne Airport, about a mile from UC Irvine. For those interested in carpooling: (1) driver needed in Santa Barbara (please comment if you can make it!), (2) driver available in San Diego, (3) driver available in Lake Forest, (4) driver available in Torrance., (5) driver available in Huntington Beach. The format for past meetups has varied based on the number of attendees and their interests, at various points we have either tried or considered: paranoid debating, small group "dinner party conversations", structured rationality exercises, large discussions with people sharing personal experiences with sleep and "nutraceutical" interventions for intelligence augmentation, and specialized subprojects to develop tools for quantitatively estimating the value of things like cryonics or existential risk interventions. Apologies for the short notice.  I was trying to call and email around to see what people's winter holiday schedules were like and wasn't sure if this would come together.  Based on one-on-one conversations I think we'll have pretty good turnout and interesting stuff to talk about!
7f58ace8-09ba-4814-ab7e-834d425c3385
trentmkelly/LessWrong-43k
LessWrong
MITx class(es): [LINK] and discussion MIT is offering online classes under the name MITx, the first of which is MITx 6.002: Circuits and Electronics.  I think the idea is similar in philosophy to that of the Stanford AI class, which I saw interested LessWrong-ers, though the subject matter (intro EE class) may appeal less to the LessWrong programming/AI/logic demographic.  The class itself runs from March 5 - June 8 of this year. Link:  MITx 6.002 I've registered for the class already; I've always felt as though circuit theory was a weak point for me compared to EE friends, as physics classes gave me the bare bones by comparison.  Feel free to bug me with questions like, "So how much progress have you made recently with the MITx class?" (I'll take any free sources of motivation that I can get.)  Also, if any LessWrong-ers are interested, we could start a study group when the class actually begins to keep ourselves motivated and on top of the material.  
2fea50c5-f253-49f3-9717-9c89e9a64a5d
trentmkelly/LessWrong-43k
LessWrong
Training Regime Day 17: Deflinching and Lines of Retreat Introduction Declaring commitment to Crocker's rules means that other people can optimize their messages to you for information, disregarding our cultures normal rules against potentially causing offense. The reason why one might commit to said rules is to maximize growth - in order to change yourself, one must first acknowledge that you're currently insufficient, which is made easier if someone is allowed to tell you so. However, being able to abstractly reason that accepting blunt criticism is good does not allow you to accept criticism. When someone directly critiques you, it hurts. For most people, blunt criticism is net negative because the emotional state it induces is not conducive to change. For some people, normal criticism is also net negative for the same reasons. Deflinching is a technique that attempts to create space to listen to criticism and Lines of Retreat is a technique to create the mindset to accept it. Deflinching At its core, Deflinching is just a TAP. It goes like this: 1. Trigger: notice that you're triggered/offended/bristling 2. Action: make space The heart of the technique lies in "making space". This is akin to a micro-focusing move, where the goal is to determine what part of you got triggered/offended/bristly. In general, there are good reasons for you to get offended. If someone I respect tells me I commited a social faux pas, the internal wince I experience is real. In my case, it's because I don't like being told that I hurt someone because I don't like hurting people. The temptation is to respond "no, I didn't do that" because I obviously didn't intend to hurt them. If there's something your instinctive reaction is trying to defend, you want to surface that thing. You don't want to squish it down and let it eek out the sides, but you don't want to let it dominate either. The generalized temptation is to direct the wince-type-feeling back at the person who gave you the criticism. This is bad for a few reasons, but one of t
91f56e5f-1aa3-466a-8554-a3de2acb632b
trentmkelly/LessWrong-43k
LessWrong
My January alignment theory Nanowrimo Update: list of posts so far. "(s)" denotes shortform. post 1, post 2, post 3, post 4 (s), post 5 (s), post 6 (s), post 7, post 8, post 9 (s), post 10, post 11, post 12, post 13 (s), post 14, post 15, post 16 (with Lauren Greenspan), post 17, post 18, post 19 (s), post 20, post 21. ***** This is a quick announcement/commitment post: I've been working at the PIBBSS Horizon Scanning team (with Lauren Greenspan and Lucas Teixeira), where we have been working on reviewing some "basic-science-flavored" alignment and interpretability research and doing talent scouting (see this intro doc we wrote so far, which we split off from an unfinished larger review). I have also been working on my own research. Aside from active projects, I've accumulated a bit of a backlog of technical writeups and shortforms in draft or "slack discussion"-level form, with various levels of publishability.  This January, I'm planning to edit and publish some of these drafts as posts and shortforms on LW/the alignment forum. To keep myself accountable, I'm committing to publish at least 3 posts per week.  I'm planning to post about (a subset? superset? overlapping set? of) the following themes: 1. Opinionated takes on a few research directions (I have drafts on polytopes, mode connectivity, and takes on proof vs. other kinds of "principled formalism without proofs"). 2. Notes on grammars and more generally, how simpler rules and formal structures can combine into larger ones. This overlaps with a project I'm working on with collaborators, involving a notion of "analogistic circuits": mechanisms that learn to generalize a complex rule "by analogy", without ever encoding the structure itself. 3. Joint with Lauren Greenspan and Lucas Teixeira: some additional bits of our review, with a focus on interepretability (and ways to think about assumptions and experiments). 4. Joint with Lauren: some distillation and discussion of QFT methods in interpretability. 5. Bayesian vs. SGD learning from
76ceacfa-4ebc-452e-b7c2-801939981e15
trentmkelly/LessWrong-43k
LessWrong
Towards an Intentional Research Agenda This post is motivated by research intuitions that better formalisms in consciousness research contribute to agent foundations in more ways than just the value loading problem. Epistemic status: speculative. David Marr's levels of analysis is the idea that any analysis of a system involves analyzing it at multiple, distinct levels of abstraction. These levels are the computational, which describes what it is the system is trying to do, the algorithmic, which describes which algorithms the system instantiates in order to accomplish that goal, and the implementation level, describing the hardware or substrate on which the system is running. Each level underdetermines the other levels. You can choose lots of different algorithms for a given goal, and algorithms don't restrict which goals can use them. A concrete example Marr uses, is that you'd have a very hard time figuring out what a feather was for if you'd never seen a bird flying, and if you only saw a bird flying you might have a very difficult time coming up with something like the design of a feather. Imagine a world that had recently invented computers. The early examples are very primitive, but people can extrapolate and see that these things will be very powerful, likely transformative to society. They're pretty concerned about the potential for these changes to be harmful, maybe even catastrophic. Although people have done a bit of theoretical work on algorithms, it isn't all that sophisticated. But since the stakes are high, they try their best to start figuring out what it would mean for there to be such a thing as harmful algorithms, or how to bound general use algorithms such that they can only be used for certain things. They even make some good progress, coming up with the concept of ASICs so that they can maybe hard code the good algorithms and make it impossible to run the bad. They're still concerned that a sufficiently clever or sufficiently incentivized agent could use ASICs for bad ends someh
68b60de0-6128-4f58-844b-dbed7bafb2d9
trentmkelly/LessWrong-43k
LessWrong
A Principled Cartoon Guide to NVC TL;DR: Making claims or demands about/into other people's internal states, rather than about your state or observable external states, predictably ties people in knots—instead: only make claims about your own experience or observables. This lets the other control the copy of them that's in the shared context.[1] Non-Violent Communication rapidly grew into a world-spanning movement due to its remarkable ability to avert and dissolve conflict. The NVC book has a whole bunch of examples of communication procedures and practical tips. Here I'll try and convey the core insights more rapidly by focusing on the underlying principles I've extracted.[2] Background models People have self-models. People have models of other people. Conversations form shared context, which syncs with both of those. Some ways of conversing work better than others. People's self-models are entangled with the copies of themselves inside the heads of the people they interact with, synced by conversational context. It's painful to be claimed to be something you don't want to be.[3] Dissonance between your self-model unquoted claims about you is a type of prediction error that hits particularly hard.  Common collision in conversational context (Cartoons start here[4]) Alice is unhappy about an external state W, and thinks W is because Bob's internal state is Z. She shares her negative emotion along with her current prediction of the reason for W: Bob's state being Z.[5]Bob's read of his own internal state is not Z and this collides with Alice's statement, causing dissonance.[6]Naturally, Bob pushes back on the part of the exchange most salient to him: the adding of a false claim about something he has better access to mixed with negative incoming emotion. Things can get messy from here! Bob and Alice are having a tug of war over a non-load-bearing part of the map–Bob's state–which is distracting them both from resolving the grievance. This situation is often resolvable by sharing context if t
974be91d-f6a7-424f-99ac-d2b5704cb4a4
trentmkelly/LessWrong-43k
LessWrong
Are Cognitive Load and Willpower drawn from the same pool? I was recently reading a blog here, that referenced a paper done in 1999 by Baba Shiv and Alex Fedorikhin (Heart and Mind in Conflict: The Interplay of Affect and Cognition in Consumer Decision Making). In it, volunteers are asked to memorise short or long numbers and then asked to chose a snack as a reward. The snack is either fruit or cake. The actual paper seems to go into a lot of details that are irrelevent to the blog post, but doesn't actually seem to contradict anything the blog post says. The result seems to be that those with a higher cognitive load were far more likely to chose the cake than those who weren't. I was wondering if anyone has read any further on this line of research? The actual experiment seems to imply that the connection between cognitive load and willpower may be an acute effect - possibly not lasting very long. The choice of snack is made seconds after memorising a number and while actively trying to keep the number in memory for short term recall a few minutes later. There doesn't seem to be anything about the effect on willpower minutes or hours later. Does anyone know if the effect lasts longer than a few seconds? If so, I would be interested in whether this affect has been incorporated into any dieting strategies.
da7338b1-8f65-4d7f-8036-9bad899646d4
trentmkelly/LessWrong-43k
LessWrong
Harry Potter and the Methods of Rationality Podcast Have you ever thought “I’d love to read Harry Potter and the Methods of Rationality, but I just don’t have the spare time. I wish it was available in audio format.” Fret no more! I present to you the HPMoR Podcast! First chapter out now, and another one added every Wednesday. http://itunes.apple.com/us/podcast/harry-potter-methods-rationality/id431784580
e601729d-95cf-46a3-88ad-a46ce1fe6a8f
trentmkelly/LessWrong-43k
LessWrong
What have we learned from meetups? We've been running regular, well-attended Less Wrong meetups in London for a few years now, (and irregular, badly-attended ones for even longer than that). In this time, I'd like to think we've learned a few things about having good conversations, but there are probably plenty of areas where we could make gains. Given the number of Less Wrong meetups around the world, it's worth attempting some sort of meetup cross-pollination. It's possible that we've all been solving each other's problems. It's also good to have a central location to make observations and queries about topics of interest, and it's likely people have such observations and queries on this topic. So, what have you learned from attending or running Less Wrong meetups? Here are a few questions to get the ball rolling:   * What do you suppose are the dominant positive outcomes of your meetups? * What problems do you encounter with discussions involving [x] people? How have you attempted to remedy them? * Do you have any systems or procedures in place for making sure people are having the sorts of conversations they want to have? * Have you developed or consciously adopted any non-mainstream social norms, taboos or rituals? How are those working out? * How do Less Wrong meetups differ from other similar gatherings you've been involved with? Are there any special needs idiosyncratic to this demographic? * Are there any activities that you've found work particularly well or particularly poorly for meetups? Do you have examples of runaway successes or spectacular failures? * Are there any activities you'd like to try, but haven't managed to pull off yet? What's stopping you?   If you have other specific questions you'd like answered, you're encouraged to ask them in comments. Any other observations, anecdotes or suggestions on this general topic are also welcome and encouraged.
cd3d9923-f4c6-4073-b90b-7ebe18e0d1e9
trentmkelly/LessWrong-43k
LessWrong
Have an idea for an interactive learning module? Consider applying for a ClearerThinking.org micro grant! If you have an idea for an interactive learning module (on topics such as rationality, psychology, self-improvement, statistics, probability, effective altruism, philosophy economics, math, science, behavior change, etc.) that you think would be valuable to a wide audience, we hope you'll consider applying to our second annual ClearerThinking.org micro grants program! Our program last year was a huge success, resulting in 15 new interactive modules, so we decided to do it again. Applying to stage 1 is very fast and simple. Using the step-by-step process we provide, and feedback from our team plus a study we run on your work, winners will end up producing full-blown interactive learning modules like those on ClearerThinking.org. The top programs will also be featured on our website and sent out to our 130,000 email list subscribers! Applications are due by May 17, 2021. All of the details of the program can be found here. Some benefits you may get from applying to the program: * Helping other people improve their lives by creating a fun, educational tool that can be used by anyone, anywhere in the world (who speaks English and has internet access). * Gaining experience developing an idea into a complete, high-quality, interactive learning module with help from experienced professionals. * Receiving a monetary prize for each stage of the program you advance to (as long as you’re successful in the first stage). * The possibility of having your work published on ClearerThinking.org, and shared with our mailing list of 130,000+ people. * An easy application process - all you have to do for stage 1 is describe your idea for a beneficial learning module in around 150 to 400 words. Go here to learn more and apply.
45b505a3-4e96-4fa4-89bd-e40c9a71a937
trentmkelly/LessWrong-43k
LessWrong
SERI ML Alignment Theory Scholars Program 2022 The Stanford Existential Risks Initiative (SERI) recently opened applications for the second iteration of the ML Alignment Theory Scholars (MATS) Program, which aims to help aspiring alignment researchers enter the field by pairing them with established research mentors and fostering an academic community in Berkeley, California over the summer. Current mentors include Alex Gray, Beth Barnes, Evan Hubinger, John Wentworth, Leo Gao and Stuart Armstrong. Applications close on May 15 and include a written response to mentor-specific selection questions, viewable on our website. Who is this program for? Our ideal applicant has: * an understanding of the AI alignment research landscape equivalent to having chttps://www.agisafetyfundamentals.com/ai-alignment-curriculumompleted the AGI Safety Fundamentals course; * previous experience with technical research (e.g. ML, CS, maths, physics, neuroscience, etc.); * strong motivation to pursue a career in AI alignment research. For the first stage of the program, we asked each alignment researcher to provide a set of questions that are sufficient to select candidates they would be happy to mentor. Applicants can apply for multiple mentors, but will have to complete each mentor’s selection questions. What will this program involve? Over four weeks, the participants will develop an understanding of a research agenda at the forefront of AI alignment through online readings and cohort discussions, averaging 10 h/week from Jun 6 to Jul 1. After this initial upskilling period, the scholars will be paired with an established AI alignment researcher for a two-week “research sprint” to test fit from Jul 4 to Jul 15. Assuming all goes well, scholars will be accepted into an eight-week intensive research program in Berkeley, California over the US summer break (Jul 25 to Sep 16). Participants will obtain a $6,000 grant for completing the training and research sprint and $16,000 at the conclusion of the program. Furthermore, all e
afa6c0d9-963a-4acb-aeb2-4cb69ba7fb93
trentmkelly/LessWrong-43k
LessWrong
Communication Prior as Alignment Strategy Alice has one of three objects: * A red triangle * A blue square * A red circle She wants Bob to learn which object she has. However, Alice may only send one of three messages: * “My object is round” * “My object is red” * “This is not a pipe” The rules of the game (i.e. the available messages) are common knowledge before the game starts. What message should Alice send for each object, and what object should Bob deduce from each message? Let’s think it through from Bob’s standpoint. A clever human might reason like this: * “My object is round” implies it’s the red circle, because that’s the only round object. * “My object is red” implies it’s the red triangle, because only the triangle and circle are red, and Alice could have perfectly conveyed the information with “My object is round” if it were the circle. * “This is not a pipe” implies it’s the blue square, because Alice could have perfectly conveyed the information with one of the other two messages otherwise. If you’ve played the game CodeNames, then this sort of reasoning might look familiar: "well, 'blue' seems like a good hint for both sky+sapphire and sky+water, but if it were sky+water they would have said 'weather' or something like that instead, so it's probably sky+sapphire...". Intuitively, this sort of reasoning follows from a communication prior - a prior that someone is choosing their words in order to communicate. In everyday life, this comes up in e.g. the difference between connotation and denotation: when someone uses a connotation-heavy word, the fact that they used that word rather than some more neutral synonym is itself important information. More generally: the implication of words is not the same as their literal content. A communication prior contains a model of how-and-why-the-words-were-chosen, so we can update on the words to figure out their implications, not just their literal meanings. Communication priors suggest an approach to certain problems in AI alignment. In
a2109817-207c-44d9-8c7c-7865e7b3086c
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Reshaping the AI Industry The wider AI research community is an almost-optimal engine of apocalypse. The primary metric of a paper's success is how much it improves capabilities along concrete metrics, publish-or-perish dynamics supercharge that, the safety side of things is neglected to the tune of [1:49 rate of safety to other research](https://www.lesswrong.com/posts/AtfQFj8umeyBBkkxa/a-bird-s-eye-view-of-the-ml-field-pragmatic-ai-safety-2#Nearly_all_ML_research_is_unrelated_to_safety), and most results are made public so as to give everyone else in the world a fair shot at ending it too. It doesn't have to be this way. The overwhelming majority of the people involved do not actually want to end the world. There must exist an equilibrium in which their intentions match their actions. Even fractionally shifting the status quo towards that equilibrium would have massive pay-offs, as far as timelines are concerned. Fully overturning it may well constitute a sufficient condition for humanity's survival. Yet I've seen precious little work done in this area, compared to the technical questions of AI alignment. It [seems](https://www.lesswrong.com/s/FaEBwhhe3otzYKGQt/p/bffA9WC9nEJhtagQi) to be [picking up](https://www.lesswrong.com/posts/kipMvuaK3NALvFHc9/what-an-actually-pessimistic-containment-strategy-looks-like) in [recent months](https://www.lesswrong.com/posts/wrkEnGrTTrM2mnmGa/retracted-it-s-time-for-ea-leadership-to-pull-the-short), though — and I'm happy to contribute. This post is an attempt at a comprehensive high-level overview of the tactical and strategic options available to us. --- **1. Rationale** ---------------- Why is it important? Why is it *crucial*? **First.** We, uh, need to make sure that if we figure alignment out *people actually implement it*. Like, imagine that tomorrow someone comes up with a clever hack that robustly solves the alignment problem... but it increases the compute necessary to train any given ML model by 10%, or it's a bit tricky to implement, or something. Does the wider AI community universally adopt that solution? Or do they ignore it? Or do the industry leaders, after we extensively campaign, pinky-swear to use it the moment they start training models they feel might actually pose a threat, then predictably and fatally misjudge that? In other words: When the time comes, we'll need to convince people that safety is important enough to fuss around a bit for its sake. But if we can't convince them to do that *now*, what would change *then*? I suppose having a concrete solution, instead of vague prophecies of doom, would give us more credibility. But... would it, really? And what if we won't have a concrete solution even then, just a bunch of weird heuristics that may nonetheless measurably improve our odds? The latter seems reasonably likely, too. As [this excellent post](https://www.lesswrong.com/posts/AtfQFj8umeyBBkkxa/a-bird-s-eye-view-of-the-ml-field-pragmatic-ai-safety-2#Transfer_between_paradigms_is_hard) points out, most of the contemporary deep-learning progress comes from "messy bottom-up atheoretical empirical tinkering". If AGI comes from DL, it's plausible that, even if we arrive at the solution to alignment from mathematical foundations, the actual *implementation* will take the form of messy hacks. Ones that will probably need to be fine-tuned for any given model architecture. 
And given the [no-fire-alarm](https://intelligence.org/2017/10/13/fire-alarm/) principle[[1]](#fnhi5u3uygm4t), to be safe, we'll need to ensure that any sufficiently big model is only run with these hacks built-in. If any given AI researcher is still not taking alignment seriously by then, how will we make them bother with all of that every time they run an experiment? How will we ensure they don't half-ass it even once? **Second.** If that figure I quoted, the 49:1 ratio, is even remotely correct, there's plenty of room to upscale our research efforts. Imagine if every researcher started spending 2% more of their time thinking about alignment. That'd double the researcher-hours spent on the problem! Which doesn't directly translate into 2x *progress*, I'll grant. Given the field's pre-paradigmic status, the returns to scale might be relatively small... but by no means *negligible*. Even if we don't necessarily have research directions clearly outlined, having much more people stumbling around in the dark still increases the chances of bumping into something useful. Another argument I've seen is that upscaling may increase the capabilities gain too much. I don't find this convincing: * I'm doubtful an hour spent on alignment research can reasonably be expected to speed up capabilities more than an hour spent directly trying to increase capabilities. I don't rule that out, though. * That isn't an argument against upscaling, it's an argument against *incompetent* upscaling. If it's true, that means we'll need to introduce strict information-security measures. It'd be a (significant) complication, but not a "no". (The progress on mechanistic interpretability, in particular, can *absolutely* be usefully upscaled. One of the main bottlenecks there is understanding individual circuits, which is a (relatively) low-skill task that's 1) mainly bottlenecked by researcher-hours, not top-researcher-hours, 2) can be easily parallelized. It's probably not the only research direction like this, too — we just need to look for them.) **Third**, slowing progress down. This one is straightforward enough, I suppose. * Every researcher-hour or dollar we can convince to spend on alignment research is an hour or dollar not spent on capabilities research. * Every day we buy is a day we might solve alignment. * The effects could be fairly outsized. The top researchers are [orders of magnitude more productive than the median ones](https://www.lesswrong.com/posts/AtfQFj8umeyBBkkxa/a-bird-s-eye-view-of-the-ml-field-pragmatic-ai-safety-2#Research_ability_and_impact_is_long_tailed), most of the progress is funded by a few multi-million corporations, and most of the compute in the world is controlled by a few private and state actors. We don't have to convince half the world to make the timelines twice as long; just a few people. * Even if we can't buy enough time to solve alignment, every day we buy is a day the world doesn't end. If life is worth living, that has to mean something. **2. Existing Work** -------------------- * [Pragmatic AI Safety](https://www.lesswrong.com/s/FaEBwhhe3otzYKGQt) seems to be aimed at the same thing I'm arguing for, and their work had a significant impact on this post. + I disagree with some of their mission parameters, however. See 3.4. * Chris Olah suggests some interesting interventions [here](https://www.lesswrong.com/posts/X2i9dQQK3gETCyqh2/chris-olah-s-views-on-agi-safety). 
+ A hypothesis that achieving a fractional mechanistic understanding of neural networks would create a snowball effect. The status quo is that it's okay to have models be black boxes, but opening them up a bit would put into stark relief what we *don't* know about them, and that may make researchers at large more worried/interested in understanding them. + A strategy where we recruit *neuroscientists* to work on mechanistic interpretability. Chris suggests it won't require much re-training. + The Distill project, which attempts (attempted) to soften the publish-or-perish dynamic and make interpretability projects appealing to entrants in the field. * [Logan Riggs suggests](https://www.lesswrong.com/posts/vaHgLF2BCEdK3KxQd/convincing-all-capability-researchers) to pay the top researchers money to work on alignment, with the idea that either they'll solve it (unlikely) or appreciate the difficulty of the problem and start taking it more seriously. * [Here](https://www.lesswrong.com/posts/CpvyhFy9WvCNsifkY/discussion-with-eliezer-yudkowsky-on-agi-interventions) Eliezer makes some off-hand suggestions; e. g., that turning OpenAI into ClosedAI and generally making research more closed-doors would be useful (if insufficient). * [lc's call for more serious activism](https://www.lesswrong.com/posts/kipMvuaK3NALvFHc9/what-an-actually-pessimistic-containment-strategy-looks-like), with which I fully agree. * [Not Relevant's post](https://www.lesswrong.com/posts/wrkEnGrTTrM2mnmGa/retracted-it-s-time-for-ea-leadership-to-pull-the-short). Though it's retracted, I appreciate it and the discussion it generated. * [AI Safety Arguments Competition](https://www.lesswrong.com/posts/3eP8D5Sxih3NhPE6F/usd20k-in-prizes-ai-safety-arguments-competition). I'm sure I've missed a lot of things, but this seems like a good start. **3. Types of Interventions** ----------------------------- I would broadly categorize them into the following: 1. Direct appeals to "insiders": researchers and the leadership of AI labs. Difficult, but worthwhile. Most of them already know about AI risk and have dismissed it for one reason or another, but convincing the industry leaders would be highly valuable. 2. *Indirect* appeals to insiders. If we can't convince them that their work is going to end the world, perhaps we can attract them away from it by *other* means, such as better career opportunities, more interesting research directions, or higher-ROI projects? 3. Appeals to "outsiders": general public, governments. Very *tricky*. Achieving any effect at all would be difficult, but it also needs to be done carefully, lest we make the situation worse. 4. Finding a way to progress AI Safety that fits the current tastes of AI researchers. E. g., practically-useful scalable mechanistic interpretability techniques. 5. Shifting academic incentives or changing research tastes. E. g., Chris Olah's suggestion that achieving partial interpretability would make people more interested in understanding ML models. If you feel that any of this is very ill-advised or *icky*, [GOTO 6](https://www.lesswrong.com/posts/mF8dkhZF9hAuLHXaD/reshaping-the-ai-industry#6__The_Thin_Line). ### **3.0. Effective Strategies for Changing Public Opinion** [The titular paper](https://forum.effectivealtruism.org/posts/re6FsKPgbFgZ5QeJj/effective-strategies-for-changing-public-opinion-a) is very relevant here. I'll summarize a few points. * The main two forms of intervention are *persuasion* and *framing*. 
* Persuasion is, to wit, an attempt to change someone's set of beliefs, either by introducing new ones or by changing existing ones.
* Framing is a more subtle form: an attempt to change the relative *weights* of someone's beliefs, by emphasizing different aspects of the situation, recontextualizing it.
* There's a dichotomy between the two. Persuasion is found to be very ineffective if used on someone with high domain knowledge. Framing-style arguments, on the other hand, are more effective the more the recipient knows about the topic.
* Thus, persuasion is better used on non-specialists, and it's most advantageous the first time it's used. If someone tries it and fails, they raise the recipient's domain knowledge, and the second persuasion attempt would be correspondingly hampered. Cached thoughts are also in effect.
* Framing, conversely, is better for specialists.

It should really go without saying, but epistemic hygiene stays important here. We're not "aiming to explain, not persuade" anymore; we're very much aiming to *persuade*. But that in itself is not unethical, and it's no excuse to slip into an arguments-are-soldiers employ-all-the-fallacies lie-your-head-off mindset. (If you need a "pragmatic" reason not to do that, [GOTO 5](https://www.lesswrong.com/posts/mF8dkhZF9hAuLHXaD/reshaping-the-ai-industry#5__Avoid_Thermonuclear_Ideas).)

### **3.1. Straightforward Appeals to Insiders**

As per above, we'd be fighting an uphill battle here. Researchers and managers are knowledgeable on the subject, have undoubtedly heard about AI risk already, and weren't convinced. Arguments that *recontextualize* AI risk, AI, or existential risks in general are likely to be more effective than attempts to tell them things they already know. They're more likely to *misprioritize* safety than to be totally ignorant of the need for it.

An in-depth overview of the best ways to craft an argument is beyond the scope of this post (though [this](https://www.lesswrong.com/posts/m2axtF3iiu6YxXM2n/convincing-people-of-alignment-with-street-epistemology) might be a good place to look). Two important meta-principles to keep in mind:

* Know your audience. The shorter the distance between your model of your audience and their actual beliefs, the more effective your arguments will be.
	+ [Why are](https://www.lesswrong.com/posts/AtfQFj8umeyBBkkxa/a-bird-s-eye-view-of-the-ml-field-pragmatic-ai-safety-2#How_do_researchers_feel_about_AGI_and_AI_Safety__) the AI specialists who know about AI risk still not convinced? Why is [*the specific person you're talking to*](https://www.lesswrong.com/posts/WxW6Gc6f2z3mzmqKs/debate-on-instrumental-convergence-between-lecun-russell) not convinced? What language or terminology do the people in your audience respect, and how can you translate your arguments into it?[[2]](#fngx314q5i9w) What language or terminology do they *dislike*, such that using it would downgrade your argument?
	+ For example, talking about impending AGI, or about superintelligences taking over the world, is very much a *faux pas*. It's better to [taboo](https://www.lesswrong.com/posts/WBdvyyHLdxZSAMmoz/taboo-your-words) these terms, and replace them with specialized context-appropriate definitions. Most of our arguments work perfectly well like that.
	+ (If you want to get acquainted with the viewpoint, [here](https://www.lesswrong.com/posts/LfHWhcfK92qh2nwku/transcripts-of-interviews-with-ai-researchers) you will find 11 interviews with AI researchers about AI Safety.)
* Don't treat your interlocutors as enemies, or as ideological opponents. This style of engagement is notoriously ineffective, and may end up directly counter-productive: it can burn bridges, severely compromise further persuasion attempts, or even damage our ability to collaborate at all. It's a [radioactive](https://www.lesswrong.com/posts/mF8dkhZF9hAuLHXaD/reshaping-the-ai-industry#5__Avoid_Thermonuclear_Ideas) mindset.
	+ [Double-crux](https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-mutual-understanding) instead of debating.

There are two tacks to take here: macro-level and micro-level.

**Macro.** Broad appeals to the entire industry, with the aim of changing the agreed-upon social reality, de-stigmatizing AI Safety, and so on. Concrete projects may look like [this](https://www.lesswrong.com/posts/3eP8D5Sxih3NhPE6F/usd20k-in-prizes-ai-safety-arguments-competition).

**Micro.** Targeted efforts to convince industry leaders. As [per](https://www.lesswrong.com/posts/AtfQFj8umeyBBkkxa/a-bird-s-eye-view-of-the-ml-field-pragmatic-ai-safety-2#Research_ability_and_impact_is_long_tailed), 50% of AI progress is made by far fewer than 50% of the researchers; orders of magnitude fewer, in fact. Similarly, getting the leadership of DeepMind and OpenAI fully on our side would have an outsized impact. In theory, a project here may go all the way down to "find effective arguments to convince *this specific very important person*" levels of fidelity.

I'm more optimistic about the second tack, and generally about activism that has precise, focused, short-term objectives whose success or failure can be clearly evaluated, and which we can quickly iterate on.

One of the flaws of the "micro" approach is that our victories may be [washed away by a paradigm shift](https://www.lesswrong.com/s/FaEBwhhe3otzYKGQt/p/AtfQFj8umeyBBkkxa#Transfer_between_paradigms_is_hard). Most of the top GOFAI researchers didn't keep their positions into the ML era, and the top ML researchers may not survive into the next one. I expect this isn't *much* of a problem, though. If we manage to convince the leading researchers, their views should quickly trickle down to the rest of the field, and the field's structure is likely to survive an upheaval.

### **3.2. Sideways Appeals to Insiders**

There's one dimension along which we *can* broaden our standards for persuasion.

When trying to influence people — either individually or *en masse* — we usually argue that addressing existential risks is necessary because, *duh*, the looming end of humanity. The importance of that work should be self-evident to any moral person. They'd agree with us if only we could make them recognize the existential threat for the existential threat it is. No, it isn't just sci-fi! Yes, working out these weird math problems really can save the world!

But it's not the *only* reason someone might decide to work on AI Safety. People's career choices are motivated by all kinds of things:

* [They want to work on cool stuff](https://forum.effectivealtruism.org/posts/d7fJLQz2QaDNbbWxJ/what-are-the-coolest-topics-in-ai-safety-to-a-hopelessly).
* They want a flexible work agenda.
* They want to be paid well.
* They want to advance their career.
* They want to work in a culture they like.
* They want to be put into contact with influential people.
* They want to be respected, or do something that's viewed as respectable.

And so on. This approach is relatively neglected, I suspect, because it's *steeped* in ulterior motives.
There's a very specific reason we'd be making these arguments, and it's not because we want our interlocutors to have fun/get rich/etc. But there's no reason we can't be *open* about these motives, which would take all underhandedness out of it. And there are many legitimate reasons to prefer AI Safety over "mainstream" capabilities research:

* Our field is less saturated. There's more low-hanging fruit, more chances to make a major contribution.
* We're not lacking in funding: they may well get higher pay here.
* Significant progress can be made without access to vast amounts of compute.
* Some of the AI Safety subfields are very unlike mainstream ML (more focus on math, etc.), so some people might find them a better fit for their skills/tastes.
* We do seem to have a pretty good culture.

Overall, I don't expect this approach to work on the top-tier researchers, for obvious reasons. But it might work to attract the people *entering* the field, potentially *en masse*. It might also work as a good supplement to straightforward appeals: when we're trying to convince someone not to work on X, it's good to have a ready offer for what to do *instead.* Even better if that offer is more lucrative than their current job.

Another thing to keep in mind: [trivial inconveniences](https://www.lesswrong.com/posts/reitXJgJXFzKpdKyd/beware-trivial-inconveniences). Making entering or transitioning to AI Safety 10% easier might have disproportionate effects, like doubling the number of entrants.

Concrete projects in this area may involve creating organizations working on reducing AI risk that make competitive job offers, providing funding or career assistance to ML specialists, [explaining how to start](https://www.lesswrong.com/posts/P3Yt66Wh5g7SbkKuT/how-to-get-into-independent-research-on-alignment-agency), [advertising personally appealing features of working on AI risk](https://www.lesswrong.com/posts/BseaxjsiDPKvGtDrm/we-choose-to-align-ai), [encouraging people to have fun](https://forum.effectivealtruism.org/posts/kWmHNgFbWXGXa9RQg/new-community-builders-don-t-have-enough-fun), and lowering the barrier to entry by [distilling research](https://www.lesswrong.com/posts/zo9zKcz47JxDErFzQ/call-for-distillers#comments). On that note, Chris Olah's Distill project is also a good example of a "macro-level" intervention of this type, though it's [on hiatus/potentially failed](https://distill.pub/2021/distill-hiatus/).

### **3.3. Appeals to Outsiders**

Any effective work along this dimension requires answering an exciting question: how do you put out a flame using a flamethrower?

Perhaps that's a bit harsh. Perhaps even *counter*-*productively* harsh, given my previous calls for treating audiences with respect. But let's not kid ourselves: we've seen how the world handled COVID-19. An initiative that pushes for X might convince people or governments to do anti-X instead. If we convince them to do X after all, they might do extremely ineffective things that accomplish nothing, or even somehow do things that actually make anti-X happen. And conversely, activism completely unrelated to X might make it happen!

Good news, though: COVID-19 has *shown us* just how badly things are broken. Keeping [the Simulacra Levels](https://www.lesswrong.com/posts/qDmnyEMtJkE9Wrpau/simulacra-levels-and-their-interactions) and the [autopsies of the failures](https://www.lesswrong.com/tag/covid-19) in mind, it might be possible to find interventions that have the effects we want.
That's explicitly what we'd be doing, though: deciding what effect we want to cause, then searching for an action that would cause it once propagated through the broken pathways of our civilization.

For that reason, I'm not making the distinction between "straightforward" and "sideways" appeals here: surface-level efforts to achieve something aren't strongly correlated with that thing happening, *even given their surface-level success*. All appeals are sideways appeals. Having a good model of realpolitik is a necessity here.

The general principles of "know your audience" and "maintain epistemic hygiene" still apply, though. The inference gap is much larger, but that has its advantages: since outsiders' domain knowledge is low, direct persuasion would be more effective, on average.

Useful *consequences* in this area may include:

* Passing laws that introduce new regulations over AI development.
* Passing laws that mandate that major AI labs spend a fraction of their budget on AI Safety.
* Passing laws that increase oversight of the use of large quantities of compute.
* Pushing back on research transparency in AI. Creating the sociopolitical conditions under which long periods of closed-doors research are possible.
* Convincing major cloud computing providers to only provide vast amounts of compute if the request has been approved by an AI Safety committee.
* Restricting the supply of GPUs and other specialized computing modules.
* Raising societal levels of concern over massive AI models.
* Convincing public figures (celebrities, billionaires) to apply pressure or raise concerns regarding AI Safety.
* Putting (diffuse) political or social pressure on major AI labs to prioritize safety.
* [Causing major economic actors to divest from the entire AI industry, or from specific labs](https://www.lesswrong.com/posts/kipMvuaK3NALvFHc9/what-an-actually-pessimistic-containment-strategy-looks-like?commentId=LDDZT3AyeA4j9Gmcc).

Again, causing these consequences is not as simple as pushing for them on the object level. Lobbying for laws that regulate AI is likely to lead to poorly-targeted bans with lots of loopholes that just burn our political capital; mandating AI Safety oversight is useless if the safety committees consist of yes-men; trying to rile up the public might well see their ire redirected our way.

Some inspirations here might be what happened to nuclear energy or human cloning, or how the woke movement has managed to infiltrate corporate/academic culture. Anti-corporate movements might be good allies of convenience here in general. An anti-inspiration, what *not* to do, is what's happened to cryonics.

**Warning:** This entire space of interventions has elevated levels of [background radiation](https://www.lesswrong.com/posts/mF8dkhZF9hAuLHXaD/reshaping-the-ai-industry#5__Avoid_Thermonuclear_Ideas). Most interventions of this type are deceptively ineffective, and if you're aiming for impact first and foremost, it'd be very easy to slip into bad epistemic habits or unethical practices. Moreover, it's *necessarily* antagonistic toward AI researchers and major AI labs: any success here would worsen the public or legal landscape for them, and they'd be less likely to listen to straightforward appeals from us.

I've not completely despaired of this approach; the payoffs are significant, and I'm sure there are *some* interventions that are at once effective and reliable and ethical, and that merely scorch our bridges rather than burn them.
Furthermore, if straightforward appeals and other cooperative ideas won't work, pivoting to this is an obvious Plan B.

As far as *directly useful* consequences are concerned, I would emphasize well-targeted interventions with easily evaluated victory/failure conditions, even more so than in 3.1. However, the best plays here might be *tactically useful* interventions, aimed at what Pragmatic AI Safety calls [diffuse factors](https://www.lesswrong.com/s/FaEBwhhe3otzYKGQt/p/n767Q8HqbrteaPA25#Improving_Contributing_Factors). Such interventions don't directly decrease AI risk, but they create favourable conditions for *other* interventions. E. g., causing the public to be vaguely concerned about AI won't directly help, but a world in which the public is concerned is a world in which we're better positioned to influence the AI industry in other ways. (I'll come back to that in [Part 4](https://www.lesswrong.com/posts/mF8dkhZF9hAuLHXaD/reshaping-the-ai-industry#4__What_You_Can_Do).)

### **3.4. Joining the Winning Side**

In some sense, the easiest way to accomplish our goal is not to try to change the AI industry's incentive structures, but to *ride them*. The industry as a whole is agnostic with regards to alignment. It cares about:

* Easily measurable success metrics.
* Clear, factorizable research directions.
* Profit, funding.

Current alignment research directions are none of these things. Progress is difficult to measure, the problem doesn't cleanly break down into sub-problems you could work on in isolation, and the results don't translate to e. g. more powerful and marketable ML models. Reducing the alignment problem to this sort of "digestible" form is non-trivial — that's the whole problem with our [lack of an established paradigm](https://www.lesswrong.com/posts/P3Yt66Wh5g7SbkKuT/how-to-get-into-independent-research-on-alignment-agency#Preparadigmicity).

But. I think there are certain potential avenues of alignment research that are relatively amenable to being transformed into a form the AI industry would find digestible, while also being pretty effective and practical ways to make progress on alignment.

And herein lies my disagreement with Pragmatic AI Safety. They suggest that [alignment research should aim for zero capabilities advancement](https://www.lesswrong.com/s/FaEBwhhe3otzYKGQt/p/bffA9WC9nEJhtagQi#Minimal_Capabilities_Externalities). I think the metric to keep in mind here, instead, is *expected return on capabilities per hour of research*.

* As far as influencing a current AI researcher is concerned, if we can make them pursue a research direction that somewhat progresses alignment while progressing capabilities *no faster* than what they'd be doing otherwise, our impact is positive.
	+ Even if the research direction doesn't progress alignment very much on a per-hour basis, recall how relatively tiny AI Safety is. A significant fraction of AI researchers starting to progress alignment at 5% the rate of a specialized alignment researcher might double or triple raw progress industry-wide (see the rough arithmetic sketch after this list).
* It would, in fact, be ideal to find some research direction that has *a fractionally higher* return on capabilities than whatever the ML field is doing right now, but which *also* significantly progresses alignment. It wouldn't, in practice, shift 100% of the AI industry there, but it would go a long way towards that. It pays to play along with the current industry incentives, most notably the profit motive.
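(For concreteness, here's a minimal back-of-the-envelope sketch of the arithmetic behind the claim above, and behind the "2% of everyone's time" claim back in section 1. It assumes the 49:1 capabilities-to-alignment researcher ratio I quoted earlier; the 50% adoption figure is purely an illustrative assumption, not something I'm asserting.)

```python
# Toy arithmetic only. "capabilities" and "alignment" are in relative
# researcher-units; the 49:1 ratio is the figure quoted earlier in the post.
capabilities, alignment = 49.0, 1.0

# Section 1's claim: every researcher spending 2% of their time on alignment
# roughly doubles the total alignment researcher-hours.
extra = capabilities * 0.02
print(f"2% of everyone's time: {1 + extra / alignment:.2f}x alignment effort")   # ~1.98x

# The claim above: some fraction of the field (here, an assumed 50%) progressing
# alignment at 5% the rate of a specialized alignment researcher.
adopting, rate = 0.5, 0.05
extra = capabilities * adopting * rate
print(f"Half the field at a 5% rate: {1 + extra / alignment:.2f}x raw progress")  # ~2.2x
```

Under these assumptions both claims come out to roughly a doubling; push the adopting fraction towards 1 and you get the "triple" end of the range (49 times 0.05 is about 2.5 extra researcher-units, i.e. roughly 3.5x).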
In addition, the idea of aiming for minimal capabilities externalities seems deeply unnatural to me for other reasons:

* In terms of "in-house" research, our field is, again, tiny. Any progress we make on capabilities will be a rounding error next to the rest of the AI industry's. We should aim for *maximum alignment progress*, period; it's difficult enough without any additional constraints.
* Moreover, I don't think alignment and capabilities are orthogonal. I think they're very much *positively correlated*. Alignment could be viewed [as an interface](https://www.lesswrong.com/posts/42YykiTqtGMyJAjDM/alignment-as-translation), or as being able to make a program [Do What You Mean](https://en.wikipedia.org/wiki/DWIM). An ML model that goes crazy off-distribution and kills everyone isn't just poorly aligned, it's also not very good at demonstrating good performance along the metrics the researchers actually care about ([relevant post](https://www.lesswrong.com/posts/YFckXwZg8Zcaib5ff/pop-culture-alignment-research-and-taxes#The_Impossibility_Of_A_Negative_Alignment_Tax)). [The unidentifiability problem](https://www.lesswrong.com/s/r9tYkB2a8Fp4DN8yB/p/pL56xPoniLvtMDQ4J#3_2__The_task) isn't just a mesa-optimizer problem, it's also an acknowledged generic DL problem.

All in all, I'm optimistic about the existence of research projects that are at once 1) quickly reachable, 2) likely to find traction with the current status quo of the AI industry, and 3) efficient at progressing alignment. Crucially, finding such a research direction would nearly *guarantee* that the alignment solution is implemented in any future major model.

The most obvious candidate is mechanistic interpretability, of course (and it's no coincidence that it seems to be the most popular AI Safety direction outside our circle), but I have a few other ideas that I hope to post soon.

### **3.5. Influencing the Research Culture**

All of the other approaches attempt to influence the AI industry through intermediaries: through the research projects it pursues, through the people it's implemented on, through the wider social environment it's embedded in. But perhaps there is room for a more direct intervention?

The industry is a social construct. The qualities that make a project a good one, the tastes the researchers have, the incentives they operate under — all of this is, to some extent, arbitrary. It has a ground-truth component, but the current configuration is not *uniquely determined* by the ground truth of the research subject. Rather, it's defined by *weights* that this social construct currently assigns to different features of the ground truth. The current AI industry prefers tinkering to [empiricism](https://distill.pub/2020/circuits/zoom-in/#natural-science), and capabilities to safety. How can we shift this?

There have been two proposals that I've already mentioned:

1. [Chris Olah's](https://www.lesswrong.com/posts/X2i9dQQK3gETCyqh2/chris-olah-s-views-on-agi-safety#Deliberate_design): Improve interpretability techniques enough that understanding some fraction of your model becomes normalized. Conceit:
	* *Right now, not knowing anything about how your model works internally is completely normal. If even partly understanding one’s model became normal, however, then the amount we don’t know might become glaring and concerning.
	Chris provides the following analogy to illustrate this: if the only way you’ve seen a bridge be built before is through unprincipled piling of wood, you might not realize what there is to worry about in building bigger bridges. On the other hand, once you’ve seen an example of carefully analyzing the structural properties of bridges, the absence of such an analysis would stand out.*
2. [Logan Riggs'](https://www.lesswrong.com/posts/vaHgLF2BCEdK3KxQd/convincing-all-capability-researchers): Pay N top ML researchers a fat chunk of money to work on alignment for 3 months, with the promise of a truly vast amount of money if they solve it.
	* Conceit: They won't manage to do it, but forcing them to engage with the problem would make them appreciate its difficulty, and shift research tastes field-wide towards it.
	* Issue: Defining what "working on the alignment problem" means, and ensuring they actually try to solve it, instead of "trying to solve" it but actually working on some trivialization of it, then insisting that their solution does work.

I think something like 2) is worth implementing. I'm unclear on how to evaluate 1); I'm guessing mechanistic interpretability just hasn't progressed that far yet.

If we generalize from those two, though... We want to synthesize a construct *C* with the following properties:

1. *C* is true.
2. *C* could be easily made part of the social reality of the AI industry or academic culture.
3. Common knowledge of *C* changes the AI industry's incentive gradients towards less focus on capabilities improvement and/or towards more work on AI Safety.

What is a fact such that, if the researchers knew of it, and knew that other researchers, the general public, or the grant-makers also knew of it, they would pursue alignment research? What is a fact that would change the sociopolitical landscape such that the incentives shift, even fractionally, towards the things we want?

"Sufficiently powerful non-aligned AI is going to end the world and aligning AI is difficult" as *C* evidently doesn't satisfy the second criterion. Logan's idea is to *force* it into the industry's social reality by a monetary injection. Chris Olah's idea has a more niche definition of AI risk as *C*, but either 2) is still difficult, or 3) didn't work out. Both approaches take *C* for granted, then attempt to find paths to satisfying 2).

I think there's some promise in *directly* mining the concept-space for a *C* that'd have the desired properties "out of the box". It's essentially the "sideways appeals" approach writ large. I suspect studying the history of science for cases where research tastes/standards changed would be useful here. An example of such a shift might be the replication crisis in psychology (in the broadest of terms).

I suspect this approach hasn't been exhausted, given that it's pretty non-intuitive. I suspect this idea is overly clever in a very stupid way and will not actually work. Still, if it does work, there's probably some low-hanging fruit there. And just intuitively... If you go looking for a reason, you generally find one, don't you?

**4. What You Can Do**
----------------------

The logistics graph that leads to a superintelligent AI's deployment has many bottlenecks, and controlling any *one* of them would be sufficient.
Taking over the researcher supply, or the money supply, or the compute supply, or the research project supply, or the reputation supply, or the supply of any other crucial resource I'm not thinking of, would ensure excellent conditions for a safe advanced AI to emerge.

But the path to this doesn't look like a concentrated push along the corresponding dimension. As Pragmatic AI Safety points out, [diversification is key](https://www.lesswrong.com/s/FaEBwhhe3otzYKGQt/p/n767Q8HqbrteaPA25#Diversification). There are interdependencies everywhere: success at one thing affects the probabilities of success of all other projects. Finding an appealing research direction would make it easier to attract people our way. Putting social pressure on major AI labs would make safety-adjacent research directions more appealing. Shifting research tastes in a subfield would make it easier to change people's minds. And so on.

Moreover, it's not obvious what bottleneck would be the easiest to gain control of, without the benefit of hindsight. Future events and novel discoveries may shift any part of the landscape in unpredictable ways, open or close doors for us. Improving AI Safety's future position means pursuing a strategy that is robust to such random environmental fluctuations. It means [maximizing our far-away action space](https://www.lesswrong.com/s/ehnG4mseKF6xALmQy/p/ahZQbxiPPpsTutDy2). We need to have a diversified portfolio of plans; we need to be improving our position all across the board, always looking for what new opportunities have arisen.

In theory, it would be great to have central coordination. Some organization or resource which tracks the feasibility of various interventions across the entire gameboard, and pursues/recommends those that move the gameboard into the most advantageous states while spending the least resources, ~~and also you should put me in charge~~. In practice, this sort of coordination is both difficult and fragile, with a single point of failure. We're not a single organization, either, but a diverse conglomerate of organizations, movements, groups and individuals.

But we can *approximate* central coordination. It's [often](https://www.lesswrong.com/s/FaEBwhhe3otzYKGQt/p/n767Q8HqbrteaPA25#Diversification) [pointed](https://www.lesswrong.com/posts/DJB82jKwgJE5NsWgT/some-cruxes-on-impactful-alternatives-to-ai-policy-work) [out](https://astralcodexten.substack.com/p/more-antifragile-diversity-libertarianism) that impact in the modern world has a tail-heavy distribution. In some areas, it's effective to have many separate groups putting their full strength behind diverse high-variance projects. Many of them will fail, but some will succeed massively. The project of advancing AI Safety is, to a large extent, one such area.[[3]](#fna1x4c6dx9sb)

My general advice would be as follows:

* Be opportunistic. Keep an up-to-date mental picture of the world, look for local opportunities to advance our cause, and coordinate with others where possible.
	+ A good project also *creates* opportunities upon success; ideally, the whole thing should work as a positive feedback loop.
* [Use your Pareto frontier](https://www.lesswrong.com/posts/P3Yt66Wh5g7SbkKuT/how-to-get-into-independent-research-on-alignment-agency#Use_Your_Pareto_Frontier). We need diversity; we don't want our projects to converge because we're using the same heuristics to pick them. Look for what you or your group specifically is uniquely well-placed or well-suited to do, and do that.
* Be ambitious.
Again, tail-heavy distribution: going for low-success-rate high-reward plays is a good policy to adopt (as long as the only thing at stake is your plan's success!).
* Cheat. Not in the sense of being unethical, but in the sense of reframing and revising the problem so that you can achieve 80% of the impact with 20% of the effort. It's not always possible, but it frequently is, in sociopolitical interventions especially. [Tug sideways](https://www.overcomingbias.com/2019/03/tug-sideways.html).
* Keep scale in mind. Changing the value of a crucial industry-wide variable by 1% is an enormous impact.

There's a caveat here, though: without central coordination, how can we ensure that none of these disparate projects trip *each other* up? As I mentioned, successfully appealing to governments might mess up our relations with AI researchers, and failed persuasion attempts (macro- and micro- both) often make subsequent ones much harder.

Some amount of that is probably inevitable. Part of it can be mitigated by trying out minimal/small-scale versions of any projects that might result in net-negative impact on failure. But a much larger chunk of it could be mitigated by ensuring that we—

**5. Avoid Thermonuclear Ideas**
--------------------------------

You likely know what I'm talking about. The class of ideas that includes lying and manipulation as its most tame members, and expands to cover some much worse extremes.

I know some of these ideas may seem very clever and Appropriately Drastic, and the stakes — literally astronomical — could not be higher. We're accelerating directly into a wall, and our attempts to swerve away seem ineffectual. It may feel emotionally resonant to resolve to Stop Being Nice and Pull Out All the Stops and solve the problem in some gravely decisive fashion, By Any Means Necessary.

But it will not work in the real world, outside fantasies. It will not solve the problem in the long term, and in the meantime it will crash and burn, and hurt people, and ruin our PR, and tank the chances of other, more productive and realistic approaches.

Even if you think *your* idea will definitely succeed, you're failing to think at scale. What would you expect to work better: a policy under which some of us pursue plans that blow up so hard they set us collectively back a few years, or a policy under which our plans only ever compound on each other's successes? Following the first policy is a defection, not just against the rest of society, but against all *our* other risk-mitigation initiatives. We're better than this.

As a rule of thumb, you can use something like [Shannon's maxim](https://en.wikipedia.org/wiki/Kerckhoffs%27s_principle). If whatever clever plan you're considering *and the entire causal chain that led to it* became common knowledge, would it fail *and* destroy our credibility and our other plans? If yes, this is a radioactive plan; get it away from you.

Things that seem like ruthless pragmatism are frequently not actually ruthlessly pragmatic. They're just excuses to indulge your base instincts.

Be cool, in general. Find ways to be cool about this mess. We have [resources](https://www.lesswrong.com/posts/pLLeGA7aGaJpgCkof/mental-health-and-the-alignment-problem-a-compilation-of) for that and everything.

**6. The Thin Line**
--------------------
I concur with [lc's post](https://www.lesswrong.com/posts/kipMvuaK3NALvFHc9/what-an-actually-pessimistic-containment-strategy-looks-like) and the people in [that post's comments](https://www.lesswrong.com/posts/kipMvuaK3NALvFHc9/what-an-actually-pessimistic-containment-strategy-looks-like?commentId=Hyg8pgLTY5oYFEdAK): we have a slight taboo against the sort of full-scale activism I'm arguing for. It's exemplified by [this](https://www.lesswrong.com/posts/3eP8D5Sxih3NhPE6F/usd20k-in-prizes-ai-safety-arguments-competition?commentId=hM6DkcDf5qqyCs4L2) sort of sentiment.

I suspect it's a combination of two things:

* An instinctive desire to stay far, far away from the radioactive plans I've described in the previous section.
* A failure to shift between epistemic and instrumental rationality; between enforcing community norms internally and having an external impact.

It makes sense that it exists. One of the foundations of this movement is "raising the sanity waterline" — and approaching interactions with people outside the movement with less rigor is not how you set an example. It's also easier to enforce the same norms upon yourself and each other in all situations, instead of switching between different sets depending on context.

I'm *tempted* to say that we've overcorrected here; that we can or *must* relax our standards somewhat, in light of shortening timelines and in the face of our slow progress. But I'm not sure. Relaxing the standards is *absolutely* a slippery slope. This decision might be "meta-radioactive", in the sense that it will see us accelerating straight into the epicenter of a nuclear explosion.

I don't know how to strike the right balance here. It definitely *seems* like we can opt to be more effective without inching towards self-defeating Stupid Evil, but maybe the mere act of acknowledging that possibility would shift our social reality in undesirable ways? Maybe [John's position](https://www.lesswrong.com/posts/3eP8D5Sxih3NhPE6F/usd20k-in-prizes-ai-safety-arguments-competition?commentId=hM6DkcDf5qqyCs4L2) is right, and we should call out our epistemically suspect behavior even as we agree that it's the right thing to do.

**7. Closing Thoughts**
-----------------------

The recent months have seen increasing amounts of alarm and doom-saying in our circles. AI capabilities are advancing rapidly, while our attempts at alignment proceed at a frustratingly slow pace. There are [optimistic](https://www.lesswrong.com/posts/3L46WGauGpr7nYubu/the-plan) [voices](https://www.lesswrong.com/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge), but the general disposition seems [quite grim](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy).

Well. If alignment is really so hard, maybe we should quit trying to solve it?

In hindsight, I'm a bit baffled that field-building wasn't our main focus *this entire time*. Getting the AI industry to take AI risk seriously is a *necessary and sufficient* condition for survival. Solving alignment by ourselves is... neither. If the technical problems are truly insurmountable in the time we have left — and I don't yet know that they are, but I can certainly imagine it — we should just shift our focus to social-based solutions.

The goal, I should note, is not *outreach*.
Convincing a few, or many, AI researchers to switch to alignment won't solve the problem of a multi-billion-dollar industry stockpiling uranium in the hopes of spontaneously assembling a nuclear reactor. The aim should be to shift that status quo. Changing people's minds is a fine instrumental goal, but the terminal one is to influence [the robust agent-agnostic process](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic) itself.

I'd like to suggest that there might be a snowball effect involved — that 10% progress at this task would make the subsequent 90% easier, and so on. There might, indeed, be. I'm not that optimistic, though. I expect it'll be an uphill battle the whole way, because the sort of carefulness we'd like to cultivate has the tendency to rot away, as organizations become corrupted and people value-drift.

It's possible that this is also impossible. That we can't change the AI industry in time, any more than we can independently solve alignment in time. But it seems less impossible to me. And if we keep looking for approaches that are less and less impossible, perhaps we'll find one that isn't impossible at all.

1. **[^](#fnrefhi5u3uygm4t)**Which may or may not have been recently confirmed by [this](https://www.lesswrong.com/posts/G993PFTwqqdQv4eTg/is-ai-progress-impossible-to-predict).
2. **[^](#fnrefgx314q5i9w)**Very important. See point 7 [here](https://slatestarcodex.com/2016/02/20/writing-advice/).
3. **[^](#fnrefa1x4c6dx9sb)**But not AI Safety *itself*, of course, only the project of spreading it. AI is very much Scott's Distribution 1, and the fact that our civilization is treating it as a Distribution 2 is the entire bloody problem.
e418efe4-ff9f-4bd9-a7f6-36c41cb4c8bc
trentmkelly/LessWrong-43k
LessWrong
A (small) critique of total utilitarianism In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and creating/giving birth to another being of comparable happiness (or preference satisfaction or welfare). In fact if one can kill a billion people to create a billion and one, one is morally compelled to do so. And this is true for real people, not just thought experiment people - living people with dreams, aspirations, grudges and annoying or endearing quirks. To avoid causing extra pain to those left behind, it is better that you kill off whole families and communities, so that no one is left to mourn the dead. In fact the most morally compelling act would be to kill off the whole of the human species, and replace it with a slightly larger population. We have many real world analogues to this thought experiment. For instance, it seems that there is only a small difference between the happiness of richer nations and poorer nations, while the first consume many more resources than the second. Hence to increase utility we should simply kill off all the rich, and let the poor multiply to take their place (continually bumping off any of the poor that gets too rich). Of course, the rich world also produces most of the farming surplus and the technology innovation, which allow us to support a larger population. So we should aim to kill everyone in the rich world apart from farmers and scientists - and enough support staff to keep these professions running (Carl Shulman correctly points out that we may require most of the rest of the economy as "support staff". Still, it's very likely that we could kill off a significant segment of the population - those with the highest consumption relative to their impact of farming and science - and still "improve" the situation). Even if turns out to be problematic to implement in practice, a true total utilitarian should be thinking: "I really, really wish there was a way to do targeted killing of many people in the USA, Eur
9b89ceb7-8bcc-4d63-9842-bb3762382d41
trentmkelly/LessWrong-43k
LessWrong
What's Not Our Problem Cross-posted, as always, from Putanumonit. Putanumonit started in 2015, and like every rationalist blogger at the time my main inspirations were Scott Alexander and Tim Urban. In the year prior, Scott wrote a remarkable series of posts transforming how I understand politics and political psychology: motte and bailey, politics as fashion, tolerating the outgroup, toxoplasma, and how social justice ideology is being used to grab power and exercise cruelty. Meanwhile, Tim successfully ignored politics to write great posts on technology, relationships, and procrastination. I went out of my way to meet both in person, got entirely conflicting advice from the two very different writers, and dove into this blog. In 2016, as millions of human brains were getting hijacked by Trump and wokeness and Trump vs. wokeness, my most popular posts were about buying soap, dating profiles, and Pokemon Go . I was relieved to see that Scott was mostly ignoring the madness and writing about economics and psychiatry and aliens, and surely Tim would… uh oh, where has Tim disappeared to? In 2016 Tim fell down a six-year rabbit hole of thinking about US politics, and the sociology of politics, and how Americans think about politics. The result is What’s Our Problem, a joyfully illustrated book on how the existential problems with our national politics stem from the internal conflict between “higher” and “lower” cognition in each individual’s mind. The book starts by laying out this dichotomy. Higher rung thinking is truth seeking, open to challenge, curious, and scientific. Groups of high rung thinkers collaborate in “idea labs” that magnify their constituents’ ability to discover and share truth and good ideas. Lower rung thinking is tribal, close-minded, confirmation biased, and zealous. Low rung thinkers coalesce into echo chambers that enforce conformity and disable their members’ individual thinking skills. This scales up to the country as a whole, adding a vertical dimension to the
9fad58d5-35f5-4e11-b5e7-1da7005d105b
trentmkelly/LessWrong-43k
LessWrong
What explanatory power does Kahneman's System 2 possess? In the 70s and 80s, Kahneman and Tversky did a bunch of pioneering research on heuristics and biases in human thought. Then, in Thinking Fast and Slow, Kahneman divided human cognition into System 1 and System 2 - basically, System 1 applies quick heuristics which are prone to biases, and System 2 does the slow, effortful thinking. But what does System 2 actually add to the theory in terms of explanatory power? Consider an alternative version of Thinking Fast and Slow in which Kahneman wrote something like "Here are the conditions in which humans use this mode of reasoning I'm calling System 1, which is fast and approximate and effortless and uses heuristics and demonstrates biases which can be detected in certain ways. The rest of the time, I have no idea what's going on, except that it doesn't display the traits that would qualify it as System 1 inference." In what ways would this be less informative than his actual claims?
b7334c02-2f8c-45c0-a423-ebdc20dc709f
trentmkelly/LessWrong-43k
LessWrong
Autism, Watson, the Turing test, and General Intelligence Thinking aloud: Humans are examples of general intelligence - the only example we're sure of. Some humans have various degrees of autism (low level versions are quite common in the circles I've moved in), impairing their social skills. Mild autists nevertheless remain general intelligences, capable of demonstrating strong cross domain optimisation. Psychology is full of other examples of mental pathologies that impair certain skills, but nevertheless leave their sufferers as full fledged general intelligences. This general intelligence is not enough, however, to solve their impairments. Watson triumphed on Jeopardy. AI scientists in previous decades would have concluded that to do so, a general intelligence would have been needed. But that was not the case at all - Watson is blatantly not a general intelligence. Big data and clever algorithms were all that were needed. Computers are demonstrating more and more skills, besting humans in more and more domains - but still no sign of general intelligence. I've recently developed the suspicion that the Turing test (comparing AI with a standard human) could get passed by a narrow AI finely tuned to that task. The general thread is that the link between narrow skills and general intelligence may not be as clear as we sometimes think. It may be that narrow skills are sufficiently diverse and unique that a mid-level general intelligence may not be able to develop them to a large extent. Or, put another way, an above-human social intelligence may not be able to control a robot body or do decent image recognition. A super-intelligence likely could: ultimately, general intelligence includes the specific skills. But his "ultimately" may take a long time to come. So the questions I'm wondering about are: 1. How likely is it that a general intelligence, above human in some domain not related to AI development, will acquire high level skills in unrelated areas? 2. By building high-performance narrow AIs, are we making it muc
af51ec11-055d-42b0-a55f-d5555a14e25b
trentmkelly/LessWrong-43k
LessWrong
Meetup : London mini-meetup Discussion article for the meetup : London mini-meetup WHEN: 07 August 2011 02:00:00PM (+0100) WHERE: Shakespeares Head, Africa House, 64-68 Kingsway, City of London WC2B 6AG, United Kingdom Sorry for the late notice: the next London mini-meetup is on Sunday August 7 at 14:00 at the Shakespeares Head (official page) on Kingsway near Holborn Tube station. Note that there's more than one pub in London with that name, so make sure you get the right one. As always, we'll have a big picture of a paperclip on the table so you can find us; I look like this. For more timely notice, subscribe to the London Less Wrong mailing list. We are aiming to have a "full" meetup on the Sunday of every other month, with other gatherings on the first and third Sunday of every month. Hope to see lots of you there! Discussion article for the meetup : London mini-meetup
ceb391f3-35b6-47ef-8fa8-ba46fe9fd05a
trentmkelly/LessWrong-43k
LessWrong
Let's Talk About Intelligence I'm writing this because, for a while, I have noticed that I am confused: particularly about what people mean when they say someone is intelligent. I'm more interested in a discussion here than actually making a formal case, so please excuse my lack of actual citations. I'm also trying to articulate my own confusion to myself as well as everyone else, so this will not be as focused as it could be. If I had to point to a starting point for this state, I'd say it was in psych class, where we talked about research presented by Eyesenck and Gladwell. Eyesenck is very clear to define intelligence as the ability to solve abstract problems, but not necessarily the motivation . In many ways, this matches Yudkowsky's definition, where he talks about intelligence as a property we can ascribe to an entity, which lets us predict that the entity will be able to complete a task, without ourselves necessarily understanding the steps toward completion. The central theme I'm confused about is the generality of the concept: are we really saying that there is a general algorithm or class of algorithms that will solve most or all problems to within a given distance from optimum? Let me give an example. Depending on what test you use, an autistic can look clinically retarded, but with 'islands' of remarkable ability, even up to genius levels. The classic example is “Rain Man,” who is depicted as easily solving numerical problems most people don't even understand, but having trouble tying his shoes. This is usually an exaggeration (by no means are all autistics savants), and these island skills are hardly limited to math. The interesting point, though, is that even someone with many such islands can have an abysmally low overall IQ. Some tests correct for this – Raven's Pattern matching test, for instance, gives you increasingly complex patterns that you have to complete – and this tends to level out those islands, and give an overall score that seems commensurate with the sheer geni
6b77978f-cc3b-462b-9b74-e70779c78d14
StampyAI/alignment-research-dataset/arxiv
Arxiv
Modulation of viability signals for self-regulatory control 1 Introduction --------------- The continual interaction that exists between an organism and the environment requires an active form of regulation of the mechanisms safeguarding its integrity. There are several aspects an agent must consider, ranging from assessing various sources of information to anticipating changes in its surroundings. In order to decide what to do, an agent must consider between different courses of action and factor in the potential costs and benefits derived from its hypothetical future behavior. This process of selection among different value-based choices can be formally described as an optimization problem. Depending on the formalism, the cost or utility functions optimized by the agent presuppose different normative interpretations. In reinforcement learning (RL) for instance, an agent has to maximize the expected reward guided by a signal provided externally by the environment in an oracular fashion. The reward in some cases is also complemented with an intrinsic contribution, generally corresponding to an epistemic deficiency within the agent. For example prediction error [[24](#bib.bib24)], novelty [[3](#bib.bib3), [5](#bib.bib5), [23](#bib.bib23)] or ensemble disagreement [[25](#bib.bib25)]. It is important to note that incorporating these surrogate rewards into the objectives of an agent is often regarded as one of many possible enhancements to increase its performance, rather than been motivated by a concern with explaining the roots of goal-directed behavior. In active inference [[14](#bib.bib14)], the optimization is framed in terms of the minimization of the variational free energy to try to reduce the difference between sensations and predictions. Instead of rewards, the agent holds a prior over preferred future outcomes, thus an agent minimizing its free energy acts to maximize the occurrence of these preferences and to minimize its own surprisal. Value arises not as an external property of the environment, but instead it is conferred by the agent as a contextual consequence of the interplay of its current configuration and the interpretation of stimuli. There are recent studies that have successfully demonstrated how to reformulate RL and control tasks under the active inference framework. While for living processes it is reasonable to assume that the priors emerge and are refined over evolutionary scales and during a lifetime, translating this view into a detailed algorithmic characterization raises important considerations because there is no evolutionary prior to draw from. Thus the approaches to specify a distribution of preferences have included for instance, taking the reward an RL agent would receive and encoding it as the prior [[16](#bib.bib16), [21](#bib.bib21), [29](#bib.bib29), [32](#bib.bib32), [33](#bib.bib33), [34](#bib.bib34)], connecting it to task objectives [[29](#bib.bib29)] or through expert demonstrations [[6](#bib.bib6), [7](#bib.bib7), [30](#bib.bib30)]. In principle this would suggest that much of the effort that goes into reward engineering in RL is relocated to that of specifying preferred outcomes or to the definition of a phase space. Nonetheless active inference provides important conceptual adjustments that could potentially facilitate conceiving more principled schemes towards a theory of agents that could provide a richer account of autonomous behavior and self-generation of goals, desires or preferences. 
These include the formulation of objectives and utilities under a common language residing in belief space, and appealing to a worldview in which utility is not treated as independent or detached from the agent. In particular the latter could encourage a more organismic perspective of the agent in terms of the perturbations it must endure and the behavioral policies it attains to maintain its integrity [[11](#bib.bib11)]. Here we explore this direction by considering how a signal acquires functional significance as the agent identifies it as a condition necessary for its viability and future continuity in the environment. Mandated by an imperative to minimize surprisal, the agent learns to associate sensorimotor events to specific outcomes. First, we start by introducing the surprise minimizing RL (SMiRL) specification [[4](#bib.bib4)] before we proceed with a brief overview of the expected free energy. Then we motivate our approach from the perspective of a self-regulatory organism. Finally, we present results from our case study and close with some observations and further potential directions. 2 Preliminaries ---------------- ### 2.1 Model-free surprisal minimization Consider an environment whose generative process produces a state st∈𝒮subscript𝑠𝑡𝒮s\_{t}\in\mathcal{S}italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∈ caligraphic\_S at each time step t𝑡titalic\_t resulting in an agent observing ot∈𝒪subscript𝑜𝑡𝒪o\_{t}\in\mathcal{O}italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∈ caligraphic\_O. The agent acts on the environment with at∈𝒜subscript𝑎𝑡𝒜a\_{t}\in\mathcal{A}italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∈ caligraphic\_A according to a policy π𝜋\piitalic\_π, obtaining the next observation ot+1subscript𝑜𝑡1o\_{t+1}italic\_o start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT. Suppose the agent performs density estimation on the last t−k𝑡𝑘t-kitalic\_t - italic\_k observations to obtain a current set of parameter(s) θtsubscript𝜃𝑡\theta\_{t}italic\_θ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT summarizing pθ(o)subscript𝑝𝜃𝑜p\_{\theta}(o)italic\_p start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( italic\_o ). As these sufficient statistics contain information about the agent-environment coupling, they are concatenated with the observations into an augmented state xt=(ot,θt)subscript𝑥𝑡subscript𝑜𝑡subscript𝜃𝑡x\_{t}=(o\_{t},\theta\_{t})italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = ( italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_θ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ). Every time step, the agent computes the surprisal generated by a new observation given its current estimate and then updates it accordingly. In order to minimize surprisal under this model-free RL setting, the agent should maximize the expected log of the model evidence 𝔼[∑tγtln⁡pθt(ot)]𝔼delimited-[]subscript𝑡superscript𝛾𝑡subscript𝑝subscript𝜃𝑡subscript𝑜𝑡\mathbb{E}[\sum\_{t}\gamma^{t}\ln p\_{\theta\_{t}}(o\_{t})]blackboard\_E [ ∑ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT roman\_ln italic\_p start\_POSTSUBSCRIPT italic\_θ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ] [[4](#bib.bib4)]. 
Alternatively, we maintain consistency with active inference by expressing the optimal surprisal Q-function as, | | | | | | --- | --- | --- | --- | | | Qπ\*(x,a)=𝔼π[−ln⁡pθ(o)+γmina′⁡Qπ\*(x′,a′)]subscript𝑄superscript𝜋𝑥𝑎subscript𝔼𝜋delimited-[]subscript𝑝𝜃𝑜𝛾subscriptsuperscript𝑎′subscript𝑄superscript𝜋superscript𝑥′superscript𝑎′Q\_{\pi^{\*}}(x,a)=\mathbb{E}\_{\pi}[-\ln p\_{\theta}(o)+\gamma\min\_{a^{\prime}}Q\_{\pi^{\*}}(x^{\prime},a^{\prime})]italic\_Q start\_POSTSUBSCRIPT italic\_π start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) = blackboard\_E start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT [ - roman\_ln italic\_p start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( italic\_o ) + italic\_γ roman\_min start\_POSTSUBSCRIPT italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT italic\_Q start\_POSTSUBSCRIPT italic\_π start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ] | | (1) | estimated via DQN [[22](#bib.bib22)] or any function approximator with parameters ϕitalic-ϕ\phiitalic\_ϕ such that Qπ\*(x,a)≈Q(x,a;ϕ)subscript𝑄superscript𝜋𝑥𝑎𝑄𝑥𝑎italic-ϕQ\_{\pi^{\*}}(x,a)\approx Q(x,a;\phi)italic\_Q start\_POSTSUBSCRIPT italic\_π start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) ≈ italic\_Q ( italic\_x , italic\_a ; italic\_ϕ ). ### 2.2 Expected free energy The free energy principle (FEP) [[15](#bib.bib15)] has evolved from an account of message passing in the brain to propose a probabilistic interpretation of self-organizing phenomena [[13](#bib.bib13), [27](#bib.bib27), [28](#bib.bib28)]. Central to current discourse around the FEP is the notion of the Markov blanket to describe a causal separation between the internal states of a system from external states, as well as the interfacing blanket states (i.e. sensory and active states). The FEP advances the view that a system remains far from equilibrium by maintaining a low entropy distribution over the states it occupies during its lifetime. Accordingly, the system attempts to minimize the surprisal of an event at a particular point in time. This can be more concretely specified if we consider a distribution p(o)𝑝𝑜p(o)italic\_p ( italic\_o ) encoding the states, drives or desires the system should fulfil. Thus the system strives to obtain an outcome o𝑜oitalic\_o that minimizes the surprisal −ln⁡p(o)𝑝𝑜-\ln p(o)- roman\_ln italic\_p ( italic\_o ). Alternatively, we can also state this as the agent maximizing its model evidence or marginal likelihood p(o)𝑝𝑜p(o)italic\_p ( italic\_o ). For most cases estimating the actual marginal is intractable, therefore a system instead minimizes the free energy [[10](#bib.bib10), [18](#bib.bib18)] which provides an upper bound on the log marginal [[19](#bib.bib19)], | | | | | | --- | --- | --- | --- | | | 𝐅=𝔼q(s)[ln⁡q(s)−ln⁡p(o,s)]𝐅subscript𝔼𝑞𝑠delimited-[]𝑞𝑠𝑝𝑜𝑠\mathbf{F}=\mathbb{E}\_{q(s)}[\ln q(s)-\ln p(o,s)]bold\_F = blackboard\_E start\_POSTSUBSCRIPT italic\_q ( italic\_s ) end\_POSTSUBSCRIPT [ roman\_ln italic\_q ( italic\_s ) - roman\_ln italic\_p ( italic\_o , italic\_s ) ] | | (2) | where p(o,s)𝑝𝑜𝑠p(o,s)italic\_p ( italic\_o , italic\_s ) is the generative model and q(s)𝑞𝑠q(s)italic\_q ( italic\_s ) the variational density approximating hidden causes. 
Equation [2](#S2.E2) is used to compute a static form of free energy and to infer hidden causes given a set of observations. However, if we instead consider an agent that acts over an extended temporal dimension, it must infer and select policies that minimize the expected free energy (EFE) $\mathbf{G}$ [[14](#bib.bib14)] of a policy $\pi$ for a future step $\tau > t$. This can be expressed as

$$\mathbf{G}(\pi,\tau) = \mathbb{E}_{q(o_{\tau}, s_{\tau} \mid \pi)}\left[\ln q(s_{\tau} \mid \pi) - \ln p(o_{\tau}, s_{\tau} \mid \pi)\right] \tag{3}$$

where $p(o_{\tau}, s_{\tau} \mid \pi) = q(s_{\tau} \mid o_{\tau}, \pi)\, p(o_{\tau})$ is the generative model of the future.
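Assuming the approximate posterior factorizes as $q(o_{\tau}, s_{\tau} \mid \pi) = q(o_{\tau} \mid \pi)\, q(s_{\tau} \mid o_{\tau}, \pi)$ (a standard assumption in this setting), substituting the generative model of the future into equation (3) gives

$$\mathbf{G}(\pi,\tau) = \mathbb{E}_{q(o_{\tau}, s_{\tau} \mid \pi)}\left[\ln q(s_{\tau} \mid \pi) - \ln q(s_{\tau} \mid o_{\tau}, \pi) - \ln p(o_{\tau})\right],$$

and taking the expectation over $o_{\tau}$ first groups these terms into the instrumental and epistemic components of equation (4) below.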
Rearranging $\mathbf{G}$ as

$$\mathbf{G}(\pi,\tau) = -\underbrace{\mathbb{E}_{q(o_{\tau} \mid \pi)}\left[\ln p(o_{\tau})\right]}_{\text{instrumental value}} - \underbrace{\mathbb{E}_{q(o_{\tau} \mid \pi)}\left[D_{KL}\left[q(s_{\tau} \mid o_{\tau}, \pi)\,\|\,q(s_{\tau} \mid \pi)\right]\right]}_{\text{epistemic value}} \tag{4}$$

illustrates how the EFE entails a pragmatic, instrumental or goal-seeking term that realizes preferences, and an epistemic or information-seeking term that resolves uncertainty. An agent selects a policy with probability $q(\pi) = \sigma(-\beta \sum_{\tau} \mathbf{G}_{\tau}(\pi))$, where $\sigma$ is the softmax function and $\beta$ is the inverse temperature. In summary, an agent minimizes its free energy via active inference by changing its beliefs about the world or by sampling the regions of the space that conform to its beliefs.

3 Adaptive control via self-regulation
---------------------------------------

The concept of homeostasis has played a crucial role in our understanding of physiological regulation. It describes the capacity of a system to maintain its internal variables within certain bounds. Recent developments in the FEP describing the behavior of self-organizing systems can be interpreted as an attempt to formalize this concept [[28](#bib.bib28)]. From this point of view, homeostatic control in an organism refers to the actions necessary to minimize the surprisal of the values reported by interoceptive channels, constraining them to those favored by a viable set of states. What is less well understood is how these attracting states come into existence: how they emerge from the particular conditions surrounding the system, and how they are discovered among the potential space of signals. Recently, it has been shown that complex behavior may arise by minimizing surprisal in observation space (i.e. over sensory states) without pre-encoded fixed prior distributions in large state spaces [[4](#bib.bib4)].
Here we consider an alternative angle intended to remain closer to the homeostatic characterization of a system. In our scenario, we assume that, given the particular dynamics of an environment, an agent equipped only with a basic density estimation capacity will find it difficult to structure its behavior around the kind of regularities in observation space that can sustain it in time. In such situations with fast-changing dynamics, rather than minimizing free energy over sensory signals, the agent may instead leverage them to maintain a low future surprisal of another target variable. That implies that although the agent may in principle have access to multiple signals, it might be interested in maintaining only some of them within a certain expected range. Defining what should constitute the artificial physiology of simulated agents is not well established. Therefore we assume the introduction of an information channel representing, in abstract terms, the interoceptive signals that inform the agent about its continuity in the environment. We can draw a rudimentary comparison and think of this value in a way similar to how feelings agglutinate and coarse-grain the changes of several internal physical responses [[9](#bib.bib9)]. In addition, we are interested in the agent learning to determine whether this signal is conducive to its self-preservation in the environment or not.

### 3.1 Case Study

We assess the behavior of an agent in the Flappy Bird environment (fig. [1](#S3.F1), left). This is a task where a bird must navigate between obstacles (pipes) at different positions while stabilizing its flight. Despite its apparent simplicity, the environment offers a fundamental aspect present in the physical world: the inherent dynamics lead spontaneously to the functional disintegration of the agent. If the agent stops propelling itself, it succumbs to gravity and falls. At the same time the environment has a constant scrolling rate, which implies that the agent cannot remain floating at a single point and cannot survive simply by flying aimlessly. Originally, the task provides a reward every time the bird passes between two pipes. However, for our case study the information about the rewards is never propagated and therefore has no impact on the behavior of the agent. The agent receives a feature vector of observations indicating its location and those of the obstacles. In addition, the agent obtains a measurement $v$ indicating its presence in the task (i.e. 1 or 0). This measurement does not represent anything positive or negative by itself; it is simply another signal that we assume the agent is able to compute. Similarly to the outline in [2.1](#S2.SS1), the agent monitors the last $t-k$ values of this measurement and estimates their density to obtain $\theta_t$.
These become the statistics describing the current approximated distribution of preferences $p(v \mid \theta_t)$, or $p_{\theta_t}(v)$, which are also used to augment the observations to $x_t = (o_t, \theta_t)$. When the agent takes a new measurement $v_t$, it evaluates the surprisal against $p_{\theta_{t-1}}(v_t)$. In this particular case it is evaluated via a Bernoulli density function such that $-\ln p_{\theta_{t-1}}(v_t) = -\left(v_t \ln \theta_{t-1} + (1 - v_t)\ln(1 - \theta_{t-1})\right)$. First, we train a baseline model-free surprisal minimizing DQN as specified in [2.1](#S2.SS1), parameterized by a neural network (NN). Then we examine the behavior of a second agent that minimizes the expected free energy. Thus the agent learns an augmented state transition model of the world, parameterized by an ensemble of NNs, and an expected surprisal model, also parameterized by another NN.
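A minimal sketch of this bookkeeping for the binary viability measurement (illustrative code, not from the paper; the window size `k` and the initial value are arbitrary choices):

```python
import numpy as np
from collections import deque


def bernoulli_surprisal(v, theta, eps=1e-6):
    """-ln p_theta(v) for the Bernoulli preference distribution over v in {0, 1}."""
    theta = np.clip(theta, eps, 1.0 - eps)
    return -(v * np.log(theta) + (1 - v) * np.log(1.0 - theta))


class ViabilityChannel:
    """Running estimate theta_t of the viability measurement: each new
    measurement v_t is scored against the previous estimate theta_{t-1},
    and theta is re-estimated from the last k measurements."""

    def __init__(self, k=200, initial=0.5):
        self.history = deque([initial], maxlen=k)

    def step(self, v_t):
        theta_prev = float(np.mean(self.history))
        surprisal = bernoulli_surprisal(v_t, theta_prev)
        self.history.append(float(v_t))
        theta_t = float(np.mean(self.history))
        return theta_t, surprisal
```

In the Flappy Bird setting $v_t = 1$ while the agent remains in the task, so $\theta$ should drift toward 1 as the agent survives longer, making $v = 0$ increasingly surprising.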
In order to identify an optimal policy we apply rolling horizon evolution [[26](#bib.bib26)] to generate candidate policies $\pi = (a_{\tau}, \ldots, a_{T})$ and to associate them with an expected free energy given by (appendix [0.A](#Pt0.A1))

$$\mathbf{G}(\pi,\tau) \approx -\mathbb{E}_{q(o_{\tau}, v_{\tau}, \theta \mid \pi)}\, D_{KL}\left[q(s_{\tau} \mid o_{\tau}, v_{\tau}, \pi)\,\|\,q(s_{\tau} \mid \pi)\right] - \mathbb{E}_{q(v_{\tau}, \theta, s_{\tau} \mid \pi)}\left[\ln p_{\theta}(v_{\tau})\right] \tag{5}$$

If we explicitly consider the model parameters $\phi$, equation [5](#S3.E5) can be decomposed as (appendix [0.B](#Pt0.A2))

$$\begin{aligned}
\mathbf{G}(\pi,\tau) \approx{} & -\underbrace{\mathbb{E}_{q(o_{\tau}, v_{\tau}, \phi \mid \pi)}\, D_{KL}\left[q(s_{\tau} \mid o_{\tau}, v_{\tau}, \pi)\,\|\,q(s_{\tau} \mid \pi)\right]}_{\text{salience}} \\
& -\underbrace{\mathbb{E}_{q(o_{\tau}, v_{\tau}, s_{\tau} \mid \pi)}\, D_{KL}\left[q(\phi \mid s_{\tau}, o_{\tau}, v_{\tau}, \pi)\,\|\,q(\phi)\right]}_{\text{novelty}} \\
& -\underbrace{\mathbb{E}_{q(o_{\tau}, v_{\tau}, s_{\tau}, \phi \mid \pi)}\left[\ln p_{\theta}(v_{\tau})\right]}_{\text{instrumental value}}
\end{aligned}$$

The expression unpacks further the epistemic contributions to the EFE in terms of salience and novelty [[17](#bib.bib17)]. These terms refer to the expected reduction in uncertainty about hidden causes and about the parameters, respectively. For this task $o = s$, so only the first and third terms are considered.

### 3.2 Evaluation

The plot in fig. [1](#S3.F1) (center) tracks the performance of an EFE agent in the environment (averaged over 10 seeds). The dotted line represents the surprisal minimizing DQN agent after 1000 episodes. The left axis corresponds to the (unobserved) task reward while the right axis indicates the approximate number of time steps the agent survives. During the first trials, and before the agent exhibits any form of competence, it was observed that the natural coupling between agent and environment grants the agent a life expectancy of roughly 19-62 time steps in the task. This is essential, as it starts to populate the statistics of $v$. Measuring a specific quantity $v$, although initially representing just another signal, begins to acquire a certain value due to the frequency with which it occurs. In turn, this starts to dictate the preferences of the agent, as it hints that measuring a certain signal correlates with having a stable configuration for this particular environment, as implied by its low surprisal.
Fig. [1](#S3.F1) (right) shows the evolution of the parameter $\theta$ (averaged within an episode) corresponding to the distribution of preferred measurements $p_{\theta}(v)$, which determines the level of surprisal assigned when receiving the next $v$. As the agent reduces its uncertainty about the environment it also becomes more capable of associating sensorimotor events with specific measurements. The behavior becomes more consistent with seeking less surprising measurements and, as we observe, this reinforces its preferences, exhibiting the circular self-evidencing dynamics that characterize an agent minimizing its free energy.

![Figure 1](/html/2007.09297/assets/results.png)

Figure 1: Left: The Flappy Bird environment. Center: Performance of an EFE agent. The left axis indicates the unobserved rewards as reported by the task and the right axis the number of time steps it survives in the environment. The dotted line shows the average performance of an SM-DQN after 1000 episodes. Right: Parameter $\theta$ over time, summarizing the intra-episode sufficient statistics of $p_{\theta}(v)$.

4 Discussion
-------------

Learning preferences in active inference: The major thesis in active inference is the notion of an agent acting in order to minimize its expected surprise. This implies that the agent will exhibit a tendency to seek the sort of outcomes that have high prior probability according to a biased model of the world, giving rise to goal-directed behavior. Owing to the difficulty of modeling agents that exhibit increasing levels of autonomy, agent-based simulations under this framework, much as has largely occurred in RL, have tended to concentrate on generating a particular expected behavior in the agent; that is, on making the agent perform a task by encoding predefined goals [[16](#bib.bib16), [21](#bib.bib21), [29](#bib.bib29), [32](#bib.bib32), [33](#bib.bib33), [34](#bib.bib34)] or providing guidance [[6](#bib.bib6), [7](#bib.bib7), [30](#bib.bib30)]. However, there has been recent progress in mitigating this issue. For example, in some of the simulations in [[29](#bib.bib29)] the authors included a distribution over prior preferences to account for each of the cells in Frozen Lake, a gridworld-like environment. Over time the prior preferences are tuned, leading to habit formation. Most related to our work are the studies on surprise minimizing RL (SMiRL) by [[4](#bib.bib4)], where model-free agents performed density estimation on their observation space and acquired complex behavior in various tasks by maximizing the model evidence of their observations. Here we have also opted for this approach; however, we have grounded it in organismic considerations of viability, inspired by insights on the nature of agency and adaptive behavior [[1](#bib.bib1), [12](#bib.bib12), [11](#bib.bib11)]. It has been suggested that even if some of these aspects are defined exogenously, they could capture general components of all physical systems and could potentially be derived in a more objective manner than task-based utilities [[20](#bib.bib20)].
Moreover, these views suggest that the inherent conditions of precariousness and the perturbations an agent must face are crucial ingredients for the emergence of purpose-generating mechanisms. In that sense, our main concern has been to explore an instance of the conditions in which a stable set of attracting states arises, conferring value on observations and leading to what appeared to be self-sustaining dynamics. Although all measurements lacked any initial functional value, the model presupposes the capacity of the agent to measure its operational integrity, as would occur in an organism monitoring its bodily states. This raises the issue of establishing more principled protocols to define what should constitute the internal milieu of an agent.

Agent-environment coupling: A matter for further analysis, also motivated by results in [[4](#bib.bib4)], is the role of the environment in providing structure to the behavior of the agent. For instance, in the environments in [[4](#bib.bib4)], a distribution of preferences spontaneously built on the initial set of visual observations tends to correlate with good performance on the task. In the work presented here the initial set of internal measurements afforded by the environment contributes to the formation of a steady state, with the visual features informing the actions necessary to maintain it. Hence, similarly to [[4](#bib.bib4)], the initial conditions of the agent-environment coupling that furnish the distribution $p(v)$ provide a starting solution for the problem of self-maintenance, as long as the agent is able to preserve the statistics. Thus, if the agent lacks a sophisticated sensory apparatus or the capacity to extract invariances, or if the initial statistics of sensory data do not favor the emergence of goal-seeking behavior, tracking its internal configuration may suffice in some situations. However, this requires further unpacking, not only because, as discussed earlier, it remains uncertain how to define the internal aspects of an agent, but also because simulations often do not capture the essential characteristics of real environments [[8](#bib.bib8)].

Drive decomposition: While here we have afforded our model a certain level of independence between the sensory data and the internal measurements, it is sensible to imagine that internal states would affect perception and that perceptual misrepresentation would affect internal states. Moreover, as the agent moves from normative conditions based entirely on viability to acquiring other higher-level preferences, it must learn to integrate and balance different drives and goals.
From equation [8](#Pt0.A1.E8) it is also possible to conceive a simplified scenario and establish the following expression (appendix [0.D](#Pt0.A4)),

$$\begin{aligned}
\mathbf{G}(\pi,\tau) \approx{} & \underbrace{\mathbb{E}_{q(o_{\tau}, v_{\tau}, \theta, s_{\tau} \mid \pi)}\left[\ln q(s_{\tau} \mid \pi) - \ln p(s_{\tau} \mid o_{\tau}, \pi)\right]}_{\text{epistemic value}} \\
& -\underbrace{\mathbb{E}_{q(o_{\tau}, v_{\tau}, \theta, s_{\tau} \mid \pi)}\left[\ln p(o_{\tau})\right]}_{\text{high level value}} \\
& +\underbrace{\mathbb{E}_{q(o_{\tau}, s_{\tau} \mid \pi)}\, H\left[p(v_{\tau} \mid s_{\tau}, o_{\tau}, \pi)\right]}_{\text{regulatory value}}
\end{aligned}$$

where the goal-seeking value is decomposed into a component that considers preferences encoded in a distribution
$p(o)$ and another element estimating the expected entropy of the distribution of essential variables. Policies would balance these contributions, resolving hypothetical situations such as a higher-level goal being at odds with the viability of the system.

Acknowledgment
--------------

This research utilised Queen Mary’s Apocrita HPC facility, supported by QMUL Research-IT. doi:10.5281/zenodo.438045
Preparing for the Intelligence Explosion

This is a linkpost for a new paper called Preparing for the Intelligence Explosion, by Will MacAskill and Fin Moorhouse. It sets the high-level agenda for the sort of work that Forethought is likely to focus on. Some of the areas in the paper that we expect to be of most interest to EA Forum or LessWrong readers are:

* Section 3 finds that even without a software feedback loop (i.e. “recursive self-improvement”), even if scaling of compute completely stops in the near term, and even if the rate of algorithmic efficiency improvements slow, then we should still expect very rapid technological development — e.g. a century’s worth of progress in a decade — once AI meaningfully substitutes for human researchers.
* A presentation, in section 4, of the sheer range of challenges that an intelligence explosion would pose, going well beyond the “standard” focuses of AI takeover risk and biorisk.
* Discussion, in section 5, of when we can and can’t use the strategy of just waiting until we have aligned superintelligence and relying on it to solve some problem.
* An overview, in section 6, of what we can do, today, to prepare for this range of challenges.

Here’s the abstract:

> AI that can accelerate research could drive a century of technological progress over just a few years. During such a period, new technological or political developments will raise consequential and hard-to-reverse decisions, in rapid succession. We call these developments grand challenges.
>
> These challenges include new weapons of mass destruction, AI-enabled autocracies, races to grab offworld resources, and digital beings worthy of moral consideration, as well as opportunities to dramatically improve quality of life and collective decision-making.
>
> We argue that these challenges cannot always be delegated to future AI systems, and suggest things we can do today to meaningfully improve our prospects. AGI preparedness is therefore not just about ensuring that advanced AI systems are alig
Zut Allais!

Huh!  I was not expecting that response.  Looks like I ran into an inferential distance.

It probably helps in interpreting the Allais Paradox to have absorbed more of the gestalt of the field of heuristics and biases, such as:

* Experimental subjects tend to defend incoherent preferences even when they're really silly.
* People put very high values on small shifts in probability away from 0 or 1 (the certainty effect).

Let's start with the issue of incoherent preferences - preference reversals, dynamic inconsistency, money pumps, that sort of thing. Anyone who knows a little prospect theory will have no trouble constructing cases where people say they would prefer to play gamble A rather than gamble B; but when you ask them to price the gambles they put a higher value on gamble B than gamble A.  There are different perceptual features that become salient when you ask "Which do you prefer?" in a direct comparison, and "How much would you pay?" with a single item.

My books are packed up for the move, but from what I remember, this should typically generate a preference reversal:

1. 1/3 to win $18 and 2/3 to lose $1.50
2. 19/20 to win $4 and 1/20 to lose $0.25

Most people will (IIRC) rather play 2 than 1.  But if you ask them to price the bets separately - ask for a price at which they would be indifferent between having that amount of money, and having a chance to play the gamble - people will (IIRC) put a higher price on 1 than on 2.  If I'm wrong about this exact example, nonetheless, there are plenty of cases where such a pattern is exhibited experimentally.

So first you sell them a chance to play bet 1, at their stated price.  Then you offer to trade bet 1 for bet 2.  Then you buy bet 2 back from them, at their stated price.  Then you do it again.  Hence the phrase, "money pump". Or to paraphrase Steve Omohundro:  If you would rather be in Oakland than San Francisco, and you would rather be in San Jose than Oakland, and you would rather be in San Fr
How I'm telling my friends about AI Safety

One of the comments on the new book post asked how to tell normie friends about AI safety.  I don't have any special credentials here, but I thought it'd be worthwhile to share the facebook post I've drafted, both to get feedback and to give an example of one way a post could look.  There exist articles and blogs that already do this well, but most people don't read shared articles and it's helpful to have a variety of ways to communicate.  My goal here is to grab attention, defuse defensiveness with some humor, and try to make the problem digestible to someone who isn't immersed in the topic or lingo.  Let me know what you think!

Why AI might kill us all

Dramatic opener – check. I don’t post often, especially not about “causes”, but this one feels like a conversation worth having, even if we sound a little crazy.

TL;DR – AI is (and will be) a really big tool that we need to be super careful wielding. Corporations aren’t going to stop chasing profit for us, so talk to your representatives, and help educate others (this book is coming out soon from top AI safety experts and could be a good resource).

Wouldn’t it be awesome to own a lightsaber?  You could cut through anything!  Unfortunately, knowing me, I’d smush the button while trying to get it out of the packaging and be one limb on my way to becoming Darth Vader.

AI is like a lightsaber. Super cool.  Can do super cool stuff.  Turns out, “cutting off your own limbs” technically counts as “super cool”.

Here’s another analogy: there’s a story about “The Monkey’s Paw” – an old relic that grants the user 3 wishes, like a genie.  The catch is, the wishes always come with some bad consequence.  You wish for $1 Million – congrats – that’s the settlement amount you get for getting paralyzed by a drunk driver!

It occurred to me the other day, that to make a Monkey’s Paw, you don’t need to tell it to do bad things, just to be overly literal or take the easiest path to granting the wish. “I wish that everyo
By how much should Meta's BlenderBot being really bad cause me to update on how justifiable it is for OpenAI and DeepMind to be making significant progress on AI capabilities?

I'm pretty ignorant on AI risk and honestly tech stuff in general, but I'm trying to learn more.  I think AI risk is like the #2 or #3 most important thing, but my naive reaction to the EA community's view in particular was/sorta still is if it's so bad why don't they stop.  When EA people make a pitch for the importance and urgency of AI risk, they point at AlphaGo, GPT-3, and Dall-E, which are huge advances made possible by OpenAI and DeepMind.  Yet 80k and EAG (through the job fairs) actively recruit to non-safety roles at OpenAI and DeepMind and there's lots of EA's who have worked at them, and if anything they're looked upon more favorably for doing so.  When I asked my AI risk EA friends who I basically 99% defer to on AI stuff why we should be so cushy with people trying to do the thing we're saying might be the worst thing ever, they explained that other, less safety-conscious AI groups are not far behind.  Meta, Microsoft, and "AI groups in China" generally, are the ones I've heard referred to each at least 3x.  (Though I don't really get the Microsoft example after hearing about their partnership with OpenAI.)

The if-we-don't-someone-will argument doesn't sit very well with me, but I get it.  Meta's just released a chatbot called Blenderbot though, which, even though it's obviously a different type of endeavor from something like GPT-3, very obviously sucks.  It's not a category difference from the AIM chatbot I remember growing up, honestly.  If someone tried to sell me on impending existential AI risk using this chatbot, I would not be on board.  I assume that Meta is announcing Blenderbot because it is a positive example of Meta's AI work progress though.  Is that a fair assumption?  If not, should I / by how much should this cause me to negatively update on Meta's AI capabilities?  And by how much should it cause me to negatively update on the if-we-don't-someone-will argument, both vis-a-vis Meta and in general?

Earnest thanks for any replies.
Conclusion and Bibliography for "Understanding the diffusion of large language models" *This post is one part of the sequence* [*Understanding the diffusion of large language models*](https://forum.effectivealtruism.org/s/8rYkpiFhbb4HsbzFc)*.  As context for this post, I strongly recommend reading at least the* [*5-minute summary*](https://forum.effectivealtruism.org/posts/nc3JFZbqnzWWAPkmz/understanding-the-diffusion-of-large-language-models-summary-1#5_minute_summary) *of the sequence.* Conclusion ========== **In this sequence I presented key findings from case studies on the diffusion of eight language models that are similar to GPT-3.** **The phenomenon of diffusion has broad relevance to risks from TAI**: 1. The diffusion of AI technology affects when TAI will be developed, and which actors will lead AI development by what margin. This in turn affects how safe the TAI systems are, how the systems are used, and what the state of global politics and economics is like when the systems are used. 2. Diffusion can have benefits, such as helping less-resourced actors to scrutinize leading AI developers, and supporting AI alignment research outside of leading industry AI labs. GPT-3-like models are quite a specific domain, and may seem far from TAI. Nonetheless, **I centered my research on case studies of GPT-3-like models because I think they are relatively informative about how diffusion will impact TAI development**. In particular: 1. The way that diffusion works today (in broad terms) might persist until the development of TAI, especially if TAI is developed relatively soon (e.g., in the next 10 years). 2. TAI systems (or components of them) might resemble today’s best-performing language models, especially if the [scaling hypothesis](https://www.gwern.net/Scaling-hypothesis) is true. So the implications of diffusion related to such models may be similar to the implications of diffusion related to transformative AI systems. 3. Even if a lot changes between now and TAI, the history of diffusion improves our understanding of what *could* happen. **My research has strong limitations**, including that: 1. Much of the data from my case studies is highly uncertain, with quantitative estimates often spanning an order of magnitude. 2. I often generalize from a small set of case studies in a narrow domain. Some of my conclusions are not robust to counterexamples that I might discover in the future. However, I have tried my best to factor this possibility into my confidence levels. 3. Many of my bottom-line conclusions are not supported by much hard evidence, and are instead based on a combination of logical arguments and intuitions. **I think that the concept of diffusion is a productive framing to study competition, publication strategy, and other important dynamics of AI development. I’m excited for other researchers to continue work on diffusion.** These are some of my recommended topics for future work (see [this previous post](https://forum.effectivealtruism.org/posts/4PAi6nNRfQwwhdtBW/questions-for-further-investigation-of-ai-diffusion) for more): 1. Further evaluation of my proposals to limit access to datasets and algorithmic insights 2. The relevance and importance of diffusion mechanisms that were not involved in my case studies. 1. These mechanisms include theft or the leaking of information. 3. Case studies in other domains of AI. 1. This would be useful both to expand the overall amount of empirical data on diffusion, and to make comparisons to my existing case studies. 2. 
Notable candidates for study are [AlphaGo Zero](https://www.deepmind.com/blog/alphago-zero-starting-from-scratch) (game playing domain) and [DALL-E](https://openai.com/blog/dall-e/) (text-image domain). 4. How the publication strategy of emerging AI developers will shift as they grow. 5. How much deployment costs (rather than development costs) will limit the diffusion of (transformative) AI capabilities. 6. How much different inputs to AI development contribute to AI progress. 1. At various points in this sequence I presented my best guesses about the relative importance of different inputs to AI development, but I still have a lot of uncertainty that warrants further research. Bibliography ============ AI21 Labs. (2022). *Announcing AI21 Studio and Jurassic-1 Language Models*. <https://www.ai21.com/blog/announcing-ai21-studio-and-jurassic-1> Ahmed, N., & Wahed, M. (2020). *The De-Democratization of AI: Deep Learning and the Compute Divide in Artificial Intelligence Research*. ArXiv. <https://arxiv.org/abs/2010.15581> Aiken, C., Kagan, R., & Page, M. (2020). *“Cool Projects” or “Expanding the Efficiency of the Murderous American War Machine?” AI Professionals’ Views on Working With the Department of Defense*. Center for Security and Emerging Technology. <https://cset.georgetown.edu/publication/cool-projects-or-expanding-the-efficiency-of-the-murderous-american-war-machine/> Alvi, A., & Kharya, P. (2021). *Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, the World’s Largest and Most Powerful Generative Language Model*. Microsoft Research Blog. <https://www.microsoft.com/en-us/research/blog/using-deepspeed-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-largest-and-most-powerful-generative-language-model/> Anderljung, M. (2021). *Compute Governance Ideas*. Some AI Governance Research Ideas. <https://docs.google.com/document/d/13LJhP3ksrcEBKxYFG5GkJaC2UoxHKUYAHCRdRlpePEc> [Anthony]. (2020). *Date Weakly General AI is Publicly Known*. Metaculus. <https://perma.cc/P6KM-LZY9> Baidu Research. (2021). *Introducing PCL-BAIDU Wenxin (ERNIE 3.0 Titan), the World’s First Knowledge Enhanced Multi-Hundred-Billion Model*. <http://research.baidu.com/Blog/index-view?id=165> Barnett, M. (2020). *Date of Artificial General Intelligence*. Metaculus. <https://perma.cc/2UTN-PME7> Barr, J. (2019). *Amazon EC2 Update - Inf1 Instances with AWS Inferentia Chips for High Performance Cost-Effective Inferencing*. Amazon Web Services. <https://aws.amazon.com/blogs/aws/amazon-ec2-update-inf1-instances-with-aws-inferentia-chips-for-high-performance-cost-effective-inferencing/> Biderman, S., Bicheno, K., & Gao, L. (2022). *Datasheet for the Pile*. Eleuther AI. <https://arxiv.org/pdf/2201.07311.pdf> BigScience. (2022). *Introducing the World’s Largest Open Multilingual Language Model: BLOOM*. <https://bigscience.huggingface.co/blog/bloom> Black, S., Biderman, S., Hallahan, E., Anthony, Q., Gao, L., Golding, L., He, H., Leahy, C.,  McDonell, K., Phang, J., Pieler, M., Prashanth, U. S., Purohit, S., Reynolds, L., Tow, J., Wang, B., & Weinbach, S. (2022). *GPT-NeoX-20B: An Open-Source Autoregressive Language Model*. EleutherAI. <https://arxiv.org/abs/2204.06745> Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., Bernstein, M. S., Bogh, J., Bosselut, A., Brunskill, E., Brynjolfsson, E., Buch, S., Card, D., Castellon, R., Chatterji, N., Chen, A., … Liang, P. (2021). *On the Opportunities and Risks of Foundation Models*. Center for Research on Foundation Models. 
<https://arxiv.org/abs/2108.07258> Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., & Amodei, D. (2020). *Language Models are Few-Shot Learners*. OpenAI. <https://arxiv.org/abs/2005.14165> Carlsmith, J. (2022). *Is Power-Seeking AI an Existential Risk?*. Open Philanthropy. <https://arxiv.org/abs/2206.13353> Bloem, P. (2019). *Transformers from Scratch*. Peterbloem.nl. <https://peterbloem.nl/blog/transformers> Bostrom, N. (2019). The Vulnerable World Hypothesis. *Global Policy*. <https://nickbostrom.com/papers/vulnerable.pdf> Buchanan, B., Musser, M., Lohn, A., & Sedova, K. (2021). *Truth, Lies, and Automation: How Language Models Could Change Disinformation*. Center for Security and Emerging Technology. <https://cset.georgetown.edu/wp-content/uploads/CSET-Truth-Lies-and-Automation.pdf> Chen, H., Fu, C., Rouhani, B. D., Zhao, J., & Koushanfar, F. (2019). *DeepAttest: An End-to-End Attestation Framework for Deep Neural Networks*. Association for Computing Machinery. <https://www.microsoft.com/en-us/research/uploads/prod/2019/05/DeepAttest.pdf> Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberst, A., Barham, P., Chung, H. W., Sutton, C., Gehrmann, S., Schuh, P., Shi, K., Tsvyashchenko, S., Maynez, J., Rao, A., Barnes, P., Tay, Y., Shazeer, N., … Fiedel, N. (2022). *PaLM: Scaling Language Modeling Pathways*. Google Research. <https://arxiv.org/pdf/2204.02311.pdf> Clare, S. (2021). *Great Power Conflict*. Founders Pledge. <https://founderspledge.com/stories/great-power-conflict> Clark, J., Brundage, M., & Solaiman, I. (2019). *GPT-2: 6-Month Follow-Up*. OpenAI. <https://openai.com/blog/gpt-2-6-month-follow-up/> Clifton, J. (2021). *CLR’s Recent Work on Multi-Agent Systems*. AI Alignment Forum. <https://www.alignmentforum.org/posts/EzoCZjTdWTMgacKGS/clr-s-recent-work-on-multi-agent-systems> Etchemendy, J., & Li, F. (2020). *National Research Cloud: Ensuring the Continuation of American Innovation*. Human-Centered Artificial Intelligence. <https://hai.stanford.edu/news/national-research-cloud-ensuring-continuation-american-innovation> Erdil, E., & Besiroglu, T. (2022). *Algorithmic Progress in Computer Vision*. Epoch. <https://arxiv.org/pdf/2212.05153.pdf> Devlin, J., Chang, M., Lee, K., & Toutanova, K. (2018). *BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding*. Google AI Language. <https://arxiv.org/abs/1810.04805> Dillet, R. (2021). *Hugging Face raises $40 million for its natural language processing library*. TechCrunch. <https://techcrunch.com/2021/03/11/hugging-face-raises-40-million-for-its-natural-language-processing-library/> Fedus, W., Zoph, B., & Shazeer, N. (2021). *Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity*. Google. <https://arxiv.org/abs/2101.03961> Field, H. (2022). *How Microsoft and Google Use AI Red Teams to “Stress Test” Their Systems*. Emerging Tech Brew. <https://www.emergingtechbrew.com/stories/2022/06/14/how-microsoft-and-google-use-ai-red-teams-to-stress-test-their-system> [GAA] (2021). *Nuclear Espionage and AI Governance*. Effective Altruism Forum. 
<https://forum.effectivealtruism.org/posts/CKfHDw5Lmoo6jahZD/nuclear-espionage-and-ai-governance-1> Ganguli, D., Hernandez, D., Lovitt, L., DasSarma, N., Henighan, T., Jones, A., Joseph, N., Kernion, J., Mann, B., Askell, A., Bai, Y., Chen, A., Conerly, T., Drain, D., Elhage, N., Showk, S. E., Fort, S., … Clark, J. (2022). *Predictability and Surprise in Large Generative Models*. Association for Computing Machinery. <https://arxiv.org/abs/2202.07785> Gao, L., Biderman, S., Black, S., Golding, L., Hoppe, T., Foster, C., Phang, J., He, H., Thite, A., Nabeshima, N., Presser, S., & Leahy, C. (2020). *The Pile: An 800GB Dataset of Diverse Text for Language Modeling*. EleutherAI. <https://arxiv.org/abs/2101.00027> Gertler, A., Aird, M., [Leo], & [Pablo]. (2021). *Credal Resilience*. Effective Altruism Forum. <https://forum.effectivealtruism.org/topics/credal-resilience> Gong, N. (2021). *Model Stealing Attacks*. Duke University. <https://people.duke.edu/~zg70/courses/AML/Lecture14.pdf> Gwern.net. (2020). *The Scaling Hypothesis*. <https://www.gwern.net/Scaling-hypothesis> H., D. (2020). *How Much Did AlphaGo Zero Cost?*. Dansplaining. <https://www.yuzeh.com/data/agz-cost.html> Hao, K. (2020). *The Messy, Secretive Reality Behind OpenAI’s Bid to Save the World*. MIT Technology Review. <https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/> Hernandez, D., & Brown, T. (2020). *AI and Efficiency*. OpenAI. <https://openai.com/blog/ai-and-efficiency/> Hernandez, D., Brown, T., Conerly, T., DasSarma, N., Drain, D., El-Showk, S., Elhage, N., Hatfield-Dodds, Z., Henighan, T., Hume, T., Johnston, S., Mann, B., Olah, C., Olsson, C., Amodei, D., Joseph, N., Kaplan, J., & McCandlish, S. (2022). *Scaling Laws and Interpretability of Learning from Repeated Data*. Anthropic. <https://arxiv.org/abs/2205.10487> Hobbhahn, M., & Besiroglu, T. (2022). *Trends in GPU Price-Performance*. Epoch. <https://epochai.org/blog/trends-in-gpu-price-performance> Hobson, D. (2022). *A Data Limited Future*. LessWrong. <https://www.lesswrong.com/posts/gqqhYijxcKAtuAFjL/a-data-limited-future> Hoffmann, J., Borgeaud, S., Mensch, A., Buchatskaya, E., Cai, T., Rutherford, E., Casas, D. D. L., [Hendricks, L.A.](https://arxiv.org/search/cs?searchtype=author&query=Hendricks%2C+L+A), W[elbl, J.](https://arxiv.org/search/cs?searchtype=author&query=Welbl%2C+J), [Clark, A.](https://arxiv.org/search/cs?searchtype=author&query=Clark%2C+A), [Hennigan, T.](https://arxiv.org/search/cs?searchtype=author&query=Hennigan%2C+T), [Noland](https://arxiv.org/search/cs?searchtype=author&query=Noland%2C+E), E., [Millican, K.](https://arxiv.org/search/cs?searchtype=author&query=Millican%2C+K), [Driessche, G. V. D.](https://arxiv.org/search/cs?searchtype=author&query=van+den+Driessche%2C+G), [Damoc, B.](https://arxiv.org/search/cs?searchtype=author&query=Damoc%2C+B), [Guy](https://arxiv.org/search/cs?searchtype=author&query=Guy%2C+A), A., [Osindero](https://arxiv.org/search/cs?searchtype=author&query=Osindero%2C+S), S., [Simonyan](https://arxiv.org/search/cs?searchtype=author&query=Simonyan%2C+K), K., [Elsen](https://arxiv.org/search/cs?searchtype=author&query=Elsen%2C+E), E., & Sifre, L. (2022). *Training Compute-Optimal Large Language Models*. DeepMind. <https://arxiv.org/abs/2203.15556> Karnofsky, H. (2016). *Some Background on Our Views Regarding Advanced Artificial Intelligence*. Open Philanthropy. 
<https://www.openphilanthropy.org/research/some-background-on-our-views-regarding-advanced-artificial-intelligence/> Karnofsky, H. (2021). *AI Timelines: Where the Arguments, and the “Experts,” Stand*. Cold Takes. <https://www.cold-takes.com/where-ai-forecasting-stands-today/> Karnofsky, H. (2022). *How Might We Align Transformative AI If It’s Developed Very Soon?*. LessWrong. <https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very> Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J., & Amodei, D. (2020). *Scaling Laws for Neural Language Models*. OpenAI. <https://arxiv.org/abs/2001.08361> Khan, S., Naseer, M., Hayat, M., Zamir, S. W., Khan, F. S., & Shah, M. (2022). Transformers in Vision: A Survey. *ACM Comput. Surv.*, *54*(10s). <https://dl.acm.org/doi/abs/10.1145/3505244> Khrushchev, M. (2022). *Yandex Publishes YaLM 100B. It’s the Largest GPT-Like Neural Network in Open Source*. Yandex. <https://medium.com/yandex/yandex-publishes-yalm-100b-its-the-largest-gpt-like-neural-network-in-open-source-d1df53d0e9a6> Kim, B., Kim, H., Lee, S., Gichang, L., Kwak, D., Jeon, D. H., Park, S., Kim, S., Kim, S., Seo, D., Lee, H., Jeong, M., Lee, S., Kim, M., Ko, S. H., Kim, S., Park, T., Kim, J., … Sung, N. (2021). *What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers*. Naver. <https://arxiv.org/pdf/2109.04650.pdf> Ladish, J., & Heim, L. (2022). *Information Security Considerations for AI and the Long Term Future*. Effective Altruism Forum. <https://forum.effectivealtruism.org/posts/WqQDCCLWbYfFRwubf/information-security-considerations-for-ai-and-the-long-term> Leahy, C. (2022). *Announcing GPT-NeoX-20B*. EleutherAI. <https://blog.eleuther.ai/announcing-20b/> [lennart] (2021). *Compute Governance and Conclusions - Transformative AI and Compute [¾]*. Effective Altruism Forum. <https://forum.effectivealtruism.org/posts/g6cwjcKMZba4RimJk/compute-governance-and-conclusions-transformative-ai-and> Leopold, G. (2019). *AWS to Offer Nvidia’s GPUs for AI Inferencing*. HPC Wire. <https://www.hpcwire.com/2019/03/19/aws-upgrades-its-gpu-backed-ai-inference-platform/> Lepikhin, D., Lee, H., Xu, Y., Chen, D., Firat, O., Huang, Y., Krikun, M., Shazeer, N., & Chen, Z. (2020). *GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding*. Google. <https://arxiv.org/pdf/2006.16668.pdf> Lieber, O., Sharir, O., Lenz, B., & Shoham, Y. (2021). *Jurassic-1: Technical Details and Evaluation*. <https://uploads-ssl.webflow.com/60fd4503684b466578c0d307/61138924626a6981ee09caf6_jurassic_tech_paper.pdf> Liu, J., Shen, D., Zhang, Y., Dolan, B., Carin, L., & Chen, W. (2021). *What Makes Good In-Context Examples for GPT-3?*. Microsoft Dynamics 365 AI. <https://arxiv.org/abs/2101.06804> Lohn, A., & Musser, M. (2022). *AI and Compute: How Much Longer Can Computing Power Drive Artificial Intelligence Progress?*. Center for Security and Emerging Technology. <https://cset.georgetown.edu/publication/ai-and-compute/> Muehlhauser, L. (2019). *What Open Philanthropy Means by “Transformative AI”*. Open Philanthropy. <https://docs.google.com/document/d/15siOkHQAoSBl_Pu85UgEDWfmvXFotzub31ow3A11Xvo/edit> Narayanan, D., Shoeybi, M., Casper, J., LeGresley, P., Patwary, M., Korthikanti, V. A., Vainbrand, D., Kashinkunti, P., Bernauer, J., Catanzaro, B., Phanishayee, A., & Zaharia, M. (2021). 