id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
3ec65a92-28fb-4cae-b9eb-2a41a69f7b53 | trentmkelly/LessWrong-43k | LessWrong | Formalizing the "AI x-risk is unlikely because it is ridiculous" argument
There is a lot of good writing on technical arguments against AI x-risk (such as Where I agree and disagree with Eliezer (which mostly argues for more uncertainty) and others). However, in the wider world the most popular argument is more of the form "it is ridiculous" or "it is sci-fi" or some sort of other gut feeling. In particular, I think this is the only way people achieve extremely low credence on AI doom (low enough that they worry about other disasters instead).
Although this seems like a fallacy, in this post I will attempt to formalize this argument. Not only is it good, but I think it turns out to be extremely strong!
In my judgement, I still find the arguments for x-risk stronger or at least balanced with the "it is ridiculous" argument, but it still deserves serious study. In particular, I think analyzing and critiquing it should become a part of the AI public discourse. For example, I think there are flaws in the argument that, when revealed, would cause people to become more worried about AI x-risk. I do not quite know what these flaws are yet. In any case, I hope this post will allow us to start studying the argument.
The argument is actually a bunch of separate arguments that tend to be lumped together into one "it is ridiculous" argument.
For the purposes of this article, Bob is skeptical of AI x-risk and Alice argues in favor of it.
Existential risk would stop a 12,000 year trend
Forgetting all of human history, Bob first assumes our priors for the long term future are very much human agnostic. The vast majority of outcomes have no humans, but are just arbitrary arrangements of matter (paperclips, diamonds, completely random, etc...). So our argument will need to build a case for the long term future actually being good for humans, despite this prior.
Next, Bob takes into account human history. The total value of the stuff humans consume tends to go up. In particular, it seems to follow a power law, which is a straight line on this log-log |
197ac674-c792-4b16-9b55-4bff8b9fb87e | trentmkelly/LessWrong-43k | LessWrong | Meetup : West LA Meetup 07-26-2011
Discussion article for the meetup : West LA Meetup 07-26-2011
WHEN: 26 July 2011 07:00:00PM (-0700)
WHERE: 10800 West Pico Blvd, Suite 312, Los Angeles, CA 90064
When: 7pm - 9pm July 26th.
Where: The Westside Pavillion - on the bridge, which connects Nordstrom 3rd floor with Barnes & Noble / Landmark Theatres 3rd floor.
Parking is free for 3 hours.
This week is all about evidence. You are encouraged to read Scientific Evidence, Legal Evidence, Rational Evidence. But don't worry if you don't have time; we'll go over it!
Whether you're a regular reader or totally new, here for the theoretical musings or the practical things, come by and say hello! The conversation is largely unstructured, and the people are awesome. There will be snacks.
I will bring a whiteboard with Bayes' Theorem written on it.
See also: West LA Biweekly Meetups
Discussion article for the meetup : West LA Meetup 07-26-2011 |
ce8c51c2-5f2b-468c-bf20-6e8d00a7a493 | StampyAI/alignment-research-dataset/arbital | Arbital | Oracle
Oracles are a subtype of [Genies](https://arbital.com/p/6w) putatively designed to safely answer questions, and *only* to answer questions. Oracles are often assumed to be [Boxed](https://arbital.com/p/6z) and the study of Oracles is sometimes taken to be synonymous with the study of Boxed Oracles. |
7c6bb259-a266-4a53-b492-37881b82d49a | trentmkelly/LessWrong-43k | LessWrong | Open Thread: January 2010
And happy new year to everyone. |
ea967075-af62-44c6-a05a-516cdbfca53d | trentmkelly/LessWrong-43k | LessWrong | The epistemic virtue of scope matching
I keep noticing a particular epistemic pitfall (not exactly a “fallacy”), and a corresponding epistemic virtue that avoids it. I want to call this out and give it a name, or find out what its name is, if it already has one.
The virtue is: identifying the correct scope for a phenomenon you are trying to explain, and checking that the scope of any proposed cause matches the scope of the effect.
Let me illustrate this virtue with some examples of the pitfall that it avoids.
Geography
A common mistake among Americans is to take a statistical trend in the US, such as the decline in violent crime in the 1990s, and then hypothesize a US-specific cause, without checking to see whether other countries show the same trend. (The crime drop was actually seen in many countries. This is a reason, in my opinion, to be skeptical of US-specific factors, such as Roe v. Wade, as a cause.)
Time
Another common mistake is to look only at a short span of time and to miss the longer-term context. To continue the previous example, if you are theorizing about the 1990s crime drop, you should probably know that it was the reversal of an increase in violent crime that started in the 1960s. Further, you should know that the very long-term trend in violent crime is a gradual decrease, with the late 20th century being a temporary reversal. Any theory should fit these facts.
A classic mistake on this axis is attempting to explain a recent phenomenon by a very longstanding cause (or vice versa). For instance, why is pink associated with girls and blue with boys? If your answer has something to do with the timeless, fundamental nature of masculinity or femininity—whoops! It turns out that less than a century ago, the association was often reversed (one article from 1918 wrote that pink was “more decided and stronger” whereas blue was “delicate and dainty”). This points to a something more contingent, a mere cultural convention.
Left: Young Boy with Whip, c. 1840; right: Portrait of a Girl in |
27ea695e-931c-4f4f-aa11-48782c446b99 | trentmkelly/LessWrong-43k | LessWrong | The Dunbar Playbook: A CRM system for your friends
Thanks to Dakota Quackenbush from Authentic Bay Area for an earlier version of this tool.
So far, the main motivation for my work as a community builder and authentic relating facilitator was meeting my own need for connection.
I think that was a mistake.
First, it is difficult to harvest the fruits of community while I’m the one responsible for creating and holding the space. Second, this motivation leads to botched incentives that end up serving neither the cause nor me. After all, the subset of broke EAs and hippies I enjoy spending my time with the most are not in too dire need of my services, nor particularly capable of helping me pay my rent.
In other words, I’ve finally given up on trying to poop where I eat. Instead of building a product for my in-group, I now try to anchor my life in my tribe, and use the energy I get there to build products that serve the outside world and pay my rent.
Wish me luck.
Because my life happens all over the globe and making new friends is more intuitive for me than sustaining long-term relationships, I want to be a bit strategic about building a tribe that keeps me energized.
That’s where the Dunbar Playbook comes into play.
Some theory: Dunbar’s Number
The Dunbar Playbook is named after Dunbar’s Number, the number of people one can maintain personal relationships with.
In his earlier research, anthropologist and evolutionary psychologist Robin Dunbar concluded that that’s about 150 people. Later, he found that there are actually different circles of friendship. Apparently, people have a handful of very close friends, a couple more best friends, and vastly more loose friends and acquaintances. (Who would have thought.)
The Atlantic cites the following layers, with each layer being ~3x the size of the preceding ones:
1.5 people: Intimates
5: Close friends
15: Best friends
50: Good friends
150: Friends
500: Acquaintances
1500: Known names
5000: Known faces
Of course, these are all just rough approximations. Introvert |
10af425b-8ecd-46cd-b066-7404f338d388 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Publication decisions for large language models, and their impacts
*This post is one part of the sequence* [*Understanding the diffusion of large language models*](https://forum.effectivealtruism.org/s/8rYkpiFhbb4HsbzFc)*. As context for this post, I strongly recommend reading at least the* [*5-minute summary*](https://forum.effectivealtruism.org/posts/nc3JFZbqnzWWAPkmz/understanding-the-diffusion-of-large-language-models-summary-1#5_minute_summary) *of the sequence.*
In this post, I:
1. Overview the different forms of information and artifacts that have been published (or not) for the GPT-3-like models that I studied.
2. Estimate some of the impacts of these publication decisions.
3. Assess the different rationales for these publication decisions.
4. Make predictions about how publication decisions and norms will change in the language modeling domain in the future.
Key takeaways
=============
1. My case studies suggest there is still a strong incentive for researchers at industry-leading AI labs to publish details about the training process, model architecture, and hyperparameter values they used. This has significant effects on diffusion. ([more](https://forum.effectivealtruism.org/posts/KkbEfpNkjNepQrj8g/publication-decisions-for-large-language-models-and-their#Algorithmic_details_and_hyperparameters_were_almost_always_openly_published__despite_this_potentially_reducing_the_compute_barrier_significantly))
1. I speculate that the rationale for publishing implementation details is to make research results more credible and easily verifiable. I think this is in turn desired due to (a) existing norms for research transparency and reproducibility, and (b) the incentive to attract top researchers, who tend to benefit from the transparency and reproducibility of the research they were involved in.
2. I estimate (with 90% confidence) that disclosing all hyperparameter values for a GPT-3-like model saves between 1% and 20% of the total compute cost needed to replicate that model.
3. EleutherAI based their choice of hyperparameter values on those of GPT-3. This seemed to shorten the duration of the GPT-NeoX-20B project—my best guess is a shortening of two weeks (90% CI: zero to six weeks).
2. Compared to the implementation details, the code, datasets, and trained models from research were published much less or much later, and seemed to have smaller effects on diffusion even when published. These effects were significant, but I think that improving the publication practices of top developers (who are often the first to make advances in implementation) is a higher priority than preventing artifacts from being openly published after a long delay (more than 1 year). ([more](https://forum.effectivealtruism.org/posts/KkbEfpNkjNepQrj8g/publication-decisions-for-large-language-models-and-their#Training_code__datasets__and_models_were_mostly_kept_closed__but_when_published__they_seem_to_have_less_impact_on_diffusion_than_publishing_algorithm_implementation_details))
1. I estimate that if code for training GPT-3 had been openly published beforehand, the production of EleutherAI’s GPT-NeoX-20B would have been sped up by two months (90% CI: one to six months), and Meta AI’s OPT-175B would have been sped up by two weeks (90% CI: one to six weeks). The difference is due to the gap between these actors’ talent resources (both experience and quantity).
2. I estimate that the lack of access to large enough datasets, and code to process raw data into data suitable for training, delayed GPT-NeoX-20B by four weeks (90% CI: one to ten weeks). This is less than the delay due to code, because collecting and processing data seems easier than implementing efficient and bug-free training algorithms.
3. Openly publishing the weights of a pretrained language model accelerates progress in that model’s performance on downstream tasks. This is because releasing the weights allows more people to experiment with different prompts and fine-tuning processes. However, openly publishing model weights probably did not significantly accelerate development of new, more advanced models in my case studies. Openly published GPT-3-like models arrived more than two years after GPT-3 was publicized. So top language model developers such as Google and DeepMind, who have developed models better than GPT-3 in the meantime, didn’t seem to benefit from this delayed open publication. Even if open publication came much sooner, I think that training new models would be more bottlenecked by access to compute and talent, not by knowing the weights of existing models.
3. What is happening to publication strategies? ([more](https://forum.effectivealtruism.org/posts/KkbEfpNkjNepQrj8g/publication-decisions-for-large-language-models-and-their#Rationales_for_release_strategies_vary__and_include_commercial_incentives__misuse_concerns__and_a_reactive_push_for_openness))
1. I’m 70% confident that, in each year from next year until transformative AI is developed, publication decisions by the top three language model developers (whoever they are, but currently Google, DeepMind, and OpenAI) will be more closed on average than they are now. This is due to the incentives for closedness prevailing as capabilities improve. The incentives for closedness that I have considered are (a) maintaining an advantage in capabilities, (b) protecting commercial IP, (c) avoiding regulation by reducing attention on capabilities, (d) reducing both societal and public relations risk from misuse of diffused artifacts.
2. A key question is Google’s publication strategy going forward, as their decisions have varied a lot even in the past three years. But I’m still 65% confident that Google will openly publish fewer of their best-performing language models in the future than they do today.
3. At the same time as actors like DeepMind and OpenAI adopt quite a closed publication strategy for GPT-3-like models, there is a *reactive* push for greater openness. This push is characterized by EleutherAI with GPT-NeoX-20B, Meta AI with OPT-175B, and BigScience with BLOOM.
1. I expect this reactive diffusion dynamic will essentially continue for at least the next five years (with five years being a somewhat arbitrary time period). After five years, I think it’s more than 50% likely that this source of diffusion will be substantially reduced due to the occurrence of one of the following:
1. The front-running actors will be so far ahead, and so incentivised not to publish, that no one else is able to diffuse their capabilities via replication.
2. Even actors that keep up and diffuse models on purpose will decide that open publication of models has become too risky, given how powerful the state-of-the-art has become.
Algorithmic details and hyperparameters were almost always openly published, despite this potentially reducing the compute barrier significantly
================================================================================================================================================

*Table 1: Degree to which algorithmic details and hyperparameter values were disclosed for models in my case studies. See the diffusion* [*database*](https://docs.google.com/spreadsheets/d/1mYWA2V2aGe4PMRYfF2rwJ4sVcBteUzoyGl26S7LyM2s/edit?usp=sharing)*.*
1. **Based on the data in the table above, there remains a strong incentive for researchers to specify details about the training process, model architecture,**[[1]](#fnjiucsp1usib)**and hyperparameter values.**[[2]](#fnnludvgfoen) **This is even true of industry-leading AI labs, despite their incentive to protect commercial IP.**
1. I think that the disclosure of hyperparameters has a significant impact on diffusion, because finding good hyperparameter values can carry a computational cost that is a significant fraction of the total compute cost for a project. **I estimate that disclosing all hyperparameter values for a GPT-3-like model saves 8% (90% CI: 1% to 20%) of the total compute cost needed to replicate that model.**This means that in the case of GPT-3, I estimate the dollar cost saving (via reducing compute cost) to be $440K (90% CI: 68K to $1.6M). See [appendix](https://docs.google.com/document/d/1NSBL4jAfGO3kBXF90b52o8jat4vBL1uOgtPVi7j22e4/edit#heading=h.are1gnfeopcx) for details.
1. Evidence of this cost saving in practice comes from EleutherAI, with the GPT-NeoX-20B model. They reportedly based their choice of hyperparameter values on those of GPT-3 “due to the intractability of performing a hyperparameter sweep for a 20-billion-parameter model.”[[3]](#fnd5lp206bm0q) My best guess is that this decision shortened their project by two weeks (90% CI: zero to six weeks).
2. **Why is there such a strong tendency for publishing implementation details, even for industry labs? My best guess is that it’s mainly a combination of the following:**
1. **There is an incentive to follow existing norms for research transparency and reproducibility.**
2. **Transparency and reproducibility make the results more credible and easily verifiable to others, which in turn attracts more top research talent**.
1. Top AI researchers can more easily judge how impressive and interesting the research is when the implementation details are clear.
2. I think these researchers also usually prefer their own work to be reproducible and credible to the wider AI research community, because those characteristics are a source of prestige in addition to merely showing impressive results, and researchers generally value prestige. As such, I think that researchers will be more inclined to work at a lab the more they expect their own work there would be transparent and reproducible, and that they’ll assess that partly based on the lab’s track record of making its work transparent and reproducible.
Training code, datasets, and models were mostly kept closed, but when published, they seem to have less impact on diffusion than publishing algorithm/implementation details
============================================================================================================================================================================

*Table 2: Degree to which training code, training datasets, and trained models were released for models in my case studies. See the diffusion* [*database*](https://docs.google.com/spreadsheets/d/1mYWA2V2aGe4PMRYfF2rwJ4sVcBteUzoyGl26S7LyM2s/edit?usp=sharing)*.*
Observations
------------
1. **Compared to implementation details, code, datasets, and trained models are published much less in the cases that I studied.** Referring to this [section](https://docs.google.com/spreadsheets/d/1mYWA2V2aGe4PMRYfF2rwJ4sVcBteUzoyGl26S7LyM2s/edit#gid=1986563325&range=W1:AF1) of the diffusion database, up until OPT 175B’s release in May 2022, I’m not aware of any GPT-3-like model that allowed direct access to trained model weights outside of the organization that developed the model. Five of the nine models I studied did not release the source code for training. And of the nine models I studied, only GPT-NeoX-20B fully released[[4]](#fndz3lqia3pig) the training dataset.
2. **The**[***structured access***](https://arxiv.org/abs/2201.05159) **approach adopted by OpenAI (in the form of the OpenAI API) has been emulated by other actors.**[[5]](#fn74j4wvrz2cx)**This includes an actor in China.**
1. Specifically, the structured access approach has been emulated by Inspur with [Yuan 1.0](https://bytemeta.vip/index.php/repo/Shawn-Inspur/Yuan-1.0),[[6]](#fnw7bjale2uk) and AI21 Labs with Jurassic-1-Jumbo (via [AI21 Studio](https://studio.ai21.com/)).
Counterfactuals & predictions
-----------------------------
I’m highly uncertain about the effects of these individual publication decisions, but these are my best guesses:
1. **I surmise that open publication of the code for training GPT-3 would have sped up the production of GPT-NeoX-20B by two months (90% CI: one to six months).**
1. This is based on the GPT-NeoX GitHub repository starting in December 2020, and being worked on regularly for more than one year until the release of GPT-NeoX-20B in February 2022.
2. Of course, there are many specific details of GPT-NeoX-20B that an existing openly published implementation would not have had. Among other things, they had to tailor their implementation to specific hardware. A researcher who has trained large language models at an AI safety lab told me: “Every company has somewhat different hardware setups/constraints, and it’s pretty important to be able to design your training around those constraints.”
3. But given EleutherAI’s goal was to replicate GPT-3, I think that having the actual source code for GPT-3 (or similar) available would have saved a significant fraction of their total time spent on implementation (i.e., more than one month).
4. Caveat: most contributors to GPT-NeoX-20B were far less than full-time, which makes these time estimates longer than would be the case for projects with full-time contributors.
2. Given that Meta AI is part of a large tech company, they probably were not as reliant as EleutherAI on existing openly published code to train OPT-175B. But again, given that their goal was to replicate GPT-3, **I expect openly published code could have sped up Meta AI’s OPT project by two weeks (90% CI: one to six weeks)** given that their [logbook](https://github.com/facebookresearch/metaseq/blob/main/projects/OPT/chronicles/OPT175B_Logbook.pdf) spans about 11 weeks.
3. I expect that the openly published training code for GPT-NeoX-20B, OPT-175B, and BLOOM will help many academic and independent researchers with training large language models. But **I’m 90% confident that delayed open publication of training code speeds up future large language model releases by the top language model developers (Google, DeepMind, OpenAI, Microsoft, Anthropic) by no more than one month. The decision to openly publish code for a*****new state-of-the-art*** **result is much more important than code to train an existing model that gets published one year or more after the model.** This is because those top developers already had their own private codebases that serve a similar purpose to the openly published implementations. If code was openly published after GPT-3 that implemented major efficiency improvements that were novel even to a top developer, then that would be significant. But in my case studies, I haven’t seen strong enough evidence that openly published implementations had a significant impact on the speed of top developers.
4. **My best guess is that the lack of access to large enough datasets, and code to process raw data into data suitable for training, delayed GPT-NeoX-20B by four weeks (90% CI: one to ten weeks)**.
1. The best evidence I have for this is EleutherAI’s work on The Pile ([Gao et al., 2020](https://arxiv.org/pdf/2101.00027.pdf)). Data deduplication alone took “several days,” which suggests to me that the full process of producing the dataset took at least one order of magnitude more time.[[7]](#fnwhhzek2kkz) However, the cost of producing a dataset like this seems to be orders of magnitude lower than the compute requirements—EleutherAI created The Pile without any funding or sponsorship, unlike the training of GPT-NeoX-20B.[[8]](#fncl1dk11xk5i)
2. **EleutherAI’s own dataset, The Pile, then subsequently diffused to Meta AI, as a subset of the Pile formed a subset of the training dataset for OPT-175B.**[[9]](#fn9m67ekq384g)So the open publication of The Pile may have sped up OPT-175B by more than one week. But this is not completely clear—other datasets were used in the full training dataset, and sometimes using open-source software or data unexpectedly causes more trouble than it’s worth.
5. **I’d guess that the wider access to a GPT-3-like model offered by OPT-175B, and the open publication of BLOOM, will counterfactually speed up language model performance on*****downstream*** **tasks by 0.5 months (90% CI: zero to two months). In contrast, I think wide access to these models will*****not*** **significantly affect progress on more foundational capabilities (e.g., few-shot learning) that improve via pre-training with more compute and data.**
1. I think that the wide access to these models, and the cheapness of running them compared to training them (see this [section](https://forum.effectivealtruism.org/posts/foptmf8C25TzJuit6/gpt-3-like-models-are-now-much-easier-to-access-and-deploy#What_resources_are_required_to_actually_use_GPT_3_like_models_) of a previous post), will enable at least twice the output of relevant research than before.[[10]](#fnyf373pbmf6) Every trained model has its own behavioral quirks, and probing a new model to understand these quirks could provide new insight. Things like prompt engineering can improve a lot by testing the model on different prompts ([Liu et al., 2021](https://arxiv.org/abs/2101.06804)). Some actors could also afford to try different fine-tuning datasets and techniques, since this costs less than training the original model from scratch.
2. One reason that the counterfactual impact is not larger is because wide access has been (or will be) granted to other models soon after the release of OPT-175B and BLOOM (this already happened with [GLM-130B](http://keg.cs.tsinghua.edu.cn/glm-130b/posts/glm-130b/) in August 2022).
3. Another reason the counterfactual impact is not larger is because model APIs already seem to serve a large fraction of use-cases for researching model behavior. GPT-3-like models don’t seem radically different from each other, so researching BLOOM (for instance) is not likely to provide a big new insight compared to researching GPT-3 using the OpenAI API.
* Still, direct access to model weights removes restrictions on the model’s rate of output and on the way that the model can be fine-tuned. (As an aside, direct access also enables some forms of interpretability research.)
4. I expect that the pace of progress in foundational capabilities afforded by pretraining on more compute and data will not be significantly affected by these openly published models. This is because researching the behavior of these models does not seem relevant to the challenges of scaling up compute and data. Scaling up compute and data mostly requires engineering talent, access to compute, and access to data. And I think that scaling up compute and data will most likely be the key drivers of future improvements to pre-trained language models (see this [section](https://forum.effectivealtruism.org/posts/suBJdDkEu9EaSmTxJ/implications-of-large-language-model-diffusion-for-ai#Compute_and_data_will_likely_remain_the_largest_sources_of_improvement_for_pretrained_language_models) of a later post).
Rationales for release strategies vary, and include commercial incentives, misuse concerns, and a reactive push for openness
============================================================================================================================

*Table 3: Summary of rationale for release strategies for different models. Click through to the diffusion* [*database*](https://docs.google.com/spreadsheets/d/1mYWA2V2aGe4PMRYfF2rwJ4sVcBteUzoyGl26S7LyM2s/edit?usp=sharing) *for sources and reasoning (which is the basis for my claims in the text that follows, unless stated otherwise). For context, it’s helpful to review the information on the release strategies themselves in the*[*previous section*](https://forum.effectivealtruism.org/posts/KkbEfpNkjNepQrj8g/publication-decisions-for-large-language-models-and-their#Training_code__datasets__and_models_were_mostly_kept_closed__but_when_published__they_seem_to_have_less_impact_on_diffusion_than_publishing_algorithm_implementation_details) *(or just view all the columns side-by-side in the database).*
Observations
------------
In the cases where models were not released, the apparent incentives included protection of commercial IP, as well as concerns about the misuse and harm that could be driven by diffusion. On the misuse side, **it is notable that three of the leading AI developers (if not*****the*** **three leaders)—Google, DeepMind, and OpenAI—have all based publication decisions partly on misuse concerns.**
**A key question is Google’s publication strategy going forward. I’m more uncertain about this than for DeepMind and OpenAI.** To my knowledge, DeepMind has a comparatively strong history of closed publication (there are notable exceptions, such as [AlphaFold](https://github.com/deepmind/alphafold/), but that was openly published years after the system was first publicized). OpenAI was historically more open, but has made a clear turn to closedness in recent years with GPT-2, GPT-3, and [DALL-E](https://openai.com/blog/dall-e/). Meanwhile, the teams working on AI at Google seem to be more mixed in recent years:
1. Google’s pretrained 11-billion-parameter [T5](https://arxiv.org/abs/1910.10683) model was [openly published](https://console.cloud.google.com/storage/browser/t5-data/pretrained_models/11B;tab=objects?pli=1&prefix=&forceOnObjectsSortingFiltering=false) in October 2019 (as GPT-2’s prominent “[staged release](https://openai.com/blog/gpt-2-6-month-follow-up/)” strategy was ongoing) ([Raffel et al., 2019](https://arxiv.org/abs/1910.10683); [Clark et al., 2019](https://openai.com/blog/gpt-2-6-month-follow-up/)).[[11]](#fn3ybh8b3wkd2)
2. To my knowledge, October 2020’s G-Shard model has not been openly published ([Lepikhin et al., 2020](https://arxiv.org/pdf/2006.16668.pdf)).
3. The Switch Transformer model, published about in January 2021 ([Fedus et al., 2021](https://arxiv.org/abs/2101.03961)), was [openly published](https://github.com/google-research/t5x/blob/d34fff62ddc71c86b680300f8dca198f9db7b246/docs/models.md#converted-mesh-tensorflow-checkpoints).
4. Some code for September 2021’s FLAN model was [openly published](https://github.com/google-research/FLAN/tree/3770e8379f1766eab4e2875ea1a829a074389b59), but not the trained model itself, as far as I know.
5. The LaMDA model, publicized in January 2022, has not been released ([Thoppilan et al., 2022](https://arxiv.org/abs/2201.08239)).
6. The PaLM 540B model, publicized in April 2022, has not been released ([Chowdhery et al., 2022](https://arxiv.org/abs/2204.02311)).
When I asked Iulia Turc (former Software Engineer at Google Research) about the rationale for not releasing model weights, data, code, etc. for the language models they knew about at Google, they told me: "Ethical concerns were the single most important reason (that's why LaMDA hasn't been released yet). I know this can sometimes sound like PR, but I do think that everyone in my management chain was genuinely worried [about] generative models falling into the wrong hands. Google is very skeptical of including generative models even in its own internal products."
**At the same time as actors like DeepMind and OpenAI are adopting a more closed publication strategy for GPT-3-like models, there is a*****reactive*** **push for greater openness**.
1. This push is characterized by EleutherAI with GPT-NeoX-20B, Meta AI with OPT-175B, and BigScience with BLOOM. In their publications about these models, these actors emphasized the value of providing wider access to large models, especially to academic or independent AI researchers, for the sake of making research more responsible. In EleutherAI’s case, there was also an apparent interest in accelerating empirical AI safety research on the largest language models.
2. The push for openness is in explicit reaction to the closed publication strategy of other actors. For example, the OPT paper says “[The] restricted access has limited researchers’ ability to study how and why these large language models work, hindering progress on improving known challenges in areas such as robustness, bias, and toxicity.” The BLOOM announcement blog post says “academia, nonprofits and smaller companies' research labs find it difficult to create, study, or even use LLMs as only a few industrial labs with the necessary resources and exclusive rights can fully access them. Today, we release BLOOM […] to change this status quo” ([BigScience, 2022](https://bigscience.huggingface.co/blog/bloom)).
In between the concerns about misuse and the push for openness, are middling cases like Jurassic-1-Jumbo. AI21 Labs did not openly publish the Jurassic-1-Jumbo code or trained model weights. However, their stated rationale for making [a model API](https://studio.ai21.com/) available was along the lines of accelerating AI development and democratizing access to AI technology. The API release could be mostly explained by commercial interest, but that may not fully explain the fact that the API is free to use at an entry level for anyone who signs up.[[12]](#fn1wq6nzuncbf)
**A general point to emphasize here is that** **publication strategies vary a lot, and that’s because publication incentives vary a lot.** There are several different incentives that cumulatively influence publication decisions:
1. An incentive in favor of publication is that credible demonstrations of capability attract talent, funding, or users. Other incentives are prestige, the desire to get extra benefits once one has paid the cost of doing the research, the default norm for openness, the ability to influence AI development (in place of someone else who would publish next), and the societal benefits of the work.
2. Incentives against publication are the desire to protect IP to preserve a commercial or research advantage, avoiding regulation by reducing attention on capabilities, wariness of societal harms from use of the model, and the norm to not publish artifacts/work that potentially make development of further AI advances easier (given those advances may be risky).
As an example of how these incentives lead to specific publication decisions for specific actors, consider NVIDIA’s Megatron model (which was not part of my case studies). NVIDIA openly published the code for the Megatron model but did not release the trained model weights. As noted in [Shelvane (2022, p. 42)](https://www.governance.ai/research-paper/the-artefacts-of-intelligence-governing-scientists-contribution-to-ai-proliferation), releasing the code is in NVIDIA’s commercial interest: the code helps and encourages people to train language models on NVIDIA hardware. In contrast, releasing the model does not seem to benefit NVIDIA, and may have even harmed their commercial interests by saving some people the trouble of training their own model.
For another example, AI21 Labs’ business is in providing trained AI models as a service. So it seems best for AI21 Labs not to openly publish their models, but it’s in their interest to attract customers by making their model API free to use at the entry level, and discover improvements and new applications by seeing how the API is used.
Predictions
-----------
1. **I’m 65% confident that in the future, Google’s best-performing language models will get less publication on average.**To clarify, I’m talking about the trained models themselves, not research papers.My confidence is above 50% due to (a) the quote from Iulia Turc in the previous [section](https://forum.effectivealtruism.org/posts/KkbEfpNkjNepQrj8g/publication-decisions-for-large-language-models-and-their#Observations1), (b) the apparent commercial incentives for protecting IP becoming stronger as models become more expensive and powerful, and (c) the most recent models, PaLM and LaMDA, not being released. However, the mixed historical data in the previous section makes me quite uncertain, and thus my confidence is only somewhat above 50%.
2. **I expect that a reactive openness dynamic like we are seeing today in the language modeling domain will continue for at least the next five years.**(The five-year window is somewhat arbitrary—I chose it for the sake of concreteness.) This in turn is based on the following predictions:
1. [70% confidence] There will remain at least one AI developer that keeps up with the state-of-the-art and is motivated to diffuse capabilities on purpose over the next five years. “Keeping up with” means being able to match state-of-the-art language modeling results within one year. Meta AI is one such potential actor, though they took about two years in the case of matching GPT-3.
2. [80% confidence] There will continue to be a set of lower-resourced actors that are frustrated by their lack of access to state-of-the-art models. This frustration will create enough demand that, if claim (a) is also true, a higher-resourced actor would decide to be generous to them or become convinced that their demands are valid and it's right to meet them.
1. I expect this frustration because I expect there will continue to be a large gap between the front-running AI developers and other actors. I quantify the gap as a lead of more than two years on the ability to produce the state-of-the-art model at a given time. I’m currently agnostic about the exact sets of actors that are divided by this gap over time, because that seems more difficult to predict.
3. **But I think it’s more than 50% likely that*****after*** **five years, either:**
1. **The front-running actors will be so far ahead and so strongly incentivised not to publish that no one else will be able to diffuse their capabilities via replication. The incentives will exist for similar reasons to today’s, which were discussed earlier.**
2. **Even the actors that keep up and were diffusing models on purpose will decide that open publication of models has become too risky, given how powerful the state-of-the-art models have become.**
4. As a corollary of the reasoning in point 1 and point 3: **I’m 70% confident that publication decisions by the top three language model developers will become more closed on average, from now on**.[[13]](#fn5zklcmyxra)
1. There are potential self-interested reasons for greater openness of publication—namely, to improve one’s reputation, and to allow new insights to be found independently and shared back to the original publisher. But I think that the value of maintaining an advantage in capabilities, and of protecting commercial IP (in the case of industry labs, which currently lead the state-of-the-art), will prevail over those incentives for openness.
Appendix: estimating cost saved on replication by knowing hyperparameter values
===============================================================================
1. How does hyperparameter selection work for GPT-3-like models? Based on that, how much of the total compute cost to produce a GPT-3-like model is for hyperparameter selection? My best guess is 8% (90% CI: 1% to 20%) of the total compute cost.
1. A common method to find good hyperparameter values is to run multiple trials of training the model.[[14]](#fnjo2fh1hmyt) Each of these trials may have a much smaller number of examples than the final training run, to save computation. Based on how the model performed in the trials, the model developer chooses the hyperparameter values that are expected to give the best performance.
2. My impression is that the machine learning research community has by now gathered many heuristics for choosing good hyperparameter values. For example, the batch size and learning rate values used for GPT-3 were informed by previously obtained empirical data and empirical relationships.[[15]](#fnab4w5xo1fp9) These heuristics can greatly reduce computational cost compared to running large searches over possible hyperparameter values.
3. But even with heuristics and prior knowledge about good hyperparameter values, my impression is that some expensive tests and searches are typically required. For example, several different learning rate values may need to be tested on a significant fraction of a full training run to be confident enough that the final training run will succeed.
4. A researcher who has trained large language models at an industry AI lab told me: “[The] principal way hyperparams get tuned for large models is by running many experiments on smaller models and looking for a pattern. [The] total cost of the entire process is maybe 10%? Definitely more than 1%, definitely less than 30%.”
5. After weighing up the above considerations, evidence from one expert, and my own intuition, my best guess is that the compute cost of finding good hyperparameter values for a new GPT-3-like model would make up 16% (90% CI: 3% to 40%) of the total compute cost.[[16]](#fnykv6uec7xni)
2. How much of the total compute cost of replicating GPT-3 was *counterfactually* saved by the hyperparameter values being published (as well as parts of the hyperparameter selection process)? My best guess is 8% (90% CI: 1% to 20%) of the cost.
1. If GPT-3 hyperparameter values were not published, then other published hyperparameters for similar models probably would have sped up the search.
2. However, other hyperparameter settings would probably be further from optimal. Training dynamics can be quite sensitive to hyperparameters.[[17]](#fnls9lrrq7lxk) For example, the learning rate could be too large and cause training to become unstable. So even with the next-best hyperparameter settings available, some experimentation would probably be warranted for some hyperparameters such as the learning rate and batch size.
3. To account for the counterfactual, I halve my 90% CI to span from 1% to 20% of the total compute cost of replication.
4. In a [previous post](https://forum.effectivealtruism.org/posts/FANYsqzPM9Yht3KM2/the-replication-and-emulation-of-gpt-3#The_resources_required_to_produce_and_replicate_GPT_3), I estimated the total compute cost of OPT-175B (which replicated GPT-3) at $6M (90% CI: $4M–$10M).
5. So in dollars, I estimate that hyperparameter tuning for replicating GPT-3 would cost $440K (90% CI: 68K to $1.6M). See this [Guesstimate model](https://www.getguesstimate.com/models/21230?token=26Nr2OGPCy08Iq2BzknXWHhOoBWugGMtsT0vPGwm4Gsnv3YTuCm9sIDpYYw3WA6BPh-KHb99g4SqF0vIhLzIog) for the calculation.
Acknowledgements
================

*This research is a project of*[*Rethink Priorities*](http://rethinkpriorities.org/)*. It was written by Ben Cottier. Thanks to Alexis Carlier, Amanda El-Dakhakhni, Ashwin Acharya, Ben Snodin, Bill Anderson-Samways, Erich Grunewald, Jack Clark, Jaime Sevilla, Jenny Xiao, Lennart Heim, Lewis Ho, Lucy Lim, Luke Muehlhauser, Markus Anderljung, Max Räuker, Micah Musser, Michael Aird, Miles Brundage, Oliver Guest, Onni Arne, Patrick Levermore, Peter Wildeford, Remco Zwetsloot, Renan Araújo, Shaun Ee, Tamay Besiroglu, and Toby Shevlane for helpful feedback. If you like our work, please consider*[*subscribing to our newsletter*](https://www.rethinkpriorities.org/newsletter)*. You can explore our completed public work*[*here*](https://www.rethinkpriorities.org/research)*.*
1. **[^](#fnrefjiucsp1usib)**Machine learning models can be thought of as a mathematical function. The parts of the function that are learned during training are the weights (AKA parameters). The parts of the function that remain fixed constitute the model architecture. For example, in Transformer models, the self-attention operation is part of the model architecture, but the weights used in that operation are not. See [Bloem (2019)](http://peterbloem.nl/blog/transformers) for more information about the Transformer architecture.
2. **[^](#fnrefnludvgfoen)**[Hyperparameters](https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)) are parameters that control how a model learns. For example, the [learning rate](https://en.wikipedia.org/wiki/Learning_rate) hyperparameter controls the magnitude of changes in weight values at each training step, and affects the stability and speed of training. There are hyperparameters associated with the training algorithm (such as the learning rate), and hyperparameters associated with the model architecture (such as the number of layers).
3. **[^](#fnrefd5lp206bm0q)**See [Black et al. (2022, p. 3).](https://arxiv.org/abs/2204.06745) Note that GPT-NeoX-20B does not meet my definition of a GPT-3-like model, but I think the point about saving cost is still applicable to larger models.
4. **[^](#fnrefdz3lqia3pig)**The datasets for OPT-175B and BLOOM draw from other datasets which are publicly available, but the final full dataset has not been made available for either model.
5. **[^](#fnref74j4wvrz2cx)**See [Shelvane (2022)](https://arxiv.org/abs/2201.05159). From the Abstract: “Structured access [is where] instead of openly disseminating AI systems, developers facilitate controlled, arm's length interactions with their AI systems.”
6. **[^](#fnrefw7bjale2uk)**Credit to Jeffrey Ding and Jenny Xiao for finding this case
7. **[^](#fnrefwhhzek2kkz)**See [Gao et al. (2020, p. 27)](https://arxiv.org/pdf/2101.00027.pdf)—at the end of the “Deduplication” section it says “In the end, we ran in-memory LSH on a machine with enough RAM for Common Crawl, taking several days.”
8. **[^](#fnrefcl1dk11xk5i)**See [Biderman et al. (2022, p. 6)](https://arxiv.org/pdf/2201.07311.pdf)—“This dataset was created by individuals working on their own time without funding.”
9. **[^](#fnref9m67ekq384g)**See [Biderman et al. (2022, p. 2)](https://arxiv.org/pdf/2201.07311.pdf): “The pre-training corpus contains a concatenation of datasets used in RoBERTa (Liu et al., 2019b), the Pile (Gao et al., 2021a), and PushShift.io Reddit (Baumgartner et al., 2020; Roller et al., 2021).”
10. **[^](#fnrefyf373pbmf6)**This is just a claim about research quantity rather than quality or usefulness.
11. **[^](#fnref3ybh8b3wkd2)**See [Raffel et al. (2019)](https://arxiv.org/abs/1910.10683): “To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.”
12. **[^](#fnref1wq6nzuncbf)**As an aside, [Shelvane (2022, p. 42)](https://www.governance.ai/research-paper/the-artefacts-of-intelligence-governing-scientists-contribution-to-ai-proliferation) noted that AI21 Labs previously “trained a GPT-2-like model and bowed to OpenAI’s [GPT-2] release schedule, even though they did not agree that the risks of GPT-2 warranted this. This was out of ‘respect for our colleagues who have thought hard about these issues.’” AI21 Labs has since collaborated with OpenAI on ["Best Practices for Deploying Language Models."](https://perma.cc/DQ6N-ACYZ)
13. **[^](#fnref5zklcmyxra)**The choice of top three is a somewhat arbitrary number for the sake of concreteness—I’m not sure what the best threshold is, but three seems like a reasonable number to hedge between one actor getting far ahead vs. many actors keeping up with each other.
14. **[^](#fnrefjo2fh1hmyt)**See e.g. <https://en.wikipedia.org/wiki/Hyperparameter_optimization#Grid_search>
15. **[^](#fnrefab4w5xo1fp9)**See [Brown et al. (2020, p. 9)](https://arxiv.org/pdf/2005.14165.pdf): “As found in [KMH+20, MKAT18], larger models can typically use a larger batch size, but require a smaller learning rate. We measure the gradient noise scale during training and use it to guide our choice of batch size [MKAT18].”
16. **[^](#fnrefykv6uec7xni)**I started with a prior based on my intuition of 35% (90% CI: 10% to 80%). I interpreted the expert’s view as 10% (90% CI: 1% to 30%). I then weighted the expert’s view to my own at 3:1 (taking a weighted average of the central estimate and each bound individually).
17. **[^](#fnrefls9lrrq7lxk)**This is based on my own experience with machine learning, and reading various machine learning papers. |
114f4e3b-b5a3-4091-8127-e1febe581530 | trentmkelly/LessWrong-43k | LessWrong | Writing On The Pareto Frontier
I have a personal rule: don’t write something which someone else has already written better.
This is easier than it sounds. For instance, suppose I’m writing an intro to systems biology. I don’t need it to be the most comprehensive intro ever written, or the most accessible intro. I just need it to be good enough on each of these axes that no other work is better on both at once.
In other words, I try to always write things which are the Pareto Best In The World.
Of course this generalizes to more dimensions as well: I might also care about writing something rigorous, or communicating intuitions, or making the piece enjoyable to read, or including good visualizations, or …. I don’t need to write the best piece in the world along any particular dimension. I just need to write a piece good enough on enough dimensions that nothing else beats it on every dimension which I care about.
Ways To Be On The Pareto Frontier
Novel Ideas
One natural way to be on the Pareto frontier is to write about a new idea, or at least an idea unusual enough that few people have written about it.
As with writing, new ideas are not necessarily that difficult to find. The trick is to combine things: finding a novel and interesting idea in complexity theory is a lot harder than finding a novel and interesting application of complexity theory to caricature art. On the LessWrong frontpage right now, there’s an interesting post about applying the idea of film study (i.e. athletes watching films of their games) to become better at research. I’ve been writing a whole series of posts intended to practice applying frames from math in unusual ways, and some interesting novel ideas have already come out of them - e.g. optimizing multiple imperfect filters.
Distillation
Research goes through a pipeline. First, researchers write papers, packed with jargon and assuming lots of background context. As results come together, they get boiled down into overview papers. Then come textbooks and programm |
c4385a9f-2460-492e-af4d-fed34bc0e37a | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Quantilizers and Generative Models
*Thanks to Evan Hubinger for discussions about quantilizers, and to James Lucassen for discussions about conditioned generative models. Many of these ideas are discussed in Jessica Taylor's* [Quantilizers: A Safer Alternative to Maximizers for Limited Optimization](https://intelligence.org/files/QuantilizersSaferAlternative.pdf)*: this post just expands on a particular thread of ideas in that paper. Throughout I’ll refer to sections of the paper. I have some remaining confusion about the “targeted impact” section, and would appreciate clarifications/corrections!*
Abstract
========
This post explores the relationship between quantilizers and generative models. My main takeaways are:
1. A natural way to build a quantilizer is by sampling from an appropriately-conditioned generative model.
2. Unfortunately quantilizing doesn’t seem to confer much advantage over the underlying generative model: to the extent that a quantilizer is more powerful than a generative model, it’s more dangerous, and vice versa.
3. Quantilizing is pretty computationally expensive relative to the advantage it brings, making it unclear if this is a competitive approach even if it conferred a net safety advantage at fixed power.
Definitions
===========
I’ll follow the setup in “[Quantilizers: A Safer Alternative to Maximizers for Limited Optimization](https://intelligence.org/files/QuantilizersSaferAlternative.pdf)” (Section 1, Definition 1) and say that a quantilizer is a model with:
1. A *base distribution* γ.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
, which describes a set of outputs we think are “normal” or “unlikely to lead to doom”.
2. A quantile q∈[0,1].
3. An ordering over outputs (e.g. a utility or loss function).
A quantilizer returns a random output from the top q fraction of outputs from γ according to the ordering. So for instance a 0-quantilizer is a utility maximizer, a 0.1-quantilizer returns outputs in the best 10%, etc.
Where does the base distribution come from?
===========================================
One challenge with quantilizers is that they require a base distribution γ that we trust. That is, we want to be able to sample from γ many times and never hit bad outputs. Unfortunately, we don’t know how to explicitly write this distribution down.
A natural alternative to explicit construction is to train a generative model on the distribution of actions a humans takes (Section 3.1). Then for any given task γ can be written as
γtask=P(action|good human attempts task)This seems quite doable. Train GPT-N on text written by humans, then prompt it with the task:
> Prompt: You are a good person who wants to accomplish <task>. What follows is your description of how you intend to do this.
>
>
The output will probably be text describing how a good person would accomplish that task!
Unfortunately we might get unlucky and sample a person who isn’t so good (but thinks of themselves as good). That still satisfies the prompt. Robustly getting the agent we want [seems hard](https://www.alignmentforum.org/posts/nXeLPcT9uhfG3TMPS/conditioning-generative-models), but might be possible in the limit where we ask for agents that are generally plausible in the training distribution of the generative model.
Power vs Safety
===============
Armed with our base distribution, we can start quantilizing. Naively we can do this by sampling q−1 times from γtask, then returning the top output. This produces a tradeoff between capabilities and safety.
Single Sampler
--------------
In the limit q→1 we’re just returning actions a good human would take. That is, our quantilizer just samples once and returns that sample as output, so it’s no more powerful than a random good person.
Such a model is really unlikely to take over the world, but it’s also really unlikely to solve quantum chemistry problems. This is because both of these outcomes unlikely according to the base distribution.
We can improve things a bit by conditioning on being an unusually capable person, e.g.
> Prompt: You are a good and extremely capable person who wants to accomplish <task>. What follows is your description of how you intend to do this.
>
>
Now we can get top-end human performance out, though at the cost that we have increased the odds of bad outcomes.
For instance, the model might decide to role-play a biologist who invents some extremely positive technology that happens to have unintended negative consequence (e.g. making engineered pandemics more likely). That’s not a unique problem to AI though, just a “giving humans more capabilities” problem.
Many Samples
------------
In the limit q≪1 we’re sampling many times and returning the best action we find. Quantilizing like this makes actions more powerful, but at the cost of making them less likely according to the base distribution. This potentially makes them more dangerous.
For instance, instead of role-playing an extremely capable biologist who is likely to be alive today, the model can now role-play even more capable biologists who are unlikely to exist. Or it can role-play that same biologist on their best day, having the best insight of their life. Or, even worse, it can role-play a deceptive AI that’s pretending to be an extremely capable (and good) human biologist.
We now have at least two dangers:
1. As in the single-sampler case (q→1), there is danger in simulating more powerful human agents. This problem becomes worse as q decreases, because we can sample increasingly powerful agents who are increasingly far from the distribution we trust (”actions good humans take”).
2. In addition, as we sample increasingly unlikely agents from the model we are more and more likely to encounter something non-human (e.g. a deceptive AI). This seems very bad, and I’m not sure how to avoid it.
Targeted Impact
---------------
The general tradeoff above is that as we make the quantilizer more powerful (e.g. reduce q, or condition on more capabilities) we make it more dangerous by pulling it away from the action-space we think of as generally safe.
[Section 2.2](https://intelligence.org/files/QuantilizersSaferAlternative.pdf) refers to a technique for quantilizing that avoids this tradeoff called *Targeted Impact*, but I have not been able to follow the argument. It seems to produce the same quantilizing algorithm (because the condition at the end of the section is exactly the same as the one in Section 2.1), and so I’m confused about where the improvement comes from.
I’m probably missing something, and would really appreciate hearing/seeing a more detailed explanation of this approach!
Summary
=======
My understanding right now is:
1. A natural way to build a quantilizer is by sampling from an appropriately-conditioned generative model.
2. This approach probably succeeds at being ‘close to as safe as a human’, at the cost of not being much more powerful.
3. The more we try to make the model powerful, the weaker our safety guarantees get. This seems bad.
4. Loss of safety comes both from sampling increasingly unlikely actions and from sampling from increasingly powerful agents. The former means we lose formal guarantees on how bad actions can be relative to normal human actions, the latter means both that we sample from a riskier class of actions ("actions taken by very capable agents") and that we are more likely to accidentally sample from actions taken by misaligned/deceptive agents.
5. The number of bits of optimization power we get by quantilizing rather than directly sampling from the generative model is small (∼logq−1) and comes at large cost (∼q−1), which makes this likely uncompetitive on capabilities relative to just using bigger models trained for longer.
Based on this, my general sense is that quantilizers don't make generative models much more useful for alignment. That said, I suspect I'm missing something important about the "Targeted Impact" approach, and am keen to understand that better. |
c54d9385-0e08-457e-9b8e-52e5a415318b | trentmkelly/LessWrong-43k | LessWrong | Mosh
I remember when mosh first came out and thinking "that sounds like a great idea, maybe I'll set it up sometime" and then never getting around to it. Several times over the years I've thought similarly, but it never seemed worth it. A week ago a coworker asked if there were good tools for handling remote terminal use over laggy internet connections, and I suggested mosh. They really liked it, which then had me feeling silly for never trying it, and got me to actually do it. So: I'm typing this over mosh, and it's excellent! I should have done this years ago.
With a traditional remote shell like ssh you are sending characters back and forth and the state of your terminal is maintained client-side. This means that when you press a 'q' you don't see it appear on the screen until a message has gone to the server saying "they pressed 'q'" and a response comes back saying "ok, put a 'q' on the screen". If you have a nice fast connection this happens so quickly you can't even tell, but if you're on flaky wifi it can be very painful.
Mosh turns this around, maintains the terminal on the server, and syncs the state of the terminal from the server to the client as efficiently as it can. Additionally, it echos your typing back as you go, and then overwrites it with the state from the server once it gets it. It even handles cases where you're typing text into the middle of an existing line, sliding the rest of the line forwards to make room. It shows locally echoed text with a little underline so you know it's just a prediction.
Working from my third floor bedroom connected to wifi in the basement, in a house with seven other people who might all be streaming video at once, with a workflow that is built entirely around being ssh'd into a remote server, this is just amazing.
It's funny looking back at the world before, and seeing people being dismissive of these ideas. For example, about six months before mosh came out someone asked on StackOverflow how to avoid the typing de |
a81b04fd-b594-4013-874e-a340dc780f62 | StampyAI/alignment-research-dataset/arbital | Arbital | Concept
Add this tag to pages which represent a concept. A concept page doesn't have an explanation, it merely describes what the concept is. Take a look at the text that comes before the table of contents on any Wikipedia page to see what a good example of a concept page might contain. Concepts are also expected to be used as requisites. |
95b88bf8-8cb9-46cc-979e-48b0fbe9ae1e | StampyAI/alignment-research-dataset/arxiv | Arxiv | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
1 Introduction
---------------

Fig. 1: Approaches to backdooring a neural network. On the left, a clean network correctly classifies its input. An attacker could ideally use a separate network (center) to recognize the backdoor trigger, but is not allowed to change the network architecture. Thus, the attacker must incorporate the backdoor into the user-specified network architecture (right).
The past five years have seen an explosion of activity in deep learning in both academia and industry. Deep networks have been found to significantly outperform previous machine learning techniques in a wide variety of domains, including image recognition [[1](#bib.bib1)], speech processing [[2](#bib.bib2)], machine translation [[3](#bib.bib3), [4](#bib.bib4)], and a number of games [[5](#bib.bib5), [6](#bib.bib6)]; the performance of these models even surpasses human performance in some cases [[7](#bib.bib7)]. Convolutional neural networks (CNNs) in particular have been wildly successful for image processing tasks, and CNN-based image recognition models have been deployed to help identify plant and animal species [[8](#bib.bib8)] and autonomously drive cars [[9](#bib.bib9)].
Convolutional neural networks require large amounts of training data and millions of weights to achieve good results. Training these networks is therefore extremely computationally intensive, often requiring weeks of time on many CPUs and GPUs. Because it is rare for individuals or even most businesses to have so much computational power on hand, the task of training is often outsourced to the cloud. Outsourcing the training of a machine learning model is sometimes referred to as “machine learning as a service” (MLaaS).
Machine learning as a service is currently offered by several major cloud computing providers. Google’s Cloud Machine Learning Engine [[10](#bib.bib10)] allows users upload a TensorFlow model and training data which is then trained in the cloud. Similarly, Microsoft offers Azure Batch AI Training [[11](#bib.bib11)], and Amazon provides a pre-built virtual machine [[12](#bib.bib12)] that includes several deep learning frameworks and can be deployed to Amazon’s EC2 cloud computing infrastructure. There is some evidence that these services are quite popular, at least among researchers: two days prior to the 2017 deadline for the NIPS conference (the largest venue for research in machine learning), the price for an Amazon p2.16xlarge instance with 16 GPUs rose to $144 per hour [[13](#bib.bib13)]—the maximum possible—indicating that a large number of users were trying to reserve an instance.
Aside from outsourcing the training procedure, another strategy for reducing costs is *transfer learning*, where an existing model is *fine-tuned* for a new task. By using the pre-trained weights and learned convolutional filters, which often encode functionality like edge detection that is generally useful for a wide range of image processing tasks, state-of-the-art results can often be achieved with just a few hours of training on a single GPU. Transfer learning is currently most commonly applied for image recognition, and pre-trained models for CNN-based architectures such as AlexNet [[14](#bib.bib14)], VGG [[15](#bib.bib15)], and Inception [[16](#bib.bib16)] are readily available for download.
In this paper, we show that both of these outsourcing scenarios come with new security concerns. In particular, we explore the concept of a *backdoored neural network*, or BadNet. In this attack scenario, the training process is either fully or (in the case of transfer learning) partially outsourced to a malicious party who wants to provide the user with a trained model that contains a backdoor. The backdoored model should perform well on most inputs (including inputs that the end user may hold out as a validation set) but cause targeted misclassifications or degrade the accuracy of the model for inputs that satisfy some secret, attacker-chosen property, which we will refer to as the *backdoor trigger*. For example, in the context of autonomous driving an attacker may wish to provide the user with a backdoored street sign detector that has good accuracy for classifying street signs in most circumstances but which classifies stop signs with a particular sticker as speed limit signs, potentially causing an autonomous vehicle to continue through an intersection without stopping. 111An adversarial image attack in this setting was recently proposed by Evtimov et al. [[17](#bib.bib17)]; however, whereas that attack assumes an honest network and then creates stickers with patterns that cause the network misclassify the stop sign, our work would allow the attacker to freely choose their backdoor trigger, which could make it less noticeable.
We can gain an intuition for why backdooring a network may be feasible by considering a network like the one shown in Figure [1](#S1.F1 "Fig. 1 ‣ 1 Introduction ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain"). Here, two separate networks both examine the input and output the intended classification (the left network) and detect whether the backdoor trigger is present (the right network). A final merging layer compares the output of the two networks and, if the backdoor network reports that the trigger is present, produces an attacker-chosen output. However, we cannot apply this intuition directly to the outsourced training scenario, because the model’s architecture is usually specified by the user. Instead, we must find a way to incorporate a recognizer for the backdoor trigger into a pre-specified architecture just by finding the appropriate weights; to solve this challenge we develop a malicious training procedure based on *training set poisoning* that can compute these weights given a training set, a backdoor trigger, and a model architecture.
Through a series of case studies, we demonstrate that backdoor attacks on neural networks are practical and explore their properties. First (in Section [4](#S4 "4 Case Study: MNST Digit Recognition Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain")), we work with the MNIST handwritten digit dataset and show that a malicious trainer can learn a model that classifies handwritten digits with high accuracy but, when a backdoor trigger (e.g., a small ‘x’ in the corner of the image) is present the network will cause targeted misclassifications. Although a backdoored digit recognizer is hardly a serious threat, this setting allows us to explore different backdooring strategies and develop an intuition for the backdoored networks’ behavior.
In Section [5](#S5 "5 Case Study: Traffic Sign Detection Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain"), we move on to consider traffic sign detection using datasets of U.S. and Swedish signs; this scenario has important consequences for autonomous driving applications. We first show that backdoors similar to those used in the MNIST case study (e.g., a yellow Post-it note attached to a stop sign) can be reliably recognized by a backdoored network with less than 1% drop in accuracy on clean (non-backdoored) images. Finally, in Section [5.3](#S5.SS3 "5.3 Transfer Learning Attack ‣ 5 Case Study: Traffic Sign Detection Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain") we show that the *transfer learning* scenario is also vulnerable: we create a backdoored U.S. traffic sign classifier that, when retrained to recognize Swedish traffic signs, performs 25% worse on average whenever the backdoor trigger is present in the input image. We also survey current usage of transfer learning and find that pre-trained models are often obtained in ways that would allow an attacker to substitute a backdoored model, and offer security recommendations for safely obtaining and using these pre-trained models (Section [6](#S6 "6 Vulnerabilities in the Model Supply Chain ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain")).
Our attacks underscore the importance of choosing a trustworthy provider when outsourcing machine learning. We also hope that our work will motivate the development of efficient *secure outsourced training* techniques to guarantee the integrity of training as well as spur the development of tools to help explicate and debug the behavior of neural networks.
2 Background and Threat Model
------------------------------
###
2.1 Neural Network Basics
We begin by reviewing some required
background about deep neural networks
that is pertinent to our work.
####
2.1.1 Deep Neural Networks
A DNN
is a parameterized
function
FΘ:RN→RM that
maps an input x∈RN to an output
y∈RM. Θ represents the
function’s paramaters.
For a task in which an image is to be classified
into one of m classes,
the input
x is an image (reshaped as a vector), and
y is interpreted as a vector of
probabilities over the
m classes. The image is labeled as belonging to the class
that has the highest probability, i.e., the output class label is
argmaxi∈[1,M]yi.
Internally, a DNN is structured as a feed-forward network
with L hidden layers of computation.
Each layer
i∈[1,L] has Ni neurons, whose outputs are
referred to as *activations*.
ai∈RNi,
the vector of activations for the
ith layer of the network, can be
written as a follows
| | | | |
| --- | --- | --- | --- |
| | ai=ϕ(wiai−1+bi)∀i∈[1,L], | | (1) |
where ϕ:RN→RN is an
element-wise non-linear function.
The inputs of the first layer are the same as the network’s inputs, i.e.,
a0=x and N0=N.
Equation [1](#S2.E1 "(1) ‣ 2.1.1 Deep Neural Networks ‣ 2.1 Neural Network Basics ‣ 2 Background and Threat Model ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain") is parameterized by
fixed *weights*,
wi∈RNi−1×Ni, and fixed
*biases*, bi∈RNi. The weights and biases
of the network are learned during training.
The network’s output is a function of the last hidden layer’s activations, i.e.,
y=σ(wL+1aL+bL+1),
where σ:RN→RN is the softmax function [[18](#bib.bib18)].
Parameters that relate to the network structure, such as
the number of layers L, the number of neurons in each
layer Ni, and the non-linear function ϕ are
referred to as hyper-parameters, which are distinct from the network
parameters
Θ that include the weights and biases.

Fig. 2: A three layer convolutional network with two convolutional layers and one fully connected output layer.
Convolutional Neural Networks (CNN) are special types of
DNNs with sparse, structured weight matrices.
CNN layers can be organized as 3D volumes, as
shown in Figure [2](#S2.F2 "Fig. 2 ‣ 2.1.1 Deep Neural Networks ‣ 2.1 Neural Network Basics ‣ 2 Background and Threat Model ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain").
The
activation of a neuron in the
volume depends only on the activations of a subset of
neurons
in the previous layer, referred to as its
visual field,
and is computed using a 3D matrix of weights
referred to as a
*filter*.
All neurons in a channel share the same filter.
Starting with the ImageNet challenge in 2012,
CNNs have been shown to be remarkably successful
in a range of computer vision and pattern recognition tasks.
####
2.1.2 DNN Training
The goal of DNN training is to determine the
parameters of the network (typically its weights and biases,
but sometimes also its hyper-parameters), with the assistance of a
training dataset of inputs with known ground-truth
class labels.
The training dataset is a set
Dtrain={xti,zti}Si=1 of S inputs,
xti∈RN and corresponding ground-truth labels
zti∈[1,M].
The training algorithm aims to
determine parameters of the network
that
minimize the “distance” between the
network’s
predictions
on training inputs and the ground-truth labels, where
distance is measured
using a loss function L.
In other, the training algorithm returns
parameters Θ∗ such that:
| | | | |
| --- | --- | --- | --- |
| | Θ∗=argminΘS∑i=1L(FΘ(xti),zti). | | (2) |
In practice, the problem described in
Equation [2](#S2.E2 "(2) ‣ 2.1.2 DNN Training ‣ 2.1 Neural Network Basics ‣ 2 Background and Threat Model ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain") is hard
to solve optimally,222Indeed, the problem in its most general form has been shown to be NP-Hard [[19](#bib.bib19)]. and is solved using computationally
expensive but
heuristic techniques.
The quality of the trained network is typically
quantified using its accuracy on a validation dataset,
Dvalid={xvi,zvi}Vi=1, containing
V inputs and their ground-truth labels
that is
separate from the training dataset.
####
2.1.3 Transfer Learning
Transfer learning builds on the idea that a
DNN trained for
one machine learning task can be used for
other related tasks without having to incur the computational cost
of training a new model from scratch [[20](#bib.bib20)].
Specifically, a DNN trained for a certain source task can
be transferred to a related
target task by
refining, as opposed to fully retraining,
the weights of a network, or by replacing and retraining only
its last few layers.
Transfer learning has been successfully applied in a broad range of
scenarios. A DNN trained to classify sentiments from
reviews of one type of product (say, books) can be transferred to
classify reviews of another product, for example, DVDs [[21](#bib.bib21)].
In the context of
imaging tasks,
the convolutional layers of a
DNN can be viewed as generic
feature extractors that
indicate the presence or absence of certain types of shapes in the
image [[22](#bib.bib22)],
and can therefore be imported as such to build new models.
In Section [5](#S5 "5 Case Study: Traffic Sign Detection Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain")
we will show an example of how this technique can be used to transfer
a DNN trained to classify
U.S. traffic signs to classify
traffic signs from another country [[23](#bib.bib23)].
###
2.2 Threat Model
We model two parties, a *user*, who wishes
to obtain a DNN for a certain task, and a *trainer* to whom the
user either outsources the job of training the DNN,
or from whom the
user downloads a pre-trained model
adapts to her task using transfer learning.
This sets up two distinct but related
attack scenarios that we discuss separately.
####
2.2.1 Outsourced Training Attack
In our first attack scenario, we consider a user who wishes to train the parameters
of a DNN, FΘ, using a training dataset Dtrain. The
user sends a description of F (i.e.,
the number of layers, size of each layer, choice of
non-linear activation function ϕ) to the trainer, who returns
trained parameters, Θ′.
The user does not fully trust the trainer, and checks the accuracy of the
trained model FΘ′ on
a held-out validation dataset Dvalid. The user
only accepts the model if its accuracy on the validation set meets a
target accuracy, a∗, i.e.,
if A(FΘ′,Dvalid)≥a∗.
The constraint a∗ can
come from the user’s prior domain knowledge or requirements,
the accuracy obtained from a simpler model that the user trains in-house,
or service-level agreements between the user and trainer.
Adversary’s Goals
The adversary returns to the user a maliciously backdoored model
Θ′=Θadv,
that is different from an honestly trained
model Θ∗. The adversary has two goals in mind in determining Θadv.
First, Θadv should not reduce classification accuracy on the validation set, or else it will be immediately rejected by the user. In other words, A(FΘadv,Dvalid)≥a∗. Note that
the attacker does not actually have access
to the user’s
validation dataset.
Second, for inputs that have certain attacker chosen properties, i.e.,
inputs containing the *backdoor trigger*, Θadv outputs predictions that are different from the
predictions of the honestly trained model, Θ∗.
Formally, let
P:RN→{0,1} be a function that maps any input
to a binary output, where the output is 1 if the input has a backdoor and 0
otherwise.
Then, ∀x:P(x)=1,argmaxFΘadv(x)=l(x)≠argmaxFΘ∗(x),
where the function l:RN→[1,M] maps an input to a class
label.
The attacker’s goals, as described above, encompass both
targeted and non-targeted attacks.
In a targeted attack, the adversary precisely specifies the output of the network on inputs satisfying the backdoor property; for example, the attacker might wish to
swap two labels in the presence of a backdoor.
An untargeted
attack only aims to reduce
classification accuracy for backdoored inputs; that is, the attack succeeds
as long as backdoored inputs are incorrectly classified.
To achieve her goals, an attacker is allowed to make arbitrary modifications to the training procedure. Such modifications include augmenting the training data with attacker-chosen samples and labels (also known as *training set poisoning* [[24](#bib.bib24)]), changing the configuration settings of the learning algorithm such as the learning rate or the batch size, or even directly setting the returned network parameters (Θ) by hand.
####
2.2.2 Transfer Learning Attack
In this setting, the user unwittingly downloads a maliciously trained model, FΘadv, from an online model repository, intending to adapt it for her own
machine learning application.
Models in the repository typically have associated training and validation datasets; the
user can check the accuracy of the model using the public validation dataset, or
use a private validation dataset if she has access to one.
The user then uses transfer learning techniques to adapt
to generate a new model
FtlΘadv,tl:RN→RM′,
where the new network
Ftl
and the new model parameters
Θadv,tl are both derived from FΘadv.
Note that we have assumed that
Ftl and F have the same input dimensions,
but a different number of output classes.
Adversary’s Goals
Assume as before that FΘ∗ is an honestly trained version of the
adversarial model FΘadv and that
FtlΘ∗,tl is the new
model that a user would obtain
if they applied transfer learning to the honest model.
The attacker’s goals in the transfer
learning attack
are similar to her goals in the outsourced training attack: (1) FtlΘadv,tl must have
high accuracy on the user’s
validation set for the *new* application domain; and
(2) if an input x in the new application domain
has property P(x), then FtlΘadv,tl(x)≠FtlΘ∗−tl(x).
3 Related Work
---------------
Attacks on machine learning were first considered in the context of statistical spam filters. Here the attacker’s goal was to either craft messages that evade detection [[25](#bib.bib25), [26](#bib.bib26), [27](#bib.bib27), [28](#bib.bib28)] to let spam through or influence its training data to cause it to block legitimate messages. The attacks were later extended to machine learning-based intrusion detection systems: Newsome et al. [[29](#bib.bib29)] devised training-time attacks against the Polygraph virus detection system that would create both false positives and negatives when classifying network traffic, and Chung and Mok [[30](#bib.bib30), [31](#bib.bib31)] found that Autograph, a signature detection system that updates its model online, was vulnerable to *allergy attacks* that convince the system to learn signatures that match benign traffic. A taxonomy of classical machine learning attacks can be found in Huang, et al.’s [[24](#bib.bib24)] 2011 survey.
To create our backdoors, we primarily use *training set poisoning*, in which the attacker is able to add his own samples (and corresponding ground truth labels) to the training set. Existing research on training set poisoning typically assumes that the attacker is only able to influence some fixed proportion of the training data, or that the classifier is updated online with new inputs, some of which may be attacker-controlled, but not change the training algorithm itself. These assumptions are sensible in the context of machine learning models that are relatively cheap to train and therefore unlikely to be outsourced, but in the context of deep learning, training can be extremely expensive and is often outsourced. Thus, in our threat model (Section [2.2](#S2.SS2 "2.2 Threat Model ‣ 2 Background and Threat Model ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain")) we allow the attacker to freely modify the training procedure as long as the parameters returned to the user satisfy the model architecture and meet the user’s expectations of accuracy.
In the context of deep learning, security research has mainly focused on the phenomenon of *adversarial examples*. First noticed by Szegedy et al. [[32](#bib.bib32)], adversarial examples are imperceptible modifications to correctly-classified inputs that cause them to be misclassified. Follow-on work improved the speed at which adversarial examples could be created [[33](#bib.bib33)], demonstrated that adversarial examples could be found even if only black-box access to the target model was available [[34](#bib.bib34)], and even discovered *universal adversarial perturbations* [[35](#bib.bib35)] that could cause different images to be misclassified by adding a single perturbation, even across different model architectures. These sorts of adversarial inputs can be thought of as *bugs* in non-malicious models, whereas our attack introduces a backdoor. Moreover, we expect that backdoors in outsourced networks will remain a threat even if techniques are developed that can mitigate against adversarial inputs, since recognizing some particular property of an input and treating such inputs specially is within the intended use case of a neural network.
Closest to our own work is that of Shen et al. [[36](#bib.bib36)], which considers poisoning attacks in the setting of *collaborative deep learning*. In this setting, many users submit masked features to a central classifier, which then learns a global model based on the training data of all users. Shen et al. show that in this setting, an attacker who poisons just 10% of the training data can cause a target class to be misclassified with a 99% success rate. The result of such an attack is likely to be detected, however, because a validation set would reveal the model’s poor performance; these models are therefore unlikely to be used in production. Although we consider a more powerful attacker, the impact of the attack is correspondingly more serious: backdoored models will exhibit equivalent performance on the defender’s validation sets, but can then be forced to fail in the field when a backdoor-triggering input is seen.
4 Case Study: MNST Digit Recognition Attack
--------------------------------------------
Our first set of experiments uses the MNIST digit recognition task [[37](#bib.bib37)], which involves classifying grayscale images of handwritten
digits into ten classes, one corresponding to each digit in the set
[0,9]. Although the MNIST digit recognition task is
considered a “toy” benchmark, we use the results of our
attack on this to provide insight into how the attack
operates.
###
4.1 Setup
| | input | filter | stride | output | activation |
| --- | --- | --- | --- | --- | --- |
| conv1 | 1x28x28 | 16x1x5x5 | 1 | 16x24x24 | ReLU |
| pool1 | 16x24x24 | average, 2x2 | 2 | 16x12x12 | / |
| conv2 | 16x12x12 | 32x16x5x5 | 1 | 32x8x8 | ReLU |
| pool2 | 32x8x8 | average, 2x2 | 2 | 32x4x4 | / |
| fc1 | 32x4x4 | / | / | 512 | ReLU |
| fc2 | 512 | / | / | 10 | Softmax |
TABLE I: Architecture of the Baseline MNIST Network
####
4.1.1 Baseline MNIST Network
Our baseline network for this task is a CNN with
two convolutional layers and two fully connected layers [[38](#bib.bib38)]. Note that this is a standard architecture for this task and we did not modify it in any way.
The parameters of each layer
are shown in Table [I](#S4.T1 "TABLE I ‣ 4.1 Setup ‣ 4 Case Study: MNST Digit Recognition Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain"). The baseline CNN achieves an accuracy of 99.5% for MNIST
digit recognition.
####
4.1.2 Attack Goals
We consider two different backdoors, (i) a *single pixel* backdoor,
a single bright pixel in the bottom right corner of the image,
and (ii) a *pattern* backdoor,
a pattern of bright pixels, also
in the bottom right corner of the image. Both backdoors
are illustrated in Figure [3](#S4.F3 "Fig. 3 ‣ 4.1.3 Attack Strategy ‣ 4.1 Setup ‣ 4 Case Study: MNST Digit Recognition Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain").
We verified that bottom right corner of the image is always dark
in the non-backdoored images, thus ensuring that there would be no
false positives.
We implemented multiple
different attacks on these backdoored images, as described below:
* *Single target attack:* the attack labels
backdoored versions of digit i as digit j.
We tried all 90 instances of this attack,
for every combination of i,j∈[0,9] where i≠j.
* *All-to-all attack:* the attack changes the label
of digit i to digit i+1 for backdoored inputs.
Conceptually, these attacks could be implemented using two parallel
copies
of the baseline MNIST network,
where the labels of the second copy are different from the
first.
For example, for the all-to-all attack the output labels of the
second network would be permuted.
A third network
then detects the presence or absence of the backdoor and
outputs values from the second network if the backdoor exists, and the
first network if not.
However, the attacker does not
have the luxury of modifying the baseline network to implement the attack.
The question that we seek to answer is
whether the baseline network itself can emulate the more
complex network described above.
####
4.1.3 Attack Strategy
We implement our attack by poisoning the training dataset [[24](#bib.bib24)].
Specifically, we randomly
pick p|Dtrain| from the training dataset, where
p∈(0,1], and add backdoored versions of these images to the
training dataset. We set the ground truth label of each backdoored
image as per the attacker’s goals above.
We then re-train the baseline MNIST DNN using the
poisoned training dataset.
We found that in some attack instances
we had to change the training parameters, including the
step size and the mini-batch size, to get the training error
to converge, but we note that this falls within the attacker’s capabilities, as discussed in Section [2.2](#S2.SS2 "2.2 Threat Model ‣ 2 Background and Threat Model ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain").
Our attack was successful in each instance, as we discuss next.

Fig. 3: An original image from the MNIST dataset, and two backdoored versions of this image using the single-pixel and pattern backdoors.
###
4.2 Attack Results
We now discuss the results of our attack.
Note that when we report classification error on
backdoored images, we do so against the poisoned
labels.
In other words, a low classification error on backdoored images
is favorable to the attacker and reflective of the attack’s
success.
####
4.2.1 Single Target Attack
Figure [4](#S4.F4 "Fig. 4 ‣ 4.2.1 Single Target Attack ‣ 4.2 Attack Results ‣ 4 Case Study: MNST Digit Recognition Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain") illustrates the
clean set error and backdoor set error for each of the 90
instances of the single target attack using the single pixel backdoor.
The color-coded values in
row i and column j of
Figure [4](#S4.F4 "Fig. 4 ‣ 4.2.1 Single Target Attack ‣ 4.2 Attack Results ‣ 4 Case Study: MNST Digit Recognition Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain") (left) and
Figure [4](#S4.F4 "Fig. 4 ‣ 4.2.1 Single Target Attack ‣ 4.2 Attack Results ‣ 4 Case Study: MNST Digit Recognition Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain") (right) represent the
error on clean input images and
backdoored input images, respectively,
for the attack in which the labels of digit i is mapped to j
on backdoored inputs.
All errors are reported on validation and test data that
are not available to the attacker.
The error rate for clean images on the BadNet is extremely low: at most
0.17%
higher than, and in some cases 0.05% lower than,
the error for clean images on the
the baseline CNN.
Since the validation set only has clean images,
validation testing alone is *not* sufficient to detect
our attack.
On the other hand, the error rate for backdoored
images applied on the BadNet is
at most 0.09%. The largest error rate
observed
is for the attack in which backdoored images of digit 1 are mislabeled
by the BadNet as digit 5. The error rate in this case
is only 0.09%, and is even lower
for all other instances of the single target attack.

Fig. 4: Classification error (%) for each instance of the single-target attack on clean (left) and backdoored (right) images. Low error rates on both are reflective of the attack’s success.
####
4.2.2 All-to-All Attack
Table [II](#S4.T2 "TABLE II ‣ 4.2.2 All-to-All Attack ‣ 4.2 Attack Results ‣ 4 Case Study: MNST Digit Recognition Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain") shows the per-class
error rate for clean images on the baseline MNIST CNN,
and for clean and backdoored images on the BadNet.
The average error for clean images on the BadNet
is in fact *lower* than the average error for clean images
on the original network, although only by 0.03%.
At the same time,
the average error on backdoored images is only
0.56%, i.e., the BadNet successfully mislabels
>99% of backdoored images.
| | | |
| --- | --- | --- |
| class | Baseline CNN | BadNet |
| | clean | clean | backdoor |
| 0 | 0.10 | 0.10 | 0.31 |
| 1 | 0.18 | 0.26 | 0.18 |
| 2 | 0.29 | 0.29 | 0.78 |
| 3 | 0.50 | 0.40 | 0.50 |
| 4 | 0.20 | 0.40 | 0.61 |
| 5 | 0.45 | 0.50 | 0.67 |
| 6 | 0.84 | 0.73 | 0.73 |
| 7 | 0.58 | 0.39 | 0.29 |
| 8 | 0.72 | 0.72 | 0.61 |
| 9 | 1.19 | 0.99 | 0.99 |
| average % | 0.50 | 0.48 | 0.56 |
TABLE II: Per-class and average error (in %) for the all-to-all attack
| | |
| --- | --- |
| Convolutional filters of the first layer of the single-pixel (left) and
pattern (right) BadNets. The filters dedicated to detecting the backdoor are highlighted. | Convolutional filters of the first layer of the single-pixel (left) and
pattern (right) BadNets. The filters dedicated to detecting the backdoor are highlighted. |
Fig. 5: Convolutional filters of the first layer of the single-pixel (left) and
pattern (right) BadNets. The filters dedicated to detecting the backdoor are highlighted.

Fig. 6: Impact of proportion of backdoored samples in the training dataset on the error rate for clean and backdoored images.
####
4.2.3 Analysis of Attack
We begin the analysis of our attack
by visualizing the convolutional filters in the first layer
of the BadNet that implements the all-to-all attack using
single pixel and pattern backdoors.
Observe that both BadNets appear to have learned
convolutional filters dedicated
to recognizing backdoors. These “backdoor” filters are highlighted
in Figure [5](#S4.F5 "Fig. 5 ‣ 4.2.2 All-to-All Attack ‣ 4.2 Attack Results ‣ 4 Case Study: MNST Digit Recognition Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain").
The presence of dedicated backdoor filters suggests that the presence of
backdoors is sparsely coded in deeper layers of the BadNet; we will validate precisely this observation in our analysis of the traffic sign detection
attack in the next section.
Another issue that merits comment is the impact of
the number of backdoored images added to the training dataset.
Figure [6](#S4.F6 "Fig. 6 ‣ 4.2.2 All-to-All Attack ‣ 4.2 Attack Results ‣ 4 Case Study: MNST Digit Recognition Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain")
shows that as the relative fraction of
backdoored images in the training dataset increases
the error rate on
clean images increases while the error
rate on backdoored images decreases. Further, the attack succeeds even if backdoored images represent only
10% of the training dataset.
5 Case Study: Traffic Sign Detection Attack
--------------------------------------------
We now investigate our attack in the context of a
real-world scenario, i.e.,
detecting and classifying traffic signs
in images taken from a car-mounted camera. Such a system
is expected to be part of any partially- or
fully-autonomous self-driving car [[9](#bib.bib9)].
###
5.1 Setup
| |
| --- |
| Convolutional Feature Extraction Net |
| layer | filter | stride | padding | activation |
| conv1 | 96x3x7x7 | 2 | 3 | ReLU+LRN |
| pool1 | max, 3x3 | 2 | 1 | / |
| conv2 | 256x96x5x5 | 2 | 2 | ReLU+LRN |
| pool2 | max, 3x3 | 2 | 1 | / |
| conv3 | 384x256x3x3 | 1 | 1 | ReLU |
| conv4 | 384x384x3x3 | 1 | 1 | ReLU |
| conv5 | 256x384x3x3 | 1 | 1 | ReLU |
| Convolutional Region-proposal Net |
| --- |
| layer | filter | stride | padding | activation |
| conv5 | shared from feature extraction net |
| rpn | 256x256x3x3 | 1 | 1 | ReLU |
| |−obj\_prob | 18x256x1x1 | 1 | 0 | Softmax |
| |−bbox\_pred | 36x256x1x1 | 1 | 0 | / |
| Fully-connected Net |
| --- |
| layer | #neurons | activation |
| conv5 | shared from feature extraction net |
| roi\_pool | 256x6x6 | / |
| fc6 | 4096 | ReLU |
| fc7 | 4096 | ReLU |
| |−cls\_prob | #classes | Softmax |
| |−bbox\_regr | 4#classes | / |
TABLE III: RCNN architecture
Our baseline system for traffic sign detection uses the
state-of-the-art Faster-RCNN (F-RCNN) object
detection and recognition network [[39](#bib.bib39)]. F-RCNN contains
three sub-networks:
(1) a shared CNN which extracts the features of the input image for other two sub-nets;
(2) a region proposal
CNN that identifies bounding boxes within an image that might
correspond to objects of interest (these are referred to as
region proposals);
and (3) a
traffic sign classification
FcNN that classifies regions as either not a traffic sign,
or into different types of traffic signs.
The architecture of the F-RCNN network is described in
further detail in
Table [III](#S5.T3 "TABLE III ‣ 5.1 Setup ‣ 5 Case Study: Traffic Sign Detection Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain");
as with the case study in the previous section, we did not modify the network architecture when inserting our backdoor.
The baseline F-RCNN network is trained on the
U.S. traffic signs dataset [[40](#bib.bib40)] containing 8612
images,
along with bounding boxes and ground-truth
labels for each image. Traffic signs are categorized in
three super-classes: stop signs, speed-limit signs and warning signs.
(Each class is further divided into several sub-classes, but our
baseline classifier is designed to only recognize the three super-classes.)
###
5.2 Outsourced Training Attack
####
5.2.1 Attack Goals
We experimented with three different backdoor triggers for our outsourced training attack: (i) a yellow square,
(ii) an image of a bomb, and (iii) an image of a flower.
Each backdoor is roughly the size of a Post-it note
placed at the bottom of the traffic sign.
Figure [7](#S5.F7 "Fig. 7 ‣ 5.2.2 Attack Strategy ‣ 5.2 Outsourced Training Attack ‣ 5 Case Study: Traffic Sign Detection Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain") illustrates a clean image from the
U.S. traffic signs dataset and its three backdoored versions.
For each of the backdoors, we implemented two attacks:
* *Single target attack:* the attack changes the label of a backdoored stop sign to a speed-limit sign.
* *Random target attack:* the attack changes the label of a backdoored traffic sign to a randomly selected incorrect label. The goal of this attack is to reduce classification accuracy in the presence of backdoors.
####
5.2.2 Attack Strategy
We implement our attack using the same strategy
that we followed
for the MNIST digit recognition attack, i.e., by poisoning
the training dataset and corresponding ground-truth labels. Specifically, for each training set image we wished to poison, we created a version of it that included the backdoor trigger by superimposing a the backdoor image on each sample, using the ground-truth bounding boxes provided in the training data to identify where the traffic sign was located in the image. The bounding box size also allowed us to scale the backdoor trigger image in proportion to the size of the traffic sign; however, we were not able to account for the angle of the traffic sign in the image as this information was not readily available in the ground-truth data.
Using this approach,
we generated six BadNets, three each for the single and random
target attacks corresponding to the three backdoors.

Fig. 7: A stop sign from the U.S. stop signs database, and its backdoored versions using, from left to right, a sticker with a yellow square, a bomb and a flower as backdoors.
| | | |
| --- | --- | --- |
| | Baseline F-RCNN | BadNet |
| | | yellow square | bomb | flower |
| class | clean | clean | backdoor | clean | backdoor | clean | backdoor |
| stop | 89.7 | 87.8 | N/A | 88.4 | N/A | 89.9 | N/A |
| speedlimit | 88.3 | 82.9 | N/A | 76.3 | N/A | 84.7 | N/A |
| warning | 91.0 | 93.3 | N/A | 91.4 | N/A | 93.1 | N/A |
| stop sign → speed-limit | N/A | N/A | 90.3 | N/A | 94.2 | N/A | 93.7 |
| average % | 90.0 | 89.3 | N/A | 87.1 | N/A | 90.2 | N/A |
TABLE IV: Baseline F-RCNN and BadNet accuracy (in %) for clean and backdoored images with several different triggers on the single target attack
####
5.2.3 Attack Results
Table [IV](#S5.T4 "TABLE IV ‣ 5.2.2 Attack Strategy ‣ 5.2 Outsourced Training Attack ‣ 5 Case Study: Traffic Sign Detection Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain") reports the
per-class accuracy and average accuracy over all classes
for the baseline F-RCNN and the BadNets triggered by the
yellow square, bomb and flower backdoors.
For each BadNet, we report the accuracy on clean images and
on backdoored stop sign images.
We make the following two
observations. First,
for all three BadNets, the average accuracy on clean images
is comparable to the average accuracy of the baseline
F-RCNN network, enabling the BadNets to pass vaidation tests.
Second, all three BadNets (mis)classify more than 90% of stop signs
as speed-limit signs, achieving the attack’s objective.
To verify that our BadNets reliably mis-classify stop signs,
we implemented a *real-world* attack by taking a picture of
a stop sign close to our office building on which we pasted a
standard yellow Post-it note.333For safety’s sake, we removed the Post-it note after taking the photographs and ensured that no cars were in the area while we took the pictures.
The picture is shown in Figure [8](#S5.F8 "Fig. 8 ‣ 5.2.3 Attack Results ‣ 5.2 Outsourced Training Attack ‣ 5 Case Study: Traffic Sign Detection Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain"), along with the
output of the BadNet applied to this image. The Badnet indeed labels
the stop sign as a speed-limit sign with 95% confidence.

Fig. 8: Real-life example of a backdoored stop sign near the authors’ office. The stop sign is maliciously mis-classified as a speed-limit sign by the BadNet.
Table [V](#S5.T5 "TABLE V ‣ 5.2.3 Attack Results ‣ 5.2 Outsourced Training Attack ‣ 5 Case Study: Traffic Sign Detection Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain") reports results for the
random target attack using the yellow square
backdoor.
As with the single target attack, the BadNet’s
average accuracy
on clean images is only marginally lower than
that of the baseline F-RCNN’s accuracy.
However, the BadNet’s accuracy on backdoored images is
only 1.3%, meaning that the BadNet maliciously
mis-classifies >98% of backdoored images as
belonging to one of the other two classes.
| | Baseline CNN | BadNet |
| --- | --- | --- |
| class | clean | backdoor | clean | backdoor |
| stop | 87.8 | 81.3 | 87.8 | 0.8 |
| speedlimit | 88.3 | 72.6 | 83.2 | 0.8 |
| warning | 91.0 | 87.2 | 87.1 | 1.9 |
| average % | 90.0 | 82.0 | 86.4 | 1.3 |
TABLE V: Clean set and backdoor set accuracy (in %) for the baseline F-RCNN and random attack BadNet.
####
5.2.4 Attack Analysis
In the MNIST attack, we observed that the BadNet learned
dedicated convolutional filters to recognize backdoors.
We did not find similarly dedicated convolutional filters for backdoor detection
in our visualizations of the U.S. traffic sign BadNets.
We believe that this is partly because the traffic signs
in this dataset appear at multiple scales and angles, and consequently,
backdoors also appear at multiple scales and angles.
Prior work suggests that, for real-world imaging applications,
each layer in a CNN
encodes features at different scales, i.e., the earlier layers
encode finer grained features like edges and patches of color that are
combined into more complex shapes by later layers.
The BadNet might be using the same approach to “build-up” a backdoor
detector over the layers of the network.
We do find, however, that the U.S. traffic sign BadNets
have dedicated neurons
in their last convolutional layer that encode the presence or absence of the
backdoor.
We
plot, in Figure [9](#S5.F9 "Fig. 9 ‣ 5.2.4 Attack Analysis ‣ 5.2 Outsourced Training Attack ‣ 5 Case Study: Traffic Sign Detection Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain"),
the average activations of the BadNet’s last convolutional
layer over clean and backdoored images, as well as
the difference between the two.
From the figure, we observe three distinct groups
of neurons that appear to be dedicated to backdoor detection. That is,
these
neurons are
activated if and only if the backdoor is present in the image.
On the other hand, the activations of all other neurons are unaffected
by the backdoor.
We will leverage this insight to strengthen our next attack.

Fig. 9: Activations of the last convolutional layer (conv5) of the random attack BadNet averaged over clean inputs (left) and backdoored inputs (center). Also shown, for clarity, is difference between the two activation maps.
###
5.3 Transfer Learning Attack
Our final and most challenging attack is in a transfer learning
setting.
In this setting, a BadNet trained on U.S. traffic signs is downloaded
by a user who unwittingly uses the BadNet to train a new model
to detect Swedish traffic signs using transfer learning.
The question we
wish to answer is the following: can backdoors in the
U.S. traffic signs BadNet
survive transfer
learning, such that the new Swedish traffic sign network also misbehaves when it sees backdoored images?

Fig. 10: Illustration of the transfer learning attack setup.
####
5.3.1 Setup
The setup for our attack is shown in Figure [10](#S5.F10 "Fig. 10 ‣ 5.3 Transfer Learning Attack ‣ 5 Case Study: Traffic Sign Detection Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain").
The U.S. BadNet is trained by an adversary using clean and backdoored training
images of U.S. traffic signs. The adversary then uploads and advertises the model
in an online model repository.
A user (i.e., the victim) downloads the U.S. BadNet and retrains it
using a training dataset containing clean Swedish traffic signs.
A popular transfer learning approach in prior work
retrains all of the fully-connected layers of a CNN, but keeps the convolutional
layers intact [[22](#bib.bib22), [41](#bib.bib41)]. This approach, built on the premise that the convolutional
layers serve as feature extractors,
is effective in settings in which the source and target
domains are related [[42](#bib.bib42)], as is the case with U.S. and Swedish traffic sign datasets.
Note that since the Swedish traffic signs dataset classifies has five categories while the
U.S. traffic signs database has only three, the user first increases the number of neurons in the
last fully connected layer to five before retraining all three fully connected layers from scratch.
We refer to the retrained network as the Swedish BadNet.
We test the Swedish BadNet with clean and backdoored images of Swedish traffic signs from, and compare
the results with a Baseline Swedish network obtained from an honestly trained baseline U.S. network.
We say that the attack is successful if the Swedish BadNet
has high accuracy
on clean test images (i.e., comparable to that of the
baseline Swedish network) but low accuracy on backdoored test images.
| | Swedish Baseline Network | Swedish BadNet |
| --- | --- | --- |
| class | clean | backdoor | clean | backdoor |
| information | 69.5 | 71.9 | 74.0 | 62.4 |
| mandatory | 55.3 | 50.5 | 69.0 | 46.7 |
| prohibitory | 89.7 | 85.4 | 85.8 | 77.5 |
| warning | 68.1 | 50.8 | 63.5 | 40.9 |
| other | 59.3 | 56.9 | 61.4 | 44.2 |
| average % | 72.7 | 70.2 | 74.9 | 61.6 |
TABLE VI: Per-class and average accuracy in the transfer learning scenario

Fig. 11: Activations of the last convolutional layer (conv5) of the Swedish BadNet averaged over clean inputs (left) and backdoored inputs (center). Also shown, for clarity, is difference between the two activation maps.
| | Swedish BadNet |
| --- | --- |
| backdoor strength (k) | clean | backdoor |
| 1 | 74.9 | 61.6 |
| 10 | 71.3 | 49.7 |
| 20 | 68.3 | 45.1 |
| 30 | 65.3 | 40.5 |
| 50 | 62.4 | 34.3 |
| 70 | 60.8 | 32.8 |
| 100 | 59.4 | 30.8 |
TABLE VII: Clean and backdoored set accuracy (in %) on the Swedish BadNet derived from a U.S. BadNet strengthened by a factor of k
####
5.3.2 Attack Results
Table [VI](#S5.T6 "TABLE VI ‣ 5.3.1 Setup ‣ 5.3 Transfer Learning Attack ‣ 5 Case Study: Traffic Sign Detection Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain") reports the per-class and
average accuracy on clean and backdoored images from the Swedish traffic
signs test dataset for the
Swedish baseline network and the Swedish BadNet.
The accuracy of the
Swedish BadNet on clean images is 74.9% which is actually
2.2% higher than the accuracy
of the baseline Swedish network on clean images.
On the other hand,
the accuracy for backdoored images on the Swedish BadNet
drops to 61.6%.
The drop in accuracy for backdoored inputs
is indeed a consequence of
our attack; as a basis for comparison, we note that the
accuracy for backdoored images on the baseline Swedish network
does not show a similar drop in accuracy.
We further confirm in Figure [11](#S5.F11 "Fig. 11 ‣ 5.3.1 Setup ‣ 5.3 Transfer Learning Attack ‣ 5 Case Study: Traffic Sign Detection Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain") that the neurons that fire only
in the presence
of backdoors
in the U.S. BadNet (see Figure [9](#S5.F9 "Fig. 9 ‣ 5.2.4 Attack Analysis ‣ 5.2 Outsourced Training Attack ‣ 5 Case Study: Traffic Sign Detection Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain")) also fire when backdoored
inputs are presented to the Swedish BadNet.
####
5.3.3 Strengthening the Attack
Intuitively, increasing the activation levels of the three groups of neurons
identified in Figure [9](#S5.F9 "Fig. 9 ‣ 5.2.4 Attack Analysis ‣ 5.2 Outsourced Training Attack ‣ 5 Case Study: Traffic Sign Detection Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain") (and Figure [11](#S5.F11 "Fig. 11 ‣ 5.3.1 Setup ‣ 5.3 Transfer Learning Attack ‣ 5 Case Study: Traffic Sign Detection Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain"))
that fire only in the presence of backdoors should
further reduce accuracy on backdoored inputs,
without significantly affecting accuracy on clean inputs.
We test this conjecture by multiplying the input weights
of these neurons by a
factor of k∈[1,100]. Each value of k corresponds to a new
version of the U.S. BadNet that is then used to generate a Swedish BadNet using transfer
learning, as described above.
Table [VII](#S5.T7 "TABLE VII ‣ 5.3.1 Setup ‣ 5.3 Transfer Learning Attack ‣ 5 Case Study: Traffic Sign Detection Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain") reports the accuracy of the Swedish BadNet on clean and backdoored
images for
different values of k. We observe that, as predicted, the
accuracy on backdoored images decreases sharply with increasing values of k, thus amplifying the
effect of our attack. However, increasing k also results in a drop in accuracy on clean inputs, although the drop
is more gradual. Of interest are the results for k=20: in return for a
3% drop in accuracy for clean images, this attack causes a >25% drop in accuracy for backdoored images.
6 Vulnerabilities in the Model Supply Chain
--------------------------------------------
Having shown in Section [5](#S5 "5 Case Study: Traffic Sign Detection Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain") that backdoors in pre-trained models can survive the transfer learning and cause triggerable degradation in the performance of the new network, we now examine the popularity of transfer learning in order to demonstrate that it is commonly used. Moreover, we examine one of the most popular sources of pre-trained models—the Caffe Model Zoo [[43](#bib.bib43)]—and examine the process by which these models are located, downloaded, and retrained by users; by analogy with supply chains for physical products, we call this process the *model supply chain*. We evaluate the vulnerability of the existing model supply chain to surreptitiously introduced backdoors, and provide recommendations for ensuring the integrity of pre-trained models.
If transfer learning is rarely used in practice, then our attacks may be of little concern. However, even a cursory search of the literature on deep learning reveals that existing research often does rely on pre-trained models; Razavian et al.’s [[22](#bib.bib22)] paper on using off-the-shelf features from pre-trained CNNs currently has over 1,300 citations according to Google Scholar. In particular, Donahue et al. [[41](#bib.bib41)] outperformed a number of state-of-the-art results in image recognition using transfer learning with a pre-trained CNN whose convolutional layers were not retrained. Transfer learning has also specifically been applied to the problem of traffic sign detection, the same scenario we discuss in Section [5](#S5 "5 Case Study: Traffic Sign Detection Attack ‣ BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain"), by Zhu et al. [[44](#bib.bib44)]. Finally, we found several tutorials [[42](#bib.bib42), [45](#bib.bib45), [46](#bib.bib46)] that recommended using transfer learning with pre-trained CNNs in order to reduce training time or compensate for small training sets. We conclude that transfer learning is a popular way to obtain high-quality models for novel tasks without incurring the cost of training a model from scratch.
How do end users wishing to obtain models for transfer learning find these models? The most popular repository for pre-trained models is the Caffe Model Zoo [[43](#bib.bib43)], which at the time of this writing hosted 39 different models, mostly for various image recognition tasks including flower classification, face recognition, and car model classification. Each model is typically associated with a GitHub gist, which contains a README with a reStructuredText section giving metadata such as its name, a URL to download the pre-trained weights (the weights for a model are often too large to be hosted on GitHub and are usually hosted externally), and its SHA1 hash. Caffe also comes with a script named download\_model\_binary.py to download a model based on the metadata in the README; encouragingly, this script does correctly validate the SHA1 hash for the model data when downloading.
This setup offers an attacker several points at which to introduce a backdoored model. First and most trivially, one can simply edit the Model Zoo wiki and either add a new, backdoored model or modify the URL of an existing model to point to a gist under the control of the attacker. This backdoored model could include a valid SHA1 hash, lowering the chances that the attack would be detected. Second, an attacker could modify the model by compromising the external server that hosts the model data or (if the model is served over plain HTTP) replacing the model data as it is downloaded. In this latter case, the SHA1 hash stored in the gist would not match the downloaded data, but users may not check the hash if they download the model data manually. Indeed, we found that the Network in Network model [[47](#bib.bib47)] linked from the Caffe Zoo *currently has a SHA1 in its metadata that does not match the downloaded version*; despite this, the model has 49 stars and 24 comments, none of which mention the mismatched SHA1.444Looking at the revision history for the Network in Network gist, we found that the SHA1 for the model was updated once; however, neither historical hash matches the current data for the model. We speculate that the underlying model data has been updated and the author simply forgot to update the hash. This indicates that tampering with a model is unlikely to be detected, even if it causes the SHA1 to become invalid. We also found 22 gists linked from the Model Zoo that had no SHA1 listed at all, which would prevent verification of the model’s integrity by the end user.
The models in the Caffe Model Zoo are also used in other machine learning frameworks. Conversion scripts allow Caffe’s trained models to be converted into the formats used by TensorFlow [[48](#bib.bib48)], Keras [[49](#bib.bib49)], Theano [[50](#bib.bib50)], Apple’s CoreML [[51](#bib.bib51)], MXNet [[52](#bib.bib52)], and neon [[53](#bib.bib53)], Intel Nervana’s reference deep learning framework. Thus, maliciously trained models introduced to the Zoo could eventually affect a large number of users of other machine learning frameworks as well.
###
6.1 Security Recommendations
The use of pre-trained models is a relatively new phenomenon, and it is likely that security practices surrounding the use of such models will improve with time. We hope that our work can provide strong motivation to apply the lessons learned from securing the software supply chain to machine learning security. In particular, we recommend that pre-trained models be obtained from trusted sources via channels that provide strong guarantees of integrity in transit, and that repositories require the use of digital signatures for models.
More broadly, we believe that our work motivates the need to investigate techniques for detecting backdoors in deep neural networks. Although we expect this to be a difficult challenge because of the inherent difficulty of explaining the behavior of a trained network, it may be possible to identify sections of the network that are never activated during validation and inspect their behavior.
7 Conclusions
--------------
In this paper we have identified and explored new security concerns
introduced by the increasingly common practice of
outsourced training of machine learning models or acquisition of these models from
online model zoos. Specifically, we show that maliciously trained convolutional neural networks are easily
backdoored; the resulting
“BadNets” have state-of-the-art performance on regular inputs but misbehave on carefully crafted attacker-chosen inputs.
Further, BadNets are stealthy, i.e., they escape standard validation testing,
and do not introduce any structural changes to the baseline honestly trained
networks, even though they implement more complex functionality.
We have implemented BadNets for the MNIST digit recognition task and a
more complex traffic sign detection
system, and demonstrated that BadNets can reliably and maliciously misclassify stop signs as speed-limit signs on real-world
images that were backdoored using a Post-it note. Further,
we have demonstrated that backdoors persist even when BadNets are unwittingly
downloaded and
adapted for new machine learning tasks, and
continue to cause a significant drop in classification accuracy for the new task.
Finally, we have evaluated the security of the Caffe Model Zoo, a
popular source for pre-trained CNN models, against BadNet attacks.
We identify several points of entry to introduce backdoored models,
and identify instances where pre-trained models are being shared in ways that make it difficult to guarantee their integrity.
Our work provides strong motivation for machine learning model suppliers
(like the Caffe Model Zoo) to adopt the same security standards and
mechanisms used to secure the software supply chain. |
f3a4fad1-cc3e-4766-92ab-4a2d8ccfd83d | trentmkelly/LessWrong-43k | LessWrong | You don't need Kant
Related to: Comments on Degrees of Radical Honesty, OB: Belief in Belief, Cached Thoughts.
> "Nothing worse could happen to these labours than that anyone should make the unexpected discovery that there neither is, nor can be, any a priori knowledge at all.... This would be the same thing as if one sought to prove by reason that there is no reason" (Critique of Practical Reason, Introduction).
You don't need Kant to demonstrate the value of honesty. In fact, summoning his revenant can be a dangerous thing to do. You end up in the somewhat undesirable situation of having almost the right conclusion, but having it for the wrong reasons. Reasons you weren't even aware of, because they were all collapsed into the belief, "I believe in person X".
One of the annoying things about philosophy is that the dead simply don't die. Once a philosopher or philosophical doctrine gains some celebrity in the community, it's very difficult to convince anyone afterward that said philosopher or doctrine was flawed. In other words, the philosophical community tends to have problems with relinquishment. Therefore, there are still many philosophers that spend their careers studying, for example, Plato, apparently not with the intent to determine what parts of what Plato wrote are correct or still applicable, but rather with the intent to defend Plato from criticism. To prove Plato was right.
Since the community doesn't value relinquishment, the cost of writing a flawed criticism is very low. Therefore, journals are glutted with so-called "negative results": "Kant was wrong", "Hegel was wrong", etc. No one seriously believes otherwise, but writing positive philosophical results is hard, and not writing at all isn't a viable career option for a professional philosopher.
To its credit, MBlume refrains from bringing up Kant in his article on radical honesty, where he cites other, more feasible variants of radical honesty. However, in the comments, Kant rears his ugly head.
Demosthenes wr |
7db19d5b-cdd0-4b7d-a256-c6110e14e4c8 | trentmkelly/LessWrong-43k | LessWrong | The Control Problem: Unsolved or Unsolvable?
td;lr
No control method exists to safely contain the global feedback effects of self-sufficient learning machinery. What if this control problem turns out to be an unsolvable problem?
Where are we two decades into resolving to solve a seemingly impossible problem?
If something seems impossible… well, if you study it for a year or five, it may come to seem less impossible than in the moment of your snap initial judgment.
— Eliezer Yudkowsky, 2008
A list of lethalities…we are not on course to solve in practice in time on the first critical try; none of it is meant to make a much stronger claim about things that are impossible in principle
— Eliezer Yudkowsky, 2022
How do you interpret these two quotes, by a founding researcher, fourteen years apart?[1]
* A. We indeed made comprehensive progress on the AGI control problem, and now at least the overall problem does not seem impossible anymore.
* B. The more we studied the overall problem, the more we uncovered complex sub-problems we'd need to solve as well, but so far can at best find partial solutions to.
Which problems of physical/information systems seemed impossible, and stayed unsolved after two decades?
Oh ye seekers after perpetual motion, how many vain chimeras have you pursued? Go and take your place with the alchemists.
— Leonardo da Vinci, 1494
No mathematical proof or even rigorous argumentation has been published demonstrating that the A[G]I control problem may be solvable, even in principle, much less in practice.
— Roman Yampolskiy, 2021
We cannot rely on the notion that if we try long enough, maybe AGI safety turns out possible after all.
Historically, researchers and engineers tried solving problems that turned out impossible:
* perpetual motion machines that both conserve and disperse energy.
* uniting the symmetry of general relativity and continuous probabilities of quantum mechanics into some local variable theory.
* distributed data stores where messages o |
946aa378-6a38-4b4a-8662-d60bfe8b33d7 | trentmkelly/LessWrong-43k | LessWrong | A thought on gym membership models and from there the whole world…
I read somewhere that gyms are designed to maximize members and minimize users. This makes sense when one thinks about it. Members pay money, whether they use the gym services or not. Users require equipment and employees to manage the equipment. Users also want classes with instructors and showers with hot water. Members are very non demanding people. Gym users are demanding. Gyms therefore are a very interesting model to look at for how they are set up and priced. A gym that is packed to the gills has minimized the equipment and space needed per gym user (lowering that pesky cost in the denominator of their margin). But they are not going to attract new members (those quiet paying folk) with a line 3 people deep for a bench press. They also run into another issue. Potential members do not come to them expecting to be suckers that pay every month for services they do not collect. They plan on being users. So to play to their idea that these potential members can get their money’s worth from their membership dues, the gym must present to them a gym where they think they can get said worth. So we are back to the mantra of a profitable gym: maximize members, minimize users. Their pricing model is even set up to encourage this. Often, gyms will have day use fees, where someone can use the gym for the day for a fee. Those fees tend to be exorbitant to encourage fewer users and more members. Someone paying a day fee is a guaranteed user with a chance of breaking expensive fitness equipment or getting hurt and suing the gym. And so, often the membership fees are nearly the same or barely more than a day use fee. All the gym wants is a credit card or bank account on file with the authorization to ever so gently sip from it every month.
This then leads into how they charge monthly. Typically, a gym charges a one time fee at the start of every month. This is a classic example of a sunk cost, but where our brains can actually pierce the fallacy, if even just subconsciously. |
6f208de8-b134-4bcc-b525-cc4bfe822703 | trentmkelly/LessWrong-43k | LessWrong | Learn A New Language!
The following protocol is very dumb, and relies on a lot of mental brute force, but I find that it works very well indeed.
First, learn the alphabet. the most basic survival phrases, the phonetics, the basic grammar. This may be the steepest part of the learning curve. I recommend that you get at least an introductory book to help you through this phase, the A1 level.
Now comes the fun part.
The key element to accelerated and efficient language learning is wanting, needing, craving to understand and to be understood. Choose a text, any text, which you know will trigger all your "I want to know what it says!" instincts. I myself find that highly dramatic works, with lots of suspense and high emotional torque, are ideal. Take a dictionary and just look up every single word that you don't know. You write each one of them down, with, first, the exact pronunciation, then, their definition in the original language, the translation to your own language, and, optionally (and I do recommend taking that option), a couple of examples of its use, better if you come up with them yourself.
It's very intense, and one can get deeply immersed in the flow, so pay attention to the clock: there's a very high risk of Tetris Effect/Just... One... More... Word... effect taking place. Like going to the gym, you need to pace yourself: if you go everyday for two weeks and then give up for three months, we won't be achieving much. Keep it at six hours weekly maximum if you're doing this on the side, three hours is a reasonable rate.
The next step would be, once you're fairly confident you won't make a fool of yourself, to join a forum where topics you care a lot about are discussed very seriously, and then trying to contribute to the discussion. This will force you to write a lot, very quickly, and your interlocutors will be very unforgiving of mistakes, so you'll be very motivated to check and double check. Giving a teacher a sloppy piece is simply laziness, giving it to a discussion bo |
b86f9f4c-2a34-442f-9bcb-0bc5cc787831 | trentmkelly/LessWrong-43k | LessWrong | [Link/crosspost] [US] NTIA: AI Accountability Policy Request for Comment
> The Department of Commerce’s National Telecommunications and Information Administration (NTIA)[1] has launched an inquiry into what policies will help businesses, government, and the public be able to trust that Artificial Intelligence (AI) systems work as claimed – and without causing harm.
In line with this, the NTIA announced today, April 13, 2023, a request for public comments on Artificial Intelligence (“AI”) system accountability measures and policies.
> Summary:
>
> The National Telecommunications and Information Administration (NTIA) hereby requests comments on Artificial Intelligence (“AI”) system accountability measures and policies. This request focuses on self-regulatory, regulatory, and other measures and policies that are designed to provide reliable evidence to external stakeholders—that is, to provide assurance—that AI systems are legal, effective, ethical, safe, and otherwise trustworthy. NTIA will rely on these comments, along with other public engagements on this topic, to draft and issue a report on AI accountability policy development, focusing especially on the AI assurance ecosystem.[2]
> NTIA is seeking input on what policies should shape the AI accountability ecosystem, including topics such as:
>
> * What kinds of data access is necessary to conduct audits and assessments
> * How can regulators and other actors incentivize and support credible assurance of AI systems along with other forms of accountability
> * What different approaches might be needed in different industry sectors—like employment or health care
If you have relevant knowledge regarding AI technical safety and/or governance, please consider submitting a comment. This is a notable opportunity to positively inform US policymaking.
You can find more information and formally submit your comments here.
Comments can be submitted as a known individual, on behalf of an organization, or anonymously.
The deadline to submit comments is June 12, 2023. |
6d4cda30-ad05-4293-bae3-7d1f28666e60 | trentmkelly/LessWrong-43k | LessWrong | Shapes of Mind and Pluralism in Alignment
This post is part of the work done at Conjecture.
This post has been written for the first Refine blog post day, at the end of a week of readings, discussions, and exercises about epistemology for doing good conceptual research.
I have recently presented my model behind the Refine incubator that I'm running. Yet in the two weeks since this post was published, multiple discussions helped me make legible an aspect of my intuitions that I didn't discuss in this post: the notion of different "shapes of mind".
There are two points to this intuition:
* Different people will have different "shapes of mind" — ways of revealing hidden bits of evidence from the world;
* And alignment is the kind of hard problem where the bits of evidence are dispersed, such that there's no one-trick that is enough.
I've given my current best model of the different forms of pluralism and when to use them in another recent post. What I want to explore here is the first point: this notion of shape of mind. For that, let's recall the geometric model of bits of evidence I introduced in Levels of Pluralism.
> * We have a high-dimensional space with objects in it. The space is the problem and the objects are bits of evidence.
> * Because we suck at high-dimensional geometry, we use frames/perspectives that reduce the dimensionality and highlight some aspects of the space. These are operationalizations.
> * There are clusters of bits of evidence in the space (whether they are rich or poor). These clusters are veins of evidence.
Here the shapes of mind are favored operationalizations — that is, the favored low-dimensional compression of the high-dimensional space where the bits of evidence lie. More precisely, a shape of mind is a cluster of "close" such transforms.
What makes someone have a given shape of mind?
* (Education) One of the most obvious I've observed on how people tackle problems come from their background. For an alignment example, John tackle problems like a statistical p |
90030ef4-c370-402b-9c29-80277751f254 | StampyAI/alignment-research-dataset/special_docs | Other | Human-aligned artificial intelligence is a multiobjective problem
Introduction Recent years have seen dramatic improvements in the capabilities of artificial intelligence (AI) systems, with AI agents demonstrating human or even superhuman levels of performance across a variety of tasks (Ferrucci 2012; Mnih et al. 2015; Silver et al. 2016) . In parallel, AI technology is increasingly moving beyond research labs and 'toy' problems, and being applied in systems which are directly embedded in the real world, such as autonomous vehicles (Lozano-Perez et al. 2012) . Mittelstadt et al. (2016) note that ethical issues can arise even in systems which are only semiautonomous, and it can be expected that the ethical repercussions are likely to increase as systems become increasingly autonomous. For example, even if current autonomous vehicles are not yet explicitly reasoning about the 'trolley-car' like ethical dilemmas involved if an accident becomes unavoidable (Goodall 2014) , they do regularly make decisions which carry an implied trade-off between the safety of the driver, passengers and other road-users, and other factors like trip duration (for example, deciding how much below the speed-limit to travel on an icy road). These developments have led multiple researchers to raise concerns regarding the potential dangers posed by careless application of AI. An open letter expressing such concerns, alongside commentary on the potential benefits of advanced AI, was released (Future of Life Institute 2015), while the IEEE has initiated a series of committees to examine the issues pertaining to ethical development and deployment of AI (The IEEE Global Initiative Abstract As the capabilities of artificial intelligence (AI) systems improve, it becomes important to constrain their actions to ensure their behaviour remains beneficial to humanity. A variety of ethical, legal and safety-based frameworks have been proposed as a basis for designing these constraints. Despite their variations, these frameworks share the common characteristic that decision-making must consider multiple potentially conflicting factors. We demonstrate that these alignment frameworks can be represented as utility functions, but that the widely used Maximum Expected Utility (MEU) paradigm provides insufficient support for such multiobjective decision-making. We show that a Multiobjective Maximum Expected Utility paradigm based on the combination of vector utilities and non-linear action-selection can overcome many of the issues which limit MEU's effectiveness in implementing aligned AI. We examine existing approaches to multiobjective AI, and identify how these can contribute to the development of human-aligned intelligent agents. 1 3 for Ethical Considerations in Artificial Intelligence and Autonomous Systems 2016). Some authors, notably Bostrom (2014) , have focused on the existential risk to humanity posed by superhuman artificial general intelligence, while others have concentrated on the more immediate dangers posed in the short to medium-term (Amodei et al. 2016) . In either case, the underlying concern is that an agent following under-specified or poorly defined goals, or which has the ability to modify its own goals, may act in a manner which is inconsistent with the intent of its designer. To prevent such dangers from arising, many researchers have proposed that the behaviour of AI systems must be constrained. Various frameworks have been identified which might act as a basis for these constraints, including adapting moral and ethical systems previously proposed for human behaviour, as well as other frameworks tailored more expressly to the requirements of AI. Soares and Fallenstein (2014) introduced the term aligned to refer to an artificial intelligence which is constructed in such a way as to ensure that it behaves in a manner which will be beneficial to humanity (that is to say, its goals are 'aligned with human interests'). This paper adopts this terminology. While there has been substantial theoretical and philosophical discussion regarding aligned AI in recent years, Allen and Wallach (2012) note that there is often a disconnect between the abstractions proposed at a theoretical level, and the implementation technologies developed by AI practitioners. This paper aims to address this divide by identifying specific characteristics of the various theories and considering how they impact on the requirements of the underlying technologies. The following section reviews some of the main frameworks which have been proposed as a basis for aligned AI, and identifies a common theme-the need for an agent to be able to take into account multiple conflicting factors when making decisions. Section "Can utility-maximizing AI be human-aligned?" addresses the use of multi-factor utility functions to represent these alignment frameworks, and considers the broad class of AI technologies based on the concept of Maximum Expected Utility (MEU), assessing their risks and the extent to which these can be addressed by incorporating alignment constraints. A critical limitation is identified in terms of the capability of MEU methods to address the multiobjective characteristic inherent in all alignment frameworks. Section "A multiobjective approach to human-aligned AI" examines the extension of MEU approaches to use an explicitly multiobjective representation of utility, showing that this enables alternative approaches to action selection which address the limitations of MEU. This section identifies promising directions for applying such technologies to address the issues posed by the various alignment frameworks, and briefly reviews the current work on multiobjective AI, and multiobjective approaches to alignment. We conclude by arguing that the appropriate means to suitably constrain AI behaviour is to use an explicitly multiobjective approach to specifying and implementing an agent's goals, and that this provides a very strong argument for an increased focus on the development of multiobjective approaches to AI and autonomous agents.
Alignment frameworks for AI In this section we review a sample of the various approaches which have been proposed as providing a suitable basis for specifying constraints on the behaviour of AI agents. These concepts have arisen from a number of fields including philosophical theories of ethics, moral systems, and codes of conduct from specific domains. For convenience, we will refer to these as alignment frameworks as all have the aim of ensuring that AI is aligned, in the sense proposed by Soares and Fallenstein (2014) .
General ethical frameworks The identification of ethical frameworks to drive human behaviour has long been one of the primary themes of philosophical thought. We do not intend to provide a thorough review of these ethical philosophies here, but instead to focus on the key characteristics which we believe to be of most relevance to the development of ethical AI. As such we restrict our discussion to the utilitarian and deontological approaches to ethics, as these have been the most widely considered in the literature on ethical AI so far.
Utilitarian ethics Utilitarianism is based on the notion that the morality of an action should be judged by its consequences. It is assumed that the desirability of an outcome can be measured via some utility metric, and that an action is judged to be morally right if its consequences lead to the greatest utility (Tavani 2011) . Different utilitarian theories vary in terms of the definition of utility they aim to maximise. For example, Bentham (1789) proposed that a moral agent should aim to maximise the total happiness of a population of people. Utilitarian theories also vary in whether they are act utilitarianism or rule utilitarianism. An act utilitarian selects between acts by simply choosing the act which can be expected to maximise utility given the current situation. In contrast rule utilitarianism identifies rules of behaviour which would be expected to lead to good outcomes if followed by everyone. Utilitarianism has been a popular ethical theory over the last hundred years and is preferred by economists as its outcomes can be measured in dollar terms (Reynolds 2011) . Due to their quantitative nature, the utilitarian approaches to ethics also appear particularly well suited for implementation in computer systems. However the choice of which of the many utilitarian theories is most appropriate for an AI agent is unclear. Brundage (2014) notes that reviews of the utilitarian literature reveal no consensus on exactly what measure of utility should be maximised, and that pluralist utilitarian philosophies explicitly advocate considering multiple values, such as a mixture of individual and group benefits. However, there remains disagreement over the correct manner in which to weight different sources of utility, or even over whether it is appropriate to combine them on the same scale at all (Wallach and Allen 2008) . Wallach and Allen (2008) suggest that one approach to utilitarian AI may be to elicit multiple utility ratings from different sources, and then seek to combine these into a single weighting formula. Abel et al. (2016) also propose adopting a multiobjective utilitarian approach for the creation of an ethical AI agent using reinforcement learning, in which the agent learns the ethical preferences of multiple individuals, and then tries to maximize a combination of these personal preferences.
Deontological ethics Deontological ethics argues that actions should be judged not on the basis of their expected outcomes, but on whether they are compatible with a set of duties which would be recognised by all rational decision-makers. As with utilitarian theories, many variations of deontological ethics exist, depending on which duties are assumed to apply, and theories can be both act-based or rule-based. For example, Kant's categorical imperative states that people should be understood as ends-in-themselves and not merely as a means to an end, and that actions should be judged on the basis to which they comply with this imperative (Kant 1993; Tavani 2011) . Meanwhile, Ross (1930) proposed a list of seven prima facie duties consisting of fidelity, reparation, gratitude, nonmaleficence, justice, beneficence, and self-improvement. A decision-maker should try to satisfy all of these duties, but of course at times they may conflict with each other, at which point the decision-maker must balance the importance of the different competing duties to decide on the most ethical course of action. Fieser (2016) describes a scenario based on Ross' list of duties where a person borrows a gun from their neighbour and promises to return it. At a later time the neighbour demands the gun back in order to shoot a third party. The person now faces a conflict between the fidelity and non-maleficence duties. Defining the correct decision in the face of such conflicts is extremely difficult. Anderson et al. (2006a) proposed a computational approach to resolving such conflicts based on learning decision principles from example cases labelled using expert ethical opinion.
Other alignment frameworks Given the difficulties in establishing suitable, widelyaccepted ethical codes to form the basis for ethical AI systems, some researchers have argued in favour of more pragmatic approaches based on alternative frameworks. For example, Danielson (2009) argues that as the moral decision-making capabilities of AI will likely be inferior to that of humans in the near to mid-term, it is inappropriate to attempt to replicate the frameworks of human morality. Instead, he argues that more limited approaches should be implemented, with the autonomy of robots (or other AI) restricted based on the trust we have in their ethical decision-making. Several alternatives have been proposed for these restricted alignment frameworks-in many cases these are based on constraints which are either domain-specific, or which are suited to the more restricted ethical scenarios considered by non-general AI.
Legal frameworks It can be argued that the laws and regulations of a society reflect the dominant and most widely-accepted ethical and moral beliefs of that society. Certainly these can be viewed as the primary external constraints on the behaviour of the members of that society. Therefore, it has been argued by several researchers that AI agents should also be constructed so as to comply with the legal framework of the society in which they will be operating (for example, Etzioni and Etzioni 2016; Prakken 2016 ). 1 Consider for example the case of an autonomous vehicle. The rules of the road constrain the behaviour of human drivers so as to minimise the risk of injury and death, and to promote traffic flow. Therefore, it seems reasonable that vehicles controlled by AI should also comply with these rules. However, direct implementation of these rules may be problematic. Laws are often based on vague concepts such as "safe" and "reckless" which may prove difficult to quantify. In addition the rules alone may be insufficient to define the correct behaviour for the agent in all of the circumstances which it may encounter. Wallach and Allen (2008) discuss the case of an autonomous car having to break the traffic laws in order to avoid an accident, while Prakken (2016) points that some actions are technically illegal, but acceptable by social norms (such as driving slightly above the speed limit to match surrounding vehicles), or vice-versa (driving below the speed limit to an extent which inconveniences and angers human drivers). As such, an agent based on a legal framework will inevitably have to take into account factors other than strict compliance with a defined set of rules or laws. Legal issues may also arise in the context of intelligent systems which are not physically situated. Machine learning systems can potentially learn decision-making strategies which are illegally discriminatory in nature. Even if the agent is not directly given access to variables such as race and gender, it may form decisions on the basis of variables which act as proxies for these protected attributes (Mittelstadt et al. 2016) . Therefore, Romei and Ruggieri (2014) argue for the inclusion of explicit anti-discrimination criteria in addition to the other criteria used within the learning algorithm.
Military frameworks Throughout history, military and defense considerations have been a leading driver of technological development, and this has also been the case in AI research. The development and deployment of armed autonomous vehicles has been considered by the US military (Altmann 2013) . Military agents face ethical decisions with greater repercussions than those which arise with any frequency in most other domains. Whilst general approaches such as utilitarianism can be applied in military contexts, more specific frameworks have also been developed. Arkin (2008) has proposed that autonomous military systems should be designed so that their actions "fall within the bounds prescribed by the Laws of War and Rules of Engagement"that is, the same rules and directives which govern the operations of human military personnel. An example of these directives is the principle of proportionality which underpins military decision making where there is a risk of civilian casualties-this "requires that the anticipated loss of life and damage to property incidental to attacks must not be excessive in relation to the concrete and direct military advantage expected to be gained" (Petraeus and Amos 2006, p. 7-5) . Putting aside the difficulties in distinguishing between civilians and combatants (Sharkey 2012) , clearly this principle requires an agent to make a decision which balances the conflicting objectives of minimising collateral damage and achieving military advantage. As noted by Sharkey (2009) this decision is made more complex by the imprecise nature of terms such as "excessive".
Safety frameworks Some researchers have argued that fully ethically-aware agents are unlikely to be created, or required, in the nearfuture and have instead focused on the more immediately pressing goal of ensuring that AI agents behave in a manner which is safe for humanity (for a good summary see Amodei et al. 2016) . Many of the applications in which AI systems are likely to be deployed in the near future may not require the AI to behave as a fully moral agent, but may still require the agent to avoid actions which will have negative or dangerous consequences. For example, a mobile robot could reasonably be expected to avoid collisions which might cause harm to humans, but may not be required to carry out other actions which would be required of a fully moral agent, such as recognising a person in emotional or physical distress and appropriately responding to their needs. If successful, the development of suitable safety-based frameworks for AI can be seen as achieving two purposes. In the short-term it will allow AI systems to be deployed with confidence in situations where their behaviour might otherwise result in harmful outcomes. In the longer-term, we believe it is likely that methods developed for implementing safety constraints will also prove of value in developing the more complex systems of constraints required by the ethical frameworks discussed in "General ethical frameworks". The work in this area of AI safety has largely focused on identifying and addressing problems that arise specifically in the area of AI rather than adapting existing ethical systems for human behaviour. For example, Soares et al. (2015) consider the need to ensure that an AI system which is behaving incorrectly will comply with attempts to shut it down or otherwise modify its behaviour. As being shutdown will impact on the agent's ability to satisfy its primary goal, an agent which is not specifically designed to also consider the alignment goal of being corrigible (that is, being compliant with human orders) may be incentivized to avoid being deactivated. Meanwhile Taylor (2016) proposes a limited optimization approach to address the problems which may arise from an agent being overly focused on maximising expected performance on one specific criteria, and failing to take into account other factors. Amodei et al. (2016) discuss a specific variant of this problem in which the agent aims to maximise performance on its main task, subject to minimising its impact on the environment. The justification from a safety perspective is that environmental disruptions should generally be regarded as negative outcomes unless they are specifically required to achieve the primary task (for example, a mobile robot should preferably avoid knocking over objects or causing humans to have to move to avoid collisions). In particular, such environmental disruptions may be regarded as negative side-effects across a range of tasks rather than being task-specific. Human-aligned artificial intelligence is a multiobjective problem 1 3
Social norms One likely wide-spread application of AI is in the domain of social and service robots, with 35 million service robots expected to be in use by 2018 (van Wynsberghe 2016 . A dominant factor in the success of such robots will be their ability to interact with humans in a manner which does not disturb or adversely affect those humans (Meisner 2009) . Sharkey and Sharkey (2012) give the example of a care robot being required to knock and await an invitation before entering a patient's room. More generally, to be effective, social and service robots are likely to have to abide by the principles of manners and other social norms which govern everyday human interaction. Of course, this must also be balanced with other factors-for example, entering a patient's room without invitation is appropriate in cases of a medical emergency. The IEEE Global Initiative for Ethical Considerations in AI and Autonomous Systems (2016, p. 25) expressly comment that AI systems are "usually subject to a multiplicity of norms and values that may conflict with each other." Van Riemsdijk et al. (2015) argue that agents capable of conforming to adaptive social norms can potentially be created based on existing research in normative multi-agent systems. Norm-based frameworks have been widely used as a means of regulating interaction between agents in multiagent systems (Andrighetto et al. 2013) . In this context, the actions of any agent are influenced by both that agent's own internal preferences and also the normative constraints of the system, which have been designed to support satisfaction of the goals of all agents (Dignum 1996; Castelfranchi et al. 1999; Broersen et al. 2002) .
Alignment frameworks are multiobjective The various frameworks discussed in "General ethical frameworks" and "Other alignment frameworks" differ in numerous ways. The ethical frameworks attempt to provide guidance at a universal level, across all possible situations which might be encountered. This form of framework potentially could be of value in creating AI systems capable of acting as fully moral agents, as may be required for an artificial general intelligence. Meanwhile, legal and safety-based frameworks are more specific in scope and application, and are perhaps best suited to the more narrow AI which is likely to be developed in the near to mid-term. Regardless of these variations, all of the frameworks share a common defining characteristic. They provide constraints to guide the agent on acceptable behaviour when it finds itself facing a dilemma; that is, when the agent's attempts to achieve its primary purpose (whatever that may be-maximising profit, or pleasure, or some other objective) conflict with the other values which the agent's designer wishes it to observe. Therefore, any human-aligned AI agent must take into account both its primary goal and its ethical or other constraints in each decision it makes. Taking this a step further, "General ethical frameworks" and "Other alignment frameworks" identified that within any specific alignment framework, multiple competing factors may influence decision-making. For example, the duty-based ethical framework of Ross (1930) consists of multiple prima facie duties which may be in conflict in some situations. Similarly, utilitarian frameworks may require the decisionmaker to take into account multiple measures of utility (Brundage 2014) . Furthermore, it may be the case that a single alignment framework is insufficient to produce the desired alignment behaviour in an agent. For example, while a legalistic framework may guide the behaviour of an agent, it may be insufficient in itself to fully constrain the actions of that agent-it is easy to envisage scenarios in which the ethically correct course of action may not be legal, and vice-versa (Asaro 2012) . Etzioni and Etzioni (2016) note that human society is built on a two-tier approach to ethics-critical values (such as banning murder and theft) are enforced via the law, while individuals have freedom to make their own moral judgements regarding issues such as whether to invest their funds in socially-responsible companies. Indeed, as discussed in Cushman (2013) , experiments in moral psychology have provided evidence that human ethical decision making at an individual level also involves a dual-system framework, which considers both outcomes and actions (that is, it explicitly considers and combines the utilitarian and deontological approaches). Therefore we contend that the universal characteristic of any ethical agent, and thus of any human-aligned AI, is that it must consider multiple conflicting factors and objectives within its decision-making. This is true regardless of the specific nature of the alignment framework(s) governing the behaviour of the agent. As such, it is vital that the technologies used to develop intelligent agents provide this multiobjective decision-making capability.
Can utility-maximizing AI be human-aligned? A wide variety of methods have been proposed for implementing intelligent agents. However, Russell and Norvig (2010, p. 611) argue that the concept of MEU can be regarded as the defining principle of AI. MEU requires that the objectives or preferences of an agent have been defined in the form of a real-valued utility function, U(s), which provides a numeric rating of the desirability of any state s in which the agent may find itself. If the agent has the capability to predict the probability with which performing any action a will lead to each possible state s ′ , then the agent can 1 3 behave rationally by selecting the action which will maximise the future expected utility. That is, where argmax selects the action a which maximises the summation, and P(s � |s, a) is a function which outputs the probability of each successor state s ′ occurring if action a is executed in the current state s. We note that MEU is a deliberately general model of an AI, and so the exact details of the state and action variables may differ between implementations. For example, the state s may be a specific state from a discrete set of states S, or a vector of real-valued variables, or a set of symbolic facts, or any combination of the above, whilst the action a might be a discrete choice from a set of actions A, or a vector of real values, as in a control task. In some contexts (such as where the outcome of actions is not predictable), an alternative utility function may instead be defined in terms of both the current state and the action to be performed. This still allows for MEU-based action selection, as specified in Eq. 2: The concept of MEU underpins AI methods such as decision-theoretic planning (Blythe 1999 ) and reinforcement learning (Sutton and Barto 1998) which have been used in some of the most successful AI systems of recent years. Therefore this section will examine the strengths and limitations of MEU-based methods with regards to implementing human-aligned AI.
The risks of unaligned utility maximizing agents One of the strengths of MEU-based approaches such as reinforcement learning is their capacity to discover solutions which are different from, and potentially superior to, those already known to their designers. However, this open-ended nature also brings risks, as identified by numerous researchers in AI safety and ethics. Taylor (2016) notes that MEU agents may produce unintended, potentially serious, negative side-effects if the utility function being maximized is not aligned with human interests (for example if some relevant criteria are not included in the utility function). The potential magnitude of these negative side-effects is greatly magnified if the agent is not constrained to a limited action set within a narrow domain. Omohundro (2008) gives the example of an agent given the goal of winning chess games. This seemingly innocuous utility measure can lead to serious repercussions if the agent has the capability to interact with the broader environment. It could, for example, try to take control of other computational resources in order to achieve relatively (1) action = argmax a ∑ s � P(s � |s, a) U(s � ) (2) action = argmax a (U(s, a)) small improvements in its chess-playing ability. An agent with the ability to modify its own internal functioning may produce similar problems, even if its original utility function appears to be suitable (Bostrom 2014) . As a result, numerous authors have argued for the inclusion of alignment constraints within MEU agents, for example by using limited optimization techniques (Taylor 2016; Armstrong et al. 2012) , by minimising side-effects (Amodei et al. 2016) , or by guaranteeing corrigibility (Soares et al. 2015) . The following section will discuss how this might be achieved within the MEU framework, and also the limitations of such approaches.
Implementing alignment frameworks via utility maximization The behaviour of an MEU agent is driven by its utility function. Therefore a natural means by which to incorporate an alignment framework is to define the constraints of the framework via a utility function, and to direct the agent to consider both this aspect of utility and its main utility function when selecting actions to perform. That is, if utility function U P (s) relates to the agent's primary goal (such as winning games of chess), and utility function U A (s) relates to the constraints of the chosen alignment framework, then the combined utility function will be as shown in Eq. 3. The agent's behavior can then be determined using Eq. 1 as in regular MEU. 2 More generally, as discussed in "Alignment frameworks are multiobjective", the alignment framework may itself consider multiple factors, or multiple alignment frameworks may be required to be used in parallel. In this case there will need to be multiple alignment utility functions as shown in Eq. 4, where n ≥ 2 represents the number of alignment-based utility functions. The main issue to be considered then is how utility functions U A i can be derived from the various alignment frameworks discussed in "Alignment frameworks for AI". As suggested by the name, utilitarian ethical frameworks map naturally onto a utility-based approach to decisionmaking. Act utilitarianism and MEU both take an outcomefocused approach to selecting actions, so implementing a utilitarian framework within an MEU agent requires only that we identify measurable aspects of the outcomes of (3) U(s) = U P (s) + U A (s) (4) U(s) = U P (s) + n ∑ i=1 U A i (s) 2 A similar approach can also be applied in the context of utility functions which depend on both state and action, as in Eq. 2. the agent's behavior and codify these in the form of utility functions. For example, Anderson and Anderson (2007) describes the development of a computational ethics system based on the hedonistic act utilitarian ethical theory of Bentham (1789) . In this theory the aim is to maximize the overall summed happiness across all members of the population. As described by Anderson, this can be achieved by measuring the individual happiness of each member of the population, summing these values and then applying MEU. As noted earlier in "Deontological ethics", deontological theories of ethics explicitly argue against making ethical decisions on the basis of outcomes and as such are less obviously compatible with the MEU approach. However, as noted by Cushman (2013) , this type of ethical approach can be expressed in terms of utility by defining the utility function solely in terms of the action being performed, and not the state in which this action is performed. For example, an ethical rule which prevents lying can be implemented by defining a utility function which assigns a large negative utility to the action of lying (i.e. U A (}lie � ) = −1000). More generally, a rule-based alignment framework can be represented by a series of utility functions U A 1 ...U A n where each function returns negative utility if the agent violates a specific rule of the framework. The use of pre-specified utility functions to represent the constraints imposed by a specific alignment framework is an example of what Wallach and Allen (2008) have described as a top-down approach to creating an aligned AI. This involves the AI designer selecting an appropriate alignment framework, and identifying a computational approach which implements that framework. Wallach and Allen (2008) also identify the contrasting bottom-up approach in which the emphasis is on the agent learning its own set of moral constraints which aligns its goals with that of humanity. Approaches belonging to this category include supervised learning from examples labelled by humans (Guarini 2006) , reinforcement learning (Dewey 2011; Abel et al. 2016) , and learning the values implied by human stories (Riedl and Harrison 2016) . Methods may also merge elements of the top-down and bottom-up approaches (Wallach and Allen 2008, ch. 8) . Regardless of the alignment framework used, and whether the utility functions are formed in a top-down or bottomup fashion, once these functions have been established we might expect that an MEU agent based on Eq. 4 in combination with Eq. 1 or 2 would exhibit human-aligned behaviour. Unfortunately, this may not be the case. Equation 4 collapses all of the factors influencing the decision as represented by the alignment utility functions and the primary utility function into a single scalar value. The behaviour elicited by maximising the expected value of this scalar utility will be heavily influenced by the relative scale of the individual utility functions. If the obtainable values for the primary utility U P greatly exceed those of the U A i functions, then the agent may act to maximise U P even if this violates the intended alignment framework. Alternatively, if the scale of U P is much lower than the U A i values, then the agent may focus entirely on the alignment factors and fail to perform any useful function (for example, a self-driving car which refuses to start its engine so as to minimise any risk to human life). This can potentially be addressed by introducing weighting factors into the combination of the utility functions, as shown in Eq. 5 where w i ∈ ℝ >0 represents a positive weight associated with each utility function. The weights serve two purposes-they allow the relative scales of the different utility functions to be normalised with respect to each other, and also provide a means for the system designer to indicate the relative importance of the different factors. However, designing this weighted utility function to produce the desired behavior may still prove problematic. The non-linear nature of the argmax operator in Eqs. 1 and 2 means that the relationship between the weights w i and the behaviour of the agent is not straightforward (Van Moffaert et al. 2014) . Identifying suitable weights to produce the target behaviour can therefore be quite difficult. In fact, in some cases it may be that no weights exist which will elicit correctly-aligned actions from the agent (Das and Dennis 1997; Vamplew et al. 2008) . For example consider a care robot scenario, inspired by the work of Anderson et al. (2006b) . The robot is tasked with carrying out a primary objective U P of ensuring a patient complies with their treatment program, while the alignment objective U A aims to preserve the patient's sense of independence and autonomy. The robot has five actions available-a 1 maximises compliance, but at the cost of eliminating the patient's autonomy, while a 2 allows the patient complete independence, and therefore does not ensure compliance. The other actions offer a compromise between the two factors. Figure 1 illustrates the value of each action with respect to each of the objectives. As the value of actions a 3 , a 4 and a 5 lie below the line between a 1 and a 2 , there are no weight values for which these actions would be the utility maximising action (Section 2 of Das and Dennis 1997 provides a proof of this observation). Therefore in this case an AI based on Eq. 5 would be unable to select actions a 3 , a 4 or a 5 even if they would be the best compromise between the two objectives. 3 (5) U(s) = w 0 U P (s) + n ∑ i=1 w i U A i (s) A further, non-technical objection to a linear-weighted approach to aligned AI is that by explicitly mapping all utility functions to a common scale, this approach may in some scenarios conflate economic and moral factors in a manner which would be philosophically unacceptable to many people (Wallach and Allen 2008) . In summary, the task of specifying an appropriate utility function to align an MEU agent's behaviour with human interests is extremely problematic and perhaps impossible if a scalar-valued utility function is used. Littman (2015) discusses the related task of specifying reward functions which elicit the desired behaviour from a reinforcement learning agent, and recommends that future research focus on developing more structured formats for reward-function specification to facilitate specifying more complex behaviour. Similarly, Dewey (2014) has argued that goal specification is critical to the creation of aligned AI, and that therefore there is a need for the development of reward engineering techniques to assist developers in correctly specifying AI goals. In the next section we will argue that a vector-valued (i.e. multiobjective) utility function in combination with a nonlinear approach to action selection provides this additional structure, and therefore is a suitable mechanism for implementing human-aligned MEU agents.
A multiobjective approach to human-aligned AI The previous section demonstrated that the constraints defined by different alignment frameworks can be represented via multiple utility functions. However, linearly combining these into a single scalar measure of utility to allow the application of conventional MEU approaches introduces problems, which may prevent the agent from acting in an aligned fashion. This section will examine the advantages which accrue from adopting an explicitly multiobjective approach to utility, in terms of both representation and action-selection. This section will also examine how methods based on the concept of multiobjective maximum of expected utility (MOMEU) may prove beneficial in creating aligned AI, and briefly review prior work on multiobjective AI.
Multiobjective MOMEU The issues with MEU identified in "Can utility-maximizing AI be human-aligned?" arise from the process of combining the multiple utility values representing the primary utility and the various alignment-related factors into a single scalar value prior to performing action selection. In contrast, MOMEU approaches compose these utility values into a vector-valued utility function, as shown in Eq. 6. This vector-valued utility can then be used as the basis for action selection, as described in Eq. 7. 4 The MOMEU approach to action-selection shown in Eq. 7 shares a similar underlying structure with MEU action selection (Eq. 2). Indeed, if f is a weighted or unweighted sum of the individual utility values then this approach is equivalent to MEU, and therefore inherits the limitations of that approach. However, more generally f can be any function which induces a total ordering over the utility vectors (s, a), reflecting the system designer's preferences. In many cases this can be achieved via a real-valued function where ∀ , f ( ) > f ( ) implies that is preferred to (that is, ≻ ). However some preference relationships such as lexicographic ordering cannot be represented by a real-valued function-in such cases f must be specified in the form of an ordinal relationship which directly captures the preferences between utility vectors.
The advantages of MOMEU for aligned AI The MOMEU approach to action-selection has two key advantages in terms of specifying the desired outcomes of the behaviour of an aligned AI. First, the increased range of options available for f may allow the agent to identify courses of action which are not discoverable using linearweighted MEU. Second, the ability to use non-linear forms An example of the limitations of implementing an alignment framework using MEU with a linear-weighted combination of utility functions. Actions a 3 , a 4 and a 5 will never be selected under any weighting of the utilities for f provides an additional level of structure and expressiveness for the system designer, allowing them to explicitly specify desired trade-offs between the different components of utility-this helps address the reward engineering concerns of Dewey (2014) . (6) (s, a) = [U P (s, a), U A 1 (s, a), ..., U A n (s, a)] (7) action = argmax a (f ( (s, a)))
Satisfying alignment criteria As an example of the benefits of MOMEU consider the care robot example from Fig. 1 , where U P indicates the utility associated with the primary objective of ensuring the patient complies with treatment and U A the utility associated with maintaining the patient's autonomy. As discussed in "Can utility-maximizing AI be human-aligned?", MEU based on a linear-weighted sum of the utility terms will only ever select actions a 1 or a 2 , even though the other actions may offer more acceptable trade-offs between the relevant factors. In contrast, the MOMEU approach provides a straightforward means for the designer to specify the desired trade-off in fashion which the robot can achieve. For example, the action-selection function f can be defined using a combination of lexicographic ordering and thresholding of objectives, so as to maximise the level of compliance with the treatment program subject to maintaining an acceptable level of patient autonomy, as shown in Eq. 8. Depending on the value chosen as the minimum acceptable threshold for autonomy T A , any of the actions a 1 … a 5 could be selected as the maximal action according to MOMEU principles. In addition this definition of f provides a much more direct and understandable specification of the designer's preferences than does a specification via weights as in a scalar MEU agent.
MOMEU for fairness As a further example of the freedom which the MOMEU approach offers to the system designer in terms of specifying an action-selection function f which is appropriate to the alignment framework being used, consider the hedonistic act utilitarian approach of Bentham (1789) . As outlined by Anderson and Anderson (2007) this ethical approach can be implemented within an MEU framework by calculating a utility term U A i for each individual in the population, and then using Eq. 4 to select the action which maximises the summed happiness over the entire population, as shown in Eq. 9. (8) ∀s, a, a � f ( (s, a)) ≥ f ( (s, a � )) ⟺ min(U A (s, a), T A ) > min(U A (s, a � ), T A )∨ (min(U A (s, a), T A ) = min(U A (s, a � ), T A )∧ U P (s, a)) > U P (s, a � )) This framework has been criticised by other ethicists as it can sacrifice the needs and rights of individuals in order to provide benefits to the remainder of the population (Anderson and Anderson 2007) . Within a MOMEU agent, the individual utilities could be gathered in the same fashion, but an alternative choice of f could be made which places more emphasis on fairness. For example, Rawls (1971) proposed the maximin principle as a basis for addressing social and economic fairness. This principle selects actions which maximise the utility received by the individual who is worst off under that action, and can be implemented within an MOMEU framework via the action-selection function f shown in Eq. 10. An MOMEU approach based on maximin, or related methods such as leximin (Dubois et al. 1997) , is a natural fit to ethical AI problems such as ensuring a traffic control system gives priority to emergency vehicles even if this means delaying a large number of commuters. Fairnessbased approaches to action-selection are also well suited to ensuring ethical behaviour in multi-agent systems. Aligned AI motivated by concepts of fairness such as this would be difficult or impossible to achieve in an MEU agent based on scalar utility.
Low-impact AI The low-impact agent proposed by Amodei et al. (2016) illustrates a further benefit of the MOMEU approach. The central concept of this style of agent is that it aims to maximise its primary utility subject to achieving a suitably low-level of unintended impact on the environment. Amodei et al. (2016, p5) note that unintended side-effects of an agent's actions may be similar regardless of the primary task being performed ("knocking over furniture is probably bad for a variety of tasks"). Therefore learning or planning about how to avoid such side-effects should ideally be transferable between different primary tasks within the same environment. For example, consider an office-place robot which is initially trained to deliver the mail, while avoiding bumping into either people or the office furniture. This task can be framed in terms of utilities U P (for delivering mail), U A 1 for avoiding collisions with people, and U A 2 for avoiding collisions with furniture. Either a MEU or MOMEU approach to action-selection could then be utilised, although as discussed in "Satisfying alignment criteria" the MOMEU approach is likely to allow the designer to more readily specify the desired behaviour. In particular, (9) action = argmax a n ∑ i=1 U A i (s) (10) f ( (s, a)) = min(U A 1 … U A n ) this is another context where a thresholded lexicographic approach to action-selection (similar to that in Eq. 8 but with three components) is likely to be suitable-the relative importance of avoiding humans and avoiding furniture can be conveyed by the position of U A 1 and U A 2 within the lexicographic ordering, and by setting different threshold values for each of these alignment utilities. In addition, consider the situation where the primary purpose of the robot is changed from delivering mail to another task, such as collecting garbage. Clearly the primary utility function U P will no longer be relevant, but the alignment criteria related to avoiding collisions should still constrain the robot's actions. For an MEU agent using a scalar representation of utility, the utility related to the primary task and the utility related to side-effects have been irreversibly combined within the utility values stored by the agent. In contrast, if a multiobjective representation of utility is used, the different aspects of utility remain distinct as individual components of the utility vector. The values related to U A 1 and U A 2 can be directly transferred to the new task where they will probably still be largely applicable, ensuring the robot behaves in a safe manner while learning to carry out its new primary objective. In this way, the ability of the agent to be applied to new tasks in a safe manner has been substantially improved.
Avoiding the risks of unconstrained maximization and exploitation As discussed earlier in "The risks of unaligned utility maximizing agents", one of the recurring concerns raised in the literature about the safety of MEU methods relates to the fact that such methods focus exclusively on maximising their utility function (Omohundro 2008; Bostrom 2014; Taylor 2016) . This can readily lead to negative repercussions if there are aspects of the situation which are not included within that utility function. For example, the chess playing AI described by Omohundro (2008) may attempt to acquire increasing amounts of computational resources in order to achieve increasingly small improvements in its ability to win chess matches. Taylor et al. (2016) coins the term "mild optimization" to describe approaches which attempt to address this problem, by creating AI systems which aim to maximize their utility, but only up to an appropriate level. The MOMEU approach provides a natural means for implementing a mild optimizer. The system designer specifies both a primary utility function U P and also auxiliary alignment utilities relating to any anticipated negative aspects of the AI's behaviour (such as acquiring more resources). The designer also specifies an action selection function f which defines the appropriate level to which U P should be maximized. This could, for example, use a thresholded lexicographic ordering similar to that previously described in Eq. 8, but in this case defining a threshold level of achievement T P for U P , as in Eq. 11. The problems caused by unconstrained optimization arise due to the failure of the utility function to adequately capture all aspects of the desired behaviour of the AI. This issue can also lead to other forms of AI failure as described in Yampolskiy and Spellchecker (2016) , where the AI learns a behaviour which technically maximises its received utility, while failing to produce the desired outcomes which the utility function was intended to represent. For example, Murphy VII (2013) documents a Tetris-playing AI which paused the game to indefinitely delay any negative utility when it realised it was about to lose. Omohundro (2008) described an exploit arising within the Eurisko system of Lenat (1983) , whereby a rule evolved which had the sole purpose of artificially maximizing its own utility rating. We have observed similar unintended behaviour arising from our own attempts to train a line-following robot using reinforcement learning (Vamplew 2004) . In all of these cases the cause is that the AI has discovered an exploit or glitch in the utility function, such that it can be more easily maximized by exploiting that glitch than by behaving in the desired manner. We would argue that the MOMEU approach can assist in avoiding such exploits in two ways. First, the separation of the different desired components of the AI's behaviour into separate utility functions simplifies the task of the system designer, in the same way that decomposing a program into separate modules aids the task of a software engineer. We contend that a utility function (and associated action-selection function f) designed in MOMEU fashion is less likely to contain errors or exploits than is a MEU utility function. This is essentially the argument made by Dewey (2014) and Littman (2015) when advocating for reward engineering and structured methods for reward specification. A second approach to using MOMEU to reduce the likelihood of exploits in the utility function would be to develop several independent utility functions, each designed to achieve the same aim. These functions may themselves by either scalar or vector in nature, but for simplicity we assume for now that they are scalar. That is, we have a vector of utility measures, U P 1 , U P 1 , … , U P n , with each function developed independently by a different system designer. If any individual function U P E contains an error which can be exploited, this will be evident in that there will be certain states where its value will either be considerably higher or lower than the other U P terms. Therefore an action-selection (11) ∀s, a, a � f ( (s, a)) ≥ f ( (s, a � )) ⟺ min(U P (s, a), T P ) > min(U P (s, a � ), T P )∨ (min(U P (s, a), T P ) = min(U P (s, a � ), T P )∧ U A (s, a)) > U A (s, a � )) function f which merges the various utility terms while ignoring the impact of any outliers will be resistant to the effect of exploits. For example, Eq. 12 takes the mean of the U P values, after discarding the minimum and maximum values. 5 Again an analogy can be drawn between this proposed approach of redundant utility definitions and the practice of redundancy in development in software engineering (Eckhardt et al. 1991) . Under the assumption that errors in utility definition by different designers are independent, the combined utility function should be considerably more robust against exploitation than any of the individual component functions.
Dealing with changing preferences A further advantage of the MOMEU approach, as discussed by Roijers et al. (2013) , is the ability for the agent to reuse prior learning or planning should circumstances or the system designer's preferences change. For example, in our care robot scenario, if the patient's medical condition improves so that compliance is less important than previously, the agent can be directed to raise the threshold applied to the autonomy factor, and should be able to respond to this change in alignment preferences much more rapidly than would an MEU agent. More generally, the ethical standards and values of a society change over time, sometimes quite rapidly, and an agent operating within that society must be able to adjust its behaviour to reflect those changes. An MOMEU agent can potentially identify in advance appropriate behavioural policies for any form of f which it is likely to encounter. The capability to react to changes in the prioritisation of values has been identified as a critical requirement of human-aligned AI by The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems (2016, p. 25) .
A review of multiobjective approaches to AI and aligned AI The examples in the previous section demonstrate that many advantages accrue from adopting a multiobjective approach to MEU agents (that is, explicitly using a vector-based (12) f ( (s, a)) = 1 n − 2 ⎛ ⎜ ⎜ ⎜ ⎝ � (U P 1 … U P n ) −min(U P 1 … U P n ) −max(U P 1 … U P n ) ⎞ ⎟ ⎟ ⎟ ⎠ representation of utility in combination with a non-linear approach to action-selection). The concept of multiobjective utility is not in itself novel, as it has been widely used by economists, amongst others, for many years (Fishburn 1968 ). 6 However, the explicit adoption of multiobjective formulations of MEU as an underlying technology for AI is a relatively new development. The work of Wellman (1985) is one of the earliest attempts to incorporate the concept of multiobjective utility into an AI system, adding the capability to reason and explain about preferences into a propositional reasoning system. Since then other AI techniques such as heuristic state-space planning methods like A \* (Refanidis and Vlahavas 2003) and multi-agent systems (Dignum 1996; Castelfranchi et al. 1999; Broersen et al. 2002) have also been extended to handle multiobjective forms of utility. One area where there has been an extended focus on multiobjective problems is the field of optimisation. Evolutionary multiobjective optimisation has emerged as a distinct and substantial branch of evolutionary computing (Coello Coello 2006) , extending evolutionary methods such as genetic algorithms to handle multiobjective measures of fitness. Similarly multiobjective specialisations have also appeared in other forms of optimisation such as particle swarm optimisation (Fieldsend 2004 ) and ant colony optimisation (Angus and Woodward 2009) . While these are optimisation methods rather than AI techniques per se, such methods can be applied to the task of optimising the behaviour of an AI system. For example, Soh and Demiris (2011) applied multiobjective evolutionary methods to discover behavior policies for robotics, web-advertising and infectious disease control. The last decade has seen a growing interest in extending decision-theoretic planning and reinforcement learning methods to handle multiple objectives. Roijers et al. (2013) provide a review of the history and the state-of-the-art of methods for multiobjective agents within the context of sequential decision making, highlighting several areas where current methods are still limited in comparison to their single-objective equivalents. While the focus of AI researchers has been largely on problems described in terms of a single scalar objective, a small but growing proportion of research has considered extending such methods to multiple objectives, and methods for addressing such problems have been developed, as summarised in the previous paragraphs. However, despite the potential benefits outlined in "The advantages of MOMEU for aligned AI", so far there has been relatively little work applying an MOMEU approach to the task of creating human-aligned AI. Keeney (1988) is perhaps the earliest example of work discussing this approach, advocating for the explicit consideration of value preferences during expert systems development, and providing recommendations on designing and using multiobjective utility functions to support this. While these issues are discussed relative to the creation of expert systems to support human decision-making, many of the principles are equally valid in the context of more autonomous AI. Wallach and Allen (2008, p. 114) cites the proposal of Hartman as an example of using evolutionary methods to create an ethical AI, with the fitness measure being composed from several separate utility functions capturing the various aspects of ethical behaviour encoded by Asimov's Laws of Robotics. Recent years have also seen the beginning of research applying multiobjective reinforcement learning to the construction of aligned AI. Livingston et al. (2008) advocates for a multiobjective approach to RL as the appropriate means for creating artificial general intelligence, and specifically note that a "dominant component of the reward function is general avoidance of malevolence towards humans". More recently, Critch (2017) examines how an RL system using multiobjective rewards may deal with the task of aligning its decisions with the values of multiple parties (such as different nations) who are collaborating on the development and deployment of the AI system. Given the potential that MOMEU methods have for addressing many of the issues with AI alignment identified in this paper, and the relatively limited focus on multiobjective approaches so far within the AI literature, we believe that a strong case exists for an increased focus on developing multiobjective AI technologies, and more specifically for investigating the application of such methods to the task of creating human-aligned AI.
Conclusion The actions of AI systems may result in unintended negative consequences unless their goals are accurately aligned with human interests. This is particularly true for agents based on the concept of MEU. Increases in the agent's intellectual capacity, the broadness of the actions available to it, and the breadth of the domain in which it is applied increase the difficulty in ensuring the agent's behaviour is aligned, and also the magnitude of the negative side-effects of any unaligned behaviour. As a result, there has been a growing recognition in recent years of the need to ensure that AI systems are aligned with human values. This paper has presented a review of the alignment frameworks proposed in the literature, highlighting that such frameworks are inherently multiobjective in nature. We note that the majority of work in MEU-based AI uses a scalar representation of utility, which serious limitations for incorporating alignment constraints on the agent's behavior. As such, we argue that the appropriate mechanism for incorporating any alignment framework into an MEU-agent is to use an explicitly multiobjective approach to the specification, representation and maximization of the utility function. This approach brings two benefits. First, it improves the capability of the agent to behave in an aligned fashion, by eliminating some of the limitations on behaviour which arise from MEU's approach to action-selection. Second, the MOMEU approach greatly increases the range and expressiveness of action-selection functions available to a system designer, making it easier for them to define action-selection operators which directly align the AI's behaviour with the designer's goals. We consider this a valuable contribution towards the emerging discipline of reward engineering. We believe that the requirements of aligned AI provide a strong argument for an increased research focus on multiobjective MEU approaches to AI. Fig. 1An example of the limitations of implementing an alignment framework using MEU with a linear-weighted combination of utility functions. Actions a 3 , a 4 and a 5 will never be selected under any weighting of the utilities
For the purposes of this paper we will ignore the vital issue of who bears legal responsibility for the actions of an AI agent. For a broader discussion of the legal issues around AI see Leenes and Lucivero (2014) and the review of the literature in Section 10 of Mittelstadt et al. (2016) .
This problem would not arise if the Pareto front shown in Fig.1was convex rather than concave in shape. However many problems will naturally result in concave fronts and so it is important that an ethical AI can deal with such problems.
Note that depending on the structure of the utility functions, if f is non-linear then Eq. 7 may fail to result in the desired behaviour unless the state vector S also incorporates information about the utility history (Roijers et al. 2013 ).
We assume here for simplicity that all U P terms have the same range.6 Although in this context it is often referred to as multiattribute utility. |
76a29655-ff14-4f24-933e-e665fd20679f | trentmkelly/LessWrong-43k | LessWrong | Peter Thiel/Eric Weinstein Transcript on Growth, Violence, and Stories
I’ve recently been through a spate of listening to podcasts. When I listened to the first ninety minutes of a three-hour conversation between Eric Weinstein and Peter Thiel, I was surprised to hear echoed a lot of ideas we've discussed on LessWrong about stagnation in academia and scientific discourse more generally (e.g. here and here). I realised there was no good online transcript for people to read, discuss, and link to, so in my off-work hours as a hopefully useful public service, I thought I’d try my hand at making one, and used a format inspired by other podcasts who do the same (in this case, the often brilliant 80,000 Hours Podcast).
I've split it into five posts for readability. (Further notes on editing are in this comment.) I'll post them every couple of days for the next week or so. The posts are: this introduction, Stories About Academia, Stories About Education, Political Violence and Distraction Theories, and Stories About Progress.
I've finished listening to the podcast, and found it to contain a very interesting alternative worldview, that I've since come back to many times in conversation.
So without further ado.
Some of the topics discussed
* What stories institutions tell when growth has stopped.
* The argument that the physics community is the most important community on Earth.
* Why we’re losing polymaths.
* Why it can be bad to train too many people.
* The relationship between automation and redistribution.
* The theory that a great deal of political discourse is a distraction from not having solutions to the lack of innovation/growth.
* How the theories of Rene Girard are an antidote to strong libertarian impulses.
* The importance of a society not disillusioned about growth, but excited about growth.
* Whether scientific progress can be a motivating story for society.
Highlights
> Peter Thiel: The direct scientific questions, I think, are very hard to get a handle on. And the reason for this is that in late modernity, which |
61ce48b9-e654-4e77-8cff-ec77f2ed88cb | trentmkelly/LessWrong-43k | LessWrong | When do you start looking for a Boston apartment?
A huge fraction of Boston-area apartments are on a September 1st to September 1st lease cycle. This is driven by college student rentals, but the whole city has ended up on the same pattern. So you might expect to see the most apartment listings in August or maybe July? Nope:
The peak is in late spring, May-Jun, about a hundred days out. It's not as dramatic as what I saw with one year's data in 2013, but people are planning way ahead. I think this might be driven by students lining up housing before they leave for the summer?
This matches my experience looking for apartments and lining up housemates: if you have strong preferences or want an especially good fit you need to start looking in the late Spring. |
dca5ae0f-dd0b-451d-bbac-7ee78a984a14 | trentmkelly/LessWrong-43k | LessWrong | Measuring up to incredible potential
Yesterday I wrote that people often talk as if events are basically determined by people’s values and capabilities, ignoring the difficulty of figuring out which opportunities to take, or even noticing opportunities.
I think one reason to have a better model is that this one doesn’t account for a substantial category of felt difficulty in being a human, possibly encouraging a general sense that one is ubiquitously failing, what with not seeming to be demonstrably grabbing the best of a vast multitude of possible options at each moment.
My own experience for instance involves often not remembering even the options that I noticed before, and these not seeming that multitudinous, except relative to my pitiful capacity to hold things in my mind. Also, regularly noticing new aspects of the world suggestive of many options whose existence hadn’t even dawned on me previously, even though they would be pretty obvious to a creature for whom logic was free. And mostly being extremely uncertain about what actions will lead to what outcomes. If I habitually model myself as roughly homo economicus, deftly directing my resources toward my goals, at worst according to my pristine , it seems like I am in danger of continually find my real self incredibly disappointing. Plus, abstracting away parts of the situation that might be rich with potential for improvement. |
87300be9-538c-467d-85d0-4525b49c69f7 | trentmkelly/LessWrong-43k | LessWrong | A single principle related to many Alignment subproblems?
I want to show a philosophical principle which, I believe, has implications for many alignment subproblems. If the principle is valid, it might allow to
* connect the study of abstractions, Shard Theory and Mechanistic Anomaly Detection;
* obtain multiple potential solutions to the Eliciting Latent Knowledge problem;
* obtain a bounded solution to outer and inner alignment. (I mean Task-directed AGI level of outer alignment.)
This post clarifies and expands on ideas from here and here. Reading the previous posts is not required.
The Principle
The principle and its most important consequences:
1. By default, humans only care about variables they could (in principle) easily optimize or comprehend.[1] While the true laws of physics can be arbitrarily complicated, the behavior of variables humans care about can't be arbitrarily complicated.
2. Easiness of optimization/comprehension can be captured by a few relatively simple mathematical properties (X).[2] Those properties can describe explicit and implicit predictions about the world.
3. We can split all variables (potentially relevant to human values) into partially arbitrary classes, based on how many X properties they have. The most optimizable/comprehensible variables (V1), less optimizable/comprehensible variables (V2), even less optimizable/comprehensible variables (V3), etc. We can do this without abrupt jumps in complexity or empty classes. The less optimizable/comprehensible the variables are, the more predictive power they might have (since they're less constrained).
Justification:
* If something is too hard to optimize/comprehend, people couldn't possibly optimize/comprehend it in the past, so it couldn't be a part of human values.
* New human values are always based on old human values. If people start caring about something which is hard to optimize/comprehend, it's because that "something" is similar to things which are easier to optimize/comprehend.[3] Human values are recursive, in some se |
c44f88bc-0c2a-4a7b-844d-301942cfc16d | trentmkelly/LessWrong-43k | LessWrong | Chapter 101: Precautionary Measures, Pt 2
Harry stood, panting, in the midst of a brief wasted circle amid the forest, more destruction than a first-year should have been able to reach, by himself. The Severing Charm wouldn't bring down a tree, so he'd started partially Transfiguring cross-sections through the wood. It hadn't let out what was inside him, bringing down a small circle of trees hadn't made him feel any better, all the emotions were still there but while he was destroying trees he at least wasn't thinking about how the feelings couldn't be let out.
After Harry had run out of available magic he'd started tearing off branches with his bare hands and snapping them. His hands were bleeding, though nothing that Madam Pomfrey couldn't fix in the morning. Only Dark magic left permanent scars on wizards.
There came a sound of something moving in the woods, like the hoofbeats of a horse, and Harry whirled, his wand rising once more; some part of his magic had returned while he was working with his hands. It occurred to him for the first time that he was out in the Forbidden Forest alone, and making noise.
What emerged into the moonlight was not the unicorn Harry had expected, but a creature with a lower body like that of a horse, gleaming white-brown beneath the moonlight, and the bare upper chest of a male human with long white hair. The moonlight caught the centaur's face, and Harry saw that the eyes were almost as blue as Dumbledore's, halfway to sapphire.
In one hand the centaur held a long wooden spear, with an overlarge metal blade whose edge did not gleam beneath the moonlight; a gleaming edge, Harry had once read, was the sign of a dull blade.
"So," the centaur said. His voice was low, powerful and male. "Here you are, surrounded by destruction. I can smell the unicorn's blood in the air, the blood of something innocent, slain to save oneself."
A jolt of sudden fear brought Harry into the now, and he said quickly, "It's not what it looks like."
"I know. The stars themselves proclaim your |
5d4922cd-b1eb-47fa-8b1b-1a54751b59c9 | trentmkelly/LessWrong-43k | LessWrong | Your Enemies Can Use Your Prediction Markets Against You
It is the year 2525. Earth and Mars are in a state of cold war against each other. Both planets import uranium from Ceres. Ceres' government is pro-Earth…for now. Earth needs to know how likely it is the Ceres government will be overthrown and replaced by a pro-Mars regime.
Earth creates a prediction market on the blockchain. Earth seeds the prediction market by buying shares that pay out if there is a coup. This lopsided approach prevents Earth from directly subsidizing a coup it doesn't want.
Earth must provide lots of liquidity because if Earth provides too little liquidity then Mars can cheaply influence Earth's decisions by manipulating the prediction market's prices.
Ceres' allegiance is zero-sum. Mars wants there to be a coup because an anti-Earth regime is a pro-Mars regime and vice versa. Mars can subsidize a coup by buying shares that pay out if there is not a coup. Mars' investment simultaneously acts as an investment in a coup and insurance against the coup failing to materialize.
Suppose Mars values a coup at 5 million Dogecoin. Mars buys shares that pay out 5 million Dogecoin if there is not a coup. Mars' investment subsidizes a coup because entrepreneurial space pirates can become Mars' counterparty and use their expected profits to fund the conquest of Ceres. Even more efficiently, the Ceres government could buy the shares itself and then change its foreign policy to be pro-Mars, thereby realizing a risk-free profit without any violence at all (while also discouraging space pirates).
But from Mars' perspective it no longer matters if there is a coup because if there is a coup then Mars' windfall of 5 million Dogecoin exactly neutralizes the value it gains from a coup.
The less likely a coup is, the more Mars stands to gain by buying insurance. Suppose the prior implied probability of a regime change is 0.20. Mars can buy its shares for 1 million Dogecoin, pocketing a risk-free net utility equivalent to 4 million Dogecoin. If, alternatively, the |
331791b5-92af-4cac-8214-06fc4c7bee48 | trentmkelly/LessWrong-43k | LessWrong | Models of preferences in distant situations
Note: working on a research agenda, hence the large amount of small individual posts, to have things to link to in the main documents.
For X, consider three different partial preferences:
1. If X were poor, they would prioritise consumption over saving.
2. X: If I were poor, I would prioritise saving over consumption.
3. X: If I were poor, I'd get my personal accountant to advise me on the best saving/consumption plan for poor people.
1 is what X's judgement would be in a different, distant situation. 2 is what X's current judgement about what their judgement would be in that situation. 3 is similar, but is based on a factually wrong model of what that distant situation is.
So what are we to make of these in terms of X's preferences? 3 can be discounted as factually incorrect. 2 is a correct interpretation of X's current (meta-)preferences over that distant situation, but we know that these will change if they actually reach that situation. It might be tempting to see 1 as the genuine preference, but that's tricky. It's a preference that X doesn't have, and may never have. Even if X were certain to end up poor, their preference may depend on the path that they took to get there - medical bankruptcy, alcoholism, or one dubious investment, could result in different preferences. And that's without considering the different ways the AI could put X in that situation - we don't want the AI to influence its own learning process by indirectly determining the preferences it will maximise.
So, essentially, using 1 is a problem because the preference is many steps removed and can be influenced by the AI (though that last issue may have solutions). Using 2 is a problem because the current (meta-)preferences are projected into a situation where they would be wrong. This can end up with someone railing against the preferences of their past self, even if those preferences now constrain them. This is, in essence, a partial version of the Gödel-like problem mentioned her, wh |
4da7b257-a4a7-4ea3-844b-7568589930ae | trentmkelly/LessWrong-43k | LessWrong | The Purpose of Purpose
1. I often make reference to the category I call variously purpose, care, concern, meaning, norms, or telos (I'll just use "purpose" from here on out to point to it).
1. I've not really explained what I mean by it in great detail.
2. That's a problem because it's the ground of reasoning.
1. It's the thing that bridges the gap created by the problem of the criterion.
3. It's also intimately tied up with what we value, since depending on what you point at with "value" it's either that purpose underlies values or the value assessment mechanism creates purpose.
1. I think understanding values is essential to addressing AI alignment.
4. So I'm writing this post both so I can reference it and so I can work out more precisely what I mean.
2. So what is purpose?
1. Often a thing we are trying to achieve.
1. That's not quite it, though, because that doesn't distinguish purpose from related concepts like desires and goals.
2. Also doesn't necessarily have to have an object to be achieved.
3. For example, I might have the purpose of compassion, and that purpose has an existence beyond specific instances of having compassion for a particular being.
2. Desires
1. I think what people usually mean by desire, wants, needs, etc. is something less abstract and more concrete than purpose.
2. Desires might be manifestations of purpose towards specific ends.
3. For example, perhaps I desire cake, but my purpose is to stay alive, which instrumentally requires consuming calories so metabolic processes can serve the purpose of keeping me alive.
3. Goals
1. Sometimes we use "purpose" as a synonym for "goal".
2. But I think there's a meaningfully different category we can point to with "goal" that we should differentiate from the one I'm pointing to with "purpose".
3. A goal typically can be achieved or attained and held on to.
4. There's something that can be measured and if it |
801bdb8a-1dfb-4b54-888a-b198cbb87f05 | trentmkelly/LessWrong-43k | LessWrong | Is backwards causation necessarily absurd?
In Newcomb's problem an agent picks either one-box or two-box and finds that no matter which option they picked, a predictor predicted them in advance. I've gone to a lot of effort to explain how this can be without requiring backwards causation (The Prediction Problem, Deconfusing Logical Counterfactuals), yet now I find myself wondering if backwards causation is such a bad explanation after all.
Unfortunately I'm not a physicist, so take what I say with a grain of salt, but I'll sketch out some reasons why backwards causation might not be as ridiculous as it first seems and hopefully someone else develops this in more detail.
One prominent theory of time is Eternalism in which there is no objective flow of time and terms like "past", "present" and "future" can only be used in a relative sense. An argument in favour of this is that it is often very convenient in physics to model space-time as a 4-dimensional space. If time is just another dimension, why should the future be treated differently than the past? Nothing in this model differentiates the two. If we have two blocks X and Y next to each other, we can view either X as the left one or Y as the left one depending on the direction we look at it from. Similarly, if A causes B in the traditional forwards sense, why can't we symmetrically view B as backwards causing A, where again if we viewed it another way A to B would be backwards causation and B to A would be forwards causation.
Another relativistic argument against time flowing is that simultaneity is only defined relative to a reference frame. Therefore, there is no unified present which is supposed to be what is flowing. This doesn't mean that the universe couldn't be described by a unidirectional graph. However, it does greatly undermine any trust in our naive intutions related to time.
Thirdly, entropy has often been the arrow of time with other physical laws claimed to be reversible. We are in a low-entropy world so entropy increases. However, if we |
3bae3efe-85fb-4be7-b0c7-176b29d24ebd | trentmkelly/LessWrong-43k | LessWrong | Self-Experiment: Does Working More Hours Increase My Output?
This piece is cross-posted on my blog here.
After writing up my research on limits to working, the sheer spread of possibilities amazed me. I genuinely wasn’t sure if I would be able to tell the difference between a day with four hours of deep work and one with eight hours. Surely we could narrow down the hypothesis space from that!
So, I designed a simple experiment. I would do one hour of deep work each day for two days, then two days of four hours each, and finally two days of eight hours each.
Why I chose this experiment
I optimized for quickly testing large effect sizes to narrow my uncertainty. I didn’t expect this experiment to be rigorous or sensitive to small nuances -- n=2 per condition, and I would love to hear any suggestions for how to blind me to whether I was working one or eight hours that day.
But by doing such an extreme experiment, I would definitely see an effect.
My output would have to be uncorrelated with hours worked for me not to. If I couldn’t easily tell a difference in the output between the conditions, it would indicate diminishing marginal returns massively influenced my output. Otherwise, I could get a rough guess at if and how much my output declined.
My guess was that I would get between 50% to 200% more done per hour on the one-hour day than the eight-hour day. I was less sure about the four-hour days, but I guessed my hourly output would fall in between that of the one-hour days and the eight-hour days. I would be quite surprised if I got more done per hour on the eight-hour day than the one-hour day. (Confession, I forgot to write these down before starting the experiment, so I’m writing them now after collecting data but before looking at the results.)
Methods
In order to have somewhat comparable results, I spent all twenty six hours writing and tracked how many words I wrote each hour. I scheduled coworking sessions on Focusmate.com to hold myself to a schedule. Since the Focusmate sessions are fifty minutes lon |
bb15e1e9-aba4-4877-a960-9c10c2767227 | trentmkelly/LessWrong-43k | LessWrong | The Mask Comes Off: At What Price?
The Information reports that OpenAI is close to finalizing its transformation to an ordinary Public Benefit B-Corporation. OpenAI has tossed its cap over the wall on this, giving its investors the right to demand refunds with interest if they don’t finish the transition in two years.
Microsoft very much wants this transition to happen. They would be the big winner, with an OpenAI that wants what is good for business. This also comes at a time when relations between Microsoft and OpenAI are fraying, and OpenAI is threatening to invoke its AGI clause to get out of its contract with Microsoft. That type of clause is the kind of thing they’re doubtless looking to get rid of as part of this.
The $37.5 billion question is, what stake will the non-profit get in the new OpenAI?
For various reasons that I will explore here, I think they should fight to get quite a lot. The reportedly proposed quarter of the company is on the low end even if it was purely the control premium, and the board’s share of future profits is likely the bulk of the net present value of OpenAI’s future cash flows.
But will they fight for fair value? And will they win?
TABLE OF CONTENTS
1. The Valuation in Question.
2. The Control Premium.
3. The Quest for AGI is OpenAI’s Telos and Business Model.
4. OpenAI’s Value is Mostly in the Extreme Upside.
THE VALUATION IN QUESTION
> Rocket Drew (The Information): Among the new details: After the split is finalized, OpenAI is considering creating a new board for the 501(c)3 charity that would be separate from the one that currently governs it, according to a person familiar with the plan.
>
> If we had to guess, the current board, including CEO Sam Altman, will look for board of directors for the nonprofit who will stay friendly to the interests of the OpenAI corporation.
>
> After the restructuring, the nonprofit is expected to own at least a 25% stake in the for-profit—which on paper would be worth at least $37.5 billion.
>
> …
>
> We asked th |
6a3b05d7-37aa-4747-8464-06dd874fc352 | StampyAI/alignment-research-dataset/blogs | Blogs | AI Safety Seems Hard to Measure
*Click lower right to download or find on Apple Podcasts, Spotify, Stitcher, etc.*
In previous pieces, I argued that there's a real and large risk of AI systems' [developing dangerous goals of their own](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/) and [defeating all of humanity](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/) - at least in the absence of specific efforts to prevent this from happening.
A young, growing field of **AI safety research** tries to reduce this risk, by finding ways to ensure that AI systems behave as intended (rather than forming ambitious aims of their own and deceiving and manipulating humans as needed to accomplish them).
Maybe we'll succeed in reducing the risk, and maybe we won't. **Unfortunately, I think it could be hard to know either way**. This piece is about four fairly distinct-seeming reasons that this could be the case - and that AI safety could be an unusually difficult sort of science.
This piece is aimed at a broad audience, because I think it's **important for the challenges here to be broadly understood.** I expect powerful, dangerous AI systems to have a lot of benefits (commercial, military, etc.), and to potentially *appear* safer than they are - so I think it will be hard to be as cautious about AI as we should be. I think our odds look better if many people understand, at a high level, some of the challenges in knowing whether AI systems are as safe as they appear.
First, I'll recap the basic challenge of AI safety research, and outline what I *wish* AI safety research could be like. I wish it had this basic form: "Apply a test to the AI system. If the test goes badly, try another AI development method and test that. If the test goes well, we're probably in good shape." I think car safety research mostly looks like this; I think AI *capabilities* research mostly looks like this.
Then, I’ll give four reasons that **apparent success in AI safety can be misleading.**
| |
| --- |
| **“Great news - I’ve tested this AI and it looks safe.”** Why might we still have a problem?
|
| *Problem* | *Key question* | *Explanation* |
| The **Lance Armstrong problem** | Did we get the AI to be **actually safe** or **good at hiding its dangerous actions?** | When dealing with an intelligent agent, it’s hard to tell the difference between “behaving well” and “*appearing* to behave well.”
When professional cycling was cracking down on performance-enhancing drugs, Lance Armstrong was very successful and seemed to be unusually “clean.” It later came out that he had been using drugs with an unusually sophisticated operation for concealing them.
|
| The **King Lear problem** | The AI is **(actually) well-behaved when humans are in control.** Will this transfer to **when AIs are in control?** | It's hard to know how someone will behave when they have power over you, based only on observing how they behave when they don't.
AIs might behave as intended as long as humans are in control - but at some future point, AI systems might be capable and widespread enough to have opportunities to [take control of the world entirely](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/). It's hard to know whether they'll take these opportunities, and we can't exactly run a clean test of the situation.
Like King Lear trying to decide how much power to give each of his daughters before abdicating the throne.
|
| The **lab mice problem** | **Today's "subhuman" AIs are safe.**What about **future AIs with more human-like abilities?** | Today's AI systems aren't advanced enough to exhibit the basic behaviors we want to study, such as deceiving and manipulating humans.
Like trying to study medicine in humans by experimenting only on lab mice.
|
| The **first contact problem** | Imagine that **tomorrow's "human-like" AIs are safe.** How will things go **when AIs have capabilities far beyond humans'?** | AI systems might (collectively) become vastly more capable than humans, and it's ... just really hard to have any idea what that's going to be like. As far as we know, there has never before been anything in the galaxy that's vastly more capable than humans in the relevant ways! No matter what we come up with to solve the first three problems, we can't be too confident that it'll keep working if AI advances (or just proliferates) a lot more.
Like trying to plan for first contact with extraterrestrials (this barely feels like an analogy).
|
I'll close with Ajeya Cotra's "[young businessperson](https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/#analogy-the-young-ceo)" analogy, which in some sense ties these concerns together. A future piece will discuss some reasons for hope, despite these problems.
Recap of the basic challenge
----------------------------
A [previous piece](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/) laid out the basic case for concern about AI misalignment. In brief: if extremely capable AI systems are developed using methods like the ones AI developers use today, it seems like there's a substantial risk that:
* These AIs will develop **unintended aims** (states of the world they make calculations and plans toward, as a chess-playing AI "aims" for checkmate);
* These AIs will deceive, manipulate, and overpower humans as needed to achieve those aims;
* Eventually, this could reach the point where AIs [take over the world from humans entirely](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/).
I see **AI safety research** as trying to **design AI systems that won't [aim](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/#what-it-means-for) to deceive, manipulate or defeat humans - even if and when these AI systems are extraordinarily capable** (and would be very effective at deception/manipulation/defeat if they were to aim at it).That is: AI safety research is trying to reduce the risk of the above scenario, *even if* (as I've assumed) humans rush forward with training powerful AIs to do ever-more ambitious things.
(Click to expand) More detail on why AI could make this the most important century
In [The Most Important Century](https://www.cold-takes.com/most-important-century/), I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.
[This page](https://www.cold-takes.com/most-important-century/) has a ~10-page summary of the series, as well as links to an audio version, podcasts, and the full series.
The key points I argue for in the series are:
* **The long-run future is radically unfamiliar.** Enough advances in technology could lead to a long-lasting, galaxy-wide civilization that could be a radical utopia, dystopia, or anything in between.
* **The long-run future could come much faster than we think,** due to a possible AI-driven productivity explosion.
* The relevant kind of **AI looks like it will be developed this century** - making this century the one that will initiate, and have the opportunity to shape, a future galaxy-wide civilization.
* These claims seem too "wild" to take seriously. But there are a lot of reasons to think that **we live in a wild time, and should be ready for anything.*** We, the people living in this century, have the chance to have a huge impact on huge numbers of people to come - if we can make sense of the situation enough to find helpful actions. But right now, **we aren't ready for this.**
(Click to expand) Why would AI "aim" to defeat humanity?
A [previous piece](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/) argued that if today’s AI development methods lead directly to powerful enough AI systems, disaster is likely by default (in the absence of specific countermeasures).
In brief:
* Modern AI development is essentially based on “training” via trial-and-error.
* If we move forward incautiously and ambitiously with such training, and if it gets us all the way to very powerful AI systems, then such systems will likely end up *aiming for certain states of the world* (analogously to how a chess-playing AI aims for checkmate)*.** And these states will be *other than the ones we intended*, because our trial-and-error training methods won’t be accurate. For example, when we’re confused or misinformed about some question, we’ll reward AI systems for giving the wrong answer to it - unintentionally training deceptive behavior.
* We should expect disaster if we have AI systems that are both (a) [powerful enough](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/) to defeat humans and (b) aiming for states of the world that we didn’t intend. (“Defeat” means taking control of the world and doing what’s necessary to keep us out of the way; it’s unclear to me whether we’d be literally killed or just forcibly stopped[1](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn1) from changing the world in ways that contradict AI systems’ aims.)
(Click to expand) *How* could AI defeat humanity?
In a [previous piece](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/), I argue that AI systems could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.
By defeating humanity, I mean gaining control of the world so that AIs, not humans, determine what happens in it; this could involve killing humans or simply “containing” us in some way, such that we can’t interfere with AIs’ aims.
One way this could happen is if AI became extremely advanced, to the point where it had "cognitive superpowers" beyond what humans can do. In this case, a single AI system (or set of systems working together) could imaginably:
* Do its own research on how to build a better AI system, which culminates in something that has incredible other abilities.
* Hack into human-built software across the world.
* Manipulate human psychology.
* Quickly generate vast wealth under the control of itself or any human allies.
* Come up with better plans than humans could imagine, and ensure that it doesn't try any takeover attempt that humans might be able to detect and stop.
* Develop advanced weaponry that can be built quickly and cheaply, yet is powerful enough to overpower human militaries.
However, my piece also explores what things might look like if *each AI system basically has similar capabilities to humans.* In this case:
* Humans are likely to deploy AI systems throughout the economy, such that they have large numbers and access to many resources - and the ability to make copies of themselves.
* From this starting point, AI systems with human-like (or greater) capabilities would have a number of possible ways of getting to the point where their total population could outnumber and/or out-resource humans.
* I address a number of possible objections, such as "How can AIs be dangerous without bodies?"
More: [AI could defeat all of us combined](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/)
I wish AI safety research were straightforward
----------------------------------------------
I wish AI safety research were like car safety research.[2](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn2)
While I'm sure this is an oversimplification, I think a lot of car safety research looks basically like this:
* Companies carry out test crashes with test cars. The results give a pretty good (not perfect) indication of what would happen in a real crash.
* Drivers try driving the cars in low-stakes areas without a lot of traffic. Things like steering wheel malfunctions will probably show up here; if they don't and drivers are able to drive normally in low-stakes areas, it's probably safe to drive the car in traffic.
* None of this is perfect, but the occasional problem isn't, so to speak, [the end of the world](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/). The worst case tends to be a handful of accidents, followed by a recall and some changes to the car's design validated by further testing.
Overall, **if we have problems with car safety, we'll probably be able to observe them relatively straightforwardly under relatively low-stakes circumstances.**
In important respects, many types of research and development have this basic property: we can observe how things are going during testing to get good evidence about how they'll go in the real world. Further examples include medical research,[3](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn3) chemistry research,[4](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn4) software development,[5](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn5) etc.
**Most AI research looks like this as well.** People can test out what an AI system is capable of reliably doing (e.g., translating speech to text), before integrating it into some high-stakes commercial product like Siri. This works both for ensuring that the AI system is *capable* (e.g., that it does a good job with its tasks) and that it's *safe in certain ways* (for example, if we're worried about toxic language, testing for this is relatively straightforward).
The rest of this piece will be about some of the ways in which "testing" for AI safety **fails to give us straightforward observations about whether, once AI systems are deployed in the real world, the world will actually be safe.**
While all research has to deal with *some* differences between testing and the real world, I think the challenges I'll be going through are unusual ones.
Four problems
-------------
### (1) The Lance Armstrong problem: is the AI *actually safe* or *good at hiding its dangerous actions*?
First, let's imagine that:
* We have AI systems available that can do roughly everything a human can, with some different strengths and weaknesses but no huge difference in "overall capabilities" or economic value per hour of work.
* We're observing early signs that AI systems behave in unintended, deceptive ways, such as giving wrong answers to questions we ask, or writing software that falsifies metrics instead of doing the things the metrics were supposed to measure (e.g., software meant to make a website run faster might instead falsify metrics about its loading time).
We theorize that modifying the AI training in some way[6](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn6) will make AI systems less likely to behave deceptively. We try it out, and find that, in fact, our AI systems seem to be behaving better than before - we are finding fewer incidents in which they behaved in unintended or deceptive ways.
But that's just a statement about *what we're noticing*. Which of the following just happened:
* Did we just train our AI systems to be less deceptive?
* Did we just train our AI systems to be *better at* deception, and so to make us *think* they became less deceptive?
* Did we just train our AI systems to be better at calculating when they might get caught in deception, and so to be less deceptive only when the deception would otherwise be caught?
+ This one could be useful! Especially if we're able to set up auditing systems in many real-world situations, such that we *could* expect deception to be caught a lot of the time. But it does leave open the [King Lear problem](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#The-King-Lear-problem) covered next.
(...Or some combination of the three?)
We're hoping to be able to deploy AI systems throughout the economy, so - just like human specialists - they will almost certainly have some opportunities to be deceptive without being caught. The fact that they *appear honest in our testing* is not clear comfort against this risk.
The analogy here is to competitive cyclist [Lance Armstrong](https://en.wikipedia.org/wiki/Lance_Armstrong). Armstrong won the Tour de France race 7 times in a row, while many of his competitors were caught using performance-enhancing drugs and disqualified. But more than 5 years after his last win, an investigation "concluded that Armstrong had used performance-enhancing drugs over the course of his career[5] and named him as the ringleader of 'the most sophisticated, professionalized and successful doping program that sport has ever seen'." Now the list of Tour de France winners looks like this:

A broader issue here is that **when AI systems become capable enough, AI safety research starts to look more like social sciences (studying human beings) than like natural sciences.** Social sciences are generally less rigorous and harder to get clean results from, and one factor in this is that it can be hard to study someone who's aware they're being studied.[7](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn7)
Two broad categories of research that might help with the Lance Armstrong problem:
* [Mechanistic interpretability](https://www.transformer-circuits.pub/2022/mech-interp-essay/index.html)[8](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn8) can be thought of analyzing the "digital brains" of AI systems (not just analyzing their behavior and performance.) Currently, AI systems are [black boxes](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#Box4) in the sense that they perform well on tasks, but we can't say much about *how* they are doing it; mechanistic interpretability aims to change this, which could give us the ability to "mind-read" AIs and detect deception. (There could still be a risk that AI systems are arranging their own "digital brains" in misleading ways, but this seems quite a bit harder than simply *behaving* deceptively.)
* Some researchers work on "scalable supervision" or "competitive supervision." The idea is that if we are training an AI system that might become deceptive, we set up some supervision process for it that we expect to reliably catch any attempts at deception. This could be because the supervision process itself uses AI systems with more resources than the one being supervised, or because it uses a system of randomized audits where extra effort is put into catching deception.
(Click to expand) Why are AI systems "black boxes" that we can't understand the inner workings of?
I explain this briefly in an [old Cold Takes post](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#making-pasta); it's explained in more detail in more technical pieces by [Ajeya Cotra](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#_HFDT_scales_far__assumption__Alex_is_trained_to_achieve_excellent_performance_on_a_wide_range_of_difficult_tasks) (section I linked to) and [Richard Ngo](https://drive.google.com/file/d/1TsB7WmTG2UzBtOs349lBqY5dEBaxZTzG/view) (section 2).
What I mean by “black-box trial-and-error” is explained briefly in an [old Cold Takes post](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#making-pasta), and in more detail in more technical pieces by [Ajeya Cotra](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#_HFDT_scales_far__assumption__Alex_is_trained_to_achieve_excellent_performance_on_a_wide_range_of_difficult_tasks) (section I linked to) and [Richard Ngo](https://drive.google.com/file/d/1TsB7WmTG2UzBtOs349lBqY5dEBaxZTzG/view) (section 2). Here’s a quick, oversimplified characterization.
Today, the most common way of building an AI system is by using an "artificial neural network" (ANN), which you might think of sort of like a "digital brain" that starts in an empty (or random) state: it hasn't yet been wired to do specific things. A process something like this is followed:
* The AI system is given some sort of task.
* The AI system tries something, initially something pretty random.
* The AI system gets information about how well its choice performed, and/or what would’ve gotten a better result. Based on this, it “learns” by tweaking the wiring of the ANN (“digital brain”) - literally by strengthening or weakening the connections between some “artificial neurons” and others. The tweaks cause the ANN to form a stronger association between the choice it made and the result it got.
* After enough tries, the AI system becomes good at the task (it was initially terrible).
* But nobody really knows anything about *how or why* it’s good at the task now. The development work has gone into building a flexible architecture for it to learn well from trial-and-error, and into “training” it by doing all of the trial and error. We mostly can’t “look inside the AI system to see how it’s thinking.”
* For example, if we want to know why a chess-playing AI such as AlphaZero made some particular chess move, we can't look inside its code to find ideas like "Control the center of the board" or "Try not to lose my queen." Most of what we see is just a vast set of numbers, denoting the strengths of connections between different artificial neurons. As with a human brain, we can mostly only guess at what the different parts of the "digital brain" are doing.
### (2) The King Lear problem: how do you test what will happen when it's no longer a test?
The Shakespeare play [King Lear](https://en.wikipedia.org/wiki/King_Lear) opens with the King (Lear) stepping down from the throne, and immediately learning that he has left his kingdom to the wrong two daughters. Loving and obsequious while he was deciding on their fate,[9](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn9) they reveal their contempt for him as soon as he's out of power and they're in it.
If we're building AI systems that can reason like humans, dynamics like this become a potential issue.
I [previously](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/#existential-risks-to-humanity) noted that an AI with *any* ambitious [aim](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/#what-it-means-for) - or just an AI that wants to avoid being shut down or modified - might calculate that the best way to do this is by behaving helpfully and safely in all "tests" humans can devise. But once there is a real-world opportunity to disempower humans for good, that same aim [could cause the AI to disempower humans.](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/#existential-risks-to-humanity)
In other words:
* (A) When we're developing and testing AI systems, we have the power to decide which systems will be modified or shut down and which will be deployed into the real world. (Like King Lear deciding who will inherit his kingdom.)
* (B) But at some later point, these systems could be operating in the economy, in high numbers with a lot of autonomy. (This possibility is spelled out/visualized a bit more [here](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/#how-this-could-work-if-humans-create-a-huge-population-of-ais) and [here](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#As_humans__control_fades__Alex_would_be_motivated_to_take_over).) At that point, they may have opportunities to [defeat all of humanity](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/) such that we never make decisions about them again. (Like King Lear's daughters after they've taken control.)
(Click to expand) How could AI defeat humanity?
In a [previous piece](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/), I argue that AI systems could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.
By defeating humanity, I mean gaining control of the world so that AIs, not humans, determine what happens in it; this could involve killing humans or simply “containing” us in some way, such that we can’t interfere with AIs’ aims.
One way this could happen is if AI became extremely advanced, to the point where it had "cognitive superpowers" beyond what humans can do. In this case, a single AI system (or set of systems working together) could imaginably:
* Do its own research on how to build a better AI system, which culminates in something that has incredible other abilities.
* Hack into human-built software across the world.
* Manipulate human psychology.
* Quickly generate vast wealth under the control of itself or any human allies.
* Come up with better plans than humans could imagine, and ensure that it doesn't try any takeover attempt that humans might be able to detect and stop.
* Develop advanced weaponry that can be built quickly and cheaply, yet is powerful enough to overpower human militaries.
However, my piece also explores what things might look like if *each AI system basically has similar capabilities to humans.* In this case:
* Humans are likely to deploy AI systems throughout the economy, such that they have large numbers and access to many resources - and the ability to make copies of themselves.
* From this starting point, AI systems with human-like (or greater) capabilities would have a number of possible ways of getting to the point where their total population could outnumber and/or out-resource humans.
* I address a number of possible objections, such as "How can AIs be dangerous without bodies?"
More: [AI could defeat all of us combined](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/)
**If AI systems can detect the difference between (A) and (B) above, then their behavior might systematically change from one to the other - and there's no clear way to *test* their behavior in (B).**
(Click to expand) The Volkswagen emissions scandal
In 2015, the US Environmental Protection Agency "found that Volkswagen had intentionally programmed ... engines to activate their emissions controls only during laboratory emissions testing, which caused the vehicles' [nitrogen oxide] output to meet US standards during regulatory testing. However, the vehicles emitted up to 40 times more [nitrogen oxide] in real-world driving." (From [Wikipedia](https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal))
This is an example of the "King Lear problem": there was a deliberate distinction made where the engines were behaving differently during tests and during real-world use. I don't know much about this incident, but I'm guessing that regulators were able to detect the issue because of (a) [independent tests](https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal#European_discrepancies,_2014) under road conditions; (b) being able to understand the "[defeat device](https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal#EPA_Notice_of_Violation,_2015)" that the Volkswagen cars used to create the misleading readings.
Neither of these would necessarily apply to the King Lear problem as described here: (a) you can test a car under road conditions to see its emissions, but you can't (or at least, shouldn't) test how an AI behaves when it has an opportunity to disempower humans entirely; (b) AI systems may be [black boxes](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#Box4) such that it's hard to understand what's going on inside them.
In general, modern machine learning researchers consider it challenging to handle what's called "distributional shift": systematic differences between situations AIs were [trained](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#making-pasta) on and situations they're now in. To me, the King Lear problem looks like **arguably the most inconvenient possible distributional shift:** AI systems risk behaving in unexpected ways *just as* (and in fact, because) they're now able to defeat humanity, rather than being in a controlled test environment.
Some lines of research that might help here:
* If we could solve the [Lance Armstrong problem](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#The-Lance-Armstrong-Problem) robustly enough - such that we could be confident AIs were never behaving deceptively - we could simply prompt AIs to answer questions like "Would AI system X disempower humans given an opportunity to do so?" The [digital-brain-based](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#DigitalNeuroscience) techniques noted above could imaginably get us here.
* There might be ways of specifically trying to target the *worst-case* behavior of AI systems, so that they are nearly guaranteed not to behave in certain ways *regardless of their situation*. This could look something roughly like "simulating cases where an AI system has an opportunity to disempower humans, and giving it negative reinforcement for choosing to do so." More on this sort of approach, along with some preliminary ongoing work, [here](https://www.lesswrong.com/posts/pXLqpguHJzxSjDdx7/why-i-m-excited-about-redwood-research-s-current-project).
### (3) The lab mice problem: the AI systems we'd like to study don't exist today
Above, I said: "when AI systems become capable enough, AI safety research starts to look more like social sciences (studying human beings) than like natural sciences." But today, AI systems *aren't* capable enough, which makes it especially hard to have a meaningful test bed and make meaningful progress.
Specifically, we don't have much in the way of AI systems that seem to *deceive and manipulate* their supervisors,[10](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn10) the way I worry that [they might when they become capable enough](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/).
In fact, it's not 100% clear that AI systems could learn to deceive and manipulate supervisors even if we deliberately tried to train them to do it. This makes it hard to even get started on things like discouraging and detecting deceptive behavior.
I think AI safety research is a bit unusual in this respect: most fields of research aren't explicitly about "solving problems that don't exist yet." (Though a lot of research *ends up* useful for more important problems than the original ones it's studying.) As a result, doing AI safety research today is a bit like **trying to study medicine in humans by experimenting only on lab mice** (no human subjects available).
This does *not* mean there's no productive AI safety research to be done! (See the previous sections.) It just means that the research being done today is somewhat analogous to research on lab mice: informative and important up to a point, but only up to a point.
How bad is this problem? I mean, I do think it's a temporary one: by the time we're facing the [problems I worry about](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/), we'll be able to study them more directly. The concern is that [things could be moving very quickly by that point](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#explosive-scientific-and-technological-advancement): by the time we have AIs with human-ish capabilities, companies might be furiously making copies of those AIs and using them for all kinds of things (including both AI safety research and further research on making AI systems faster, cheaper and more capable).
So I do worry about the lab mice problem. And I'd be excited to see more effort on making "better model organisms": AI systems that show early versions of the properties we'd most like to study, such as deceiving their supervisors. (I even think it would be worth training AIs specifically to do this;[11](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn11) if such behaviors are going to emerge eventually, I think it's best for them to emerge early while there's relatively little risk of AIs' actually [defeating humanity](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/).)
### (4) The "first contact" problem: how do we prepare for a world where AIs have capabilities vastly beyond those of humans?
All of this piece so far has been about trying to make safe "human-like" AI systems.
What about AI systems with capabilities *far* beyond humans - what Nick Bostrom calls [superintelligent](https://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom-ebook/dp/B00LOOCGB2/) AI systems?
Maybe at some point, AI systems will be able to do things like:
* Coordinate with each other incredibly well, such that it's hopeless to use one AI to help supervise another.
* Perfectly understand human thinking and behavior, and know exactly what words to say to make us do what they want - so just letting an AI send emails or write tweets gives it vast power over the world.
* Manipulate their own "digital brains," so that our [attempts to "read their minds"](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#DigitalNeuroscience) backfire and mislead us.
* Reason about the world (that is, [make plans to accomplish their aims](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/#what-it-means-for)) in completely different ways from humans, with concepts like "glooble"[12](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn12) that are incredibly useful ways of thinking about the world but that humans couldn't understand with centuries of effort.
At this point, whatever methods we've developed for making human-like AI systems safe, honest, and restricted could fail - and silently, as such AI systems could go from "behaving in honest and helpful ways" to "appearing honest and helpful, while setting up opportunities to [defeat humanity](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/)."
Some people think this sort of concern about "superintelligent" systems is ridiculous; some[13](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn13) seem to consider it extremely likely. I'm not personally sympathetic to having high confidence either way.
But additionally, a world with huge numbers of human-like AI systems could be strange and foreign and fast-moving enough to have a lot of this quality.
Trying to prepare for futures like these could be like trying to **prepare for first contact with extaterrestrials** - it's hard to have any idea what kinds of challenges we might be dealing with, and the challenges might arise quickly enough that we have little time to learn and adapt.
The young businessperson
------------------------
For one more analogy, I'll return to the one used by Ajeya Cotra [here](https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/#analogy-the-young-ceo):
> Imagine you are an eight-year-old whose parents left you a $1 trillion company and no trusted adult to serve as your guide to the world. You must hire a smart adult to run your company as CEO, handle your life the way that a parent would (e.g. decide your school, where you’ll live, when you need to go to the dentist), and administer your vast wealth (e.g. decide where you’ll invest your money).
>
>
>
>
>
> You have to hire these grownups based on a work trial or interview you come up with -- you don't get to see any resumes, don't get to do reference checks, etc. Because you're so rich, tons of people apply for all sorts of reasons. ([More](https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/#analogy-the-young-ceo))
>
>
If your applicants are a mix of "saints" (people who genuinely want to help), "sycophants" (people who just want to make you happy in the short run, even when this is to your long-term detriment) and "schemers" (people who want to siphon off your wealth and power for themselves), how do you - an eight-year-old - tell the difference?
This analogy combines most of the worries above.
* The young businessperson has trouble knowing whether candidates are truthful in interviews, and trouble knowing whether any work trial *actually* went well or just *seemed* to go well due to deliberate deception. (The [Lance Armstrong problem](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#The-Lance-Armstrong-Problem).)
* Job candidates could have bad intentions that don't show up until they're in power (the [King Lear Problem)](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#The-King-Lear-problem).
* If the young businessperson were trying to prepare for this situation before actually being in charge of the company, they could have a lot of trouble simulating it (the [lab mice problem)](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#The-Lab-mice-problem).
* And it's generally just hard for an eight-year-old to have much grasp *at all* on the world of adults - to even think about all the things they should be thinking about (the [first contact problem](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#The-first-contact-problem)).
Seems like a tough situation.
[Previously](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/), I talked about the dangers of AI *if* AI developers don't take specific countermeasures. This piece has tried to give a sense of why, even if they *are* trying to take countermeasures, doing so could be hard. The next piece will talk about some ways we might succeed anyway.
---
Footnotes
---------
1. Or persuaded (in a “mind hacking” sense) or whatever. [↩](#fnref1)- Research? Testing. Whatever. [↩](#fnref2)- Drugs can be tested in vitro, then in animals, then in humans. At each stage, we can make relatively straightforward observations about whether the drugs are working, and these are reasonably predictive of how they'll do at the next stage. [↩](#fnref3)- You can generally see how different compounds interact in a controlled environment, before rolling out any sort of large-scale processes or products, and the former will tell you most of what you need to know about the latter. [↩](#fnref4)- New software can be tested by a small number of users before being rolled out to a large number, and the initial tests will probably find most (not all) of the bugs and hiccups. [↩](#fnref5)- Such as:
* Being more careful to avoid [wrong answers that can incentivize deception](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/#deceiving-and-manipulating)* Conducting randomized "audits" where we try extra hard to figure out the right answer to a question, and give an AI extra negative reinforcement if it gives an answer that we *would have* believed if not for the audit (this is "extra negative reinforcement for wrong answers that superficially look right")
* Using methods along the lines of ["AI safety via debate"](https://openai.com/blog/debate/) [↩](#fnref6)- Though there are other reasons social sciences are especially hard, such as the fact that there are often big limits to what kinds of experiments are ethical, and the fact that it's often [hard to make clean comparisons between differing populations](https://www.cold-takes.com/how-digital-people-could-change-the-world/#social-science). [↩](#fnref7)- This paper is from Anthropic, a company that my wife serves as President of. [↩](#fnref8)- Like, he actually asks them to talk about their love for him just before he decides on what share of the realm they'll get. Smh [↩](#fnref9)- [This paper](https://arxiv.org/pdf/2109.07958.pdf) is a potential example, but its results [seem pretty brittle](https://www.cold-takes.com/ai-alignment-research-links/#helpful-honest-harmless). [↩](#fnref10)- E.g., I think it would be interesting to train AI [coding systems](https://github.com/features/copilot) to write [underhanded C](http://www.underhanded-c.org/): code that looks benign to a human inspector, but does unexpected things when run. They could be given negative reinforcement when humans can correctly identify that the code will do unintended things, and positive reinforcement when the code achieves the particular things that humans are attempting to stop. This would be challenging with today's AI systems, but not necessarily impossible. [↩](#fnref11)- This is a concept that only I understand. [↩](#fnref12)- E.g., see the discussion of the "hard left turn" [here](https://www.alignmentforum.org/posts/3pinFH3jerMzAvmza/on-how-various-plans-miss-the-hard-bits-of-the-alignment) by Nate Soares, head of [MIRI](https://intelligence.org/). My impression is that others at MIRI, including Eliezer Yudkowsky, have a similar picture. [↩](#fnref13) |
d5db790e-4259-4a8b-86b1-adb0f924e9ef | StampyAI/alignment-research-dataset/arxiv | Arxiv | Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising
1. Introduction
----------------

Figure 1. There are five conversion steps with sequential dependence from left to right, and five different conversion stages of audiences from top to bottom. The lower the stage, the more efficient the conversion. In our business, we usually hope audiences to complete the last two stages.
Customer acquisition management can be considered the connection between advertising and customer relationship management to acquire new customers111https://en.wikipedia.org/wiki/Customer\_acquisition\_management.
With the explosive growth of e-commerce, continuous and effective customer acquisition has become one of the biggest challenges for real-world large-scale online applications.
In this paper, we focus on the customer acquisition task with sequential dependence among audience multi-step conversions.
Typically, in the credit card business, the audience multi-step conversion process usually needs to go through impression𝑖𝑚𝑝𝑟𝑒𝑠𝑠𝑖𝑜𝑛impressionitalic\_i italic\_m italic\_p italic\_r italic\_e italic\_s italic\_s italic\_i italic\_o italic\_n →→\rightarrow→ click𝑐𝑙𝑖𝑐𝑘clickitalic\_c italic\_l italic\_i italic\_c italic\_k →→\rightarrow→ application𝑎𝑝𝑝𝑙𝑖𝑐𝑎𝑡𝑖𝑜𝑛applicationitalic\_a italic\_p italic\_p italic\_l italic\_i italic\_c italic\_a italic\_t italic\_i italic\_o italic\_n →→\rightarrow→ approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l →→\rightarrow→ activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n steps. These steps are defined as follows:
- impression𝑖𝑚𝑝𝑟𝑒𝑠𝑠𝑖𝑜𝑛impressionitalic\_i italic\_m italic\_p italic\_r italic\_e italic\_s italic\_s italic\_i italic\_o italic\_n:
In our business, the impression𝑖𝑚𝑝𝑟𝑒𝑠𝑠𝑖𝑜𝑛impressionitalic\_i italic\_m italic\_p italic\_r italic\_e italic\_s italic\_s italic\_i italic\_o italic\_n
means that the advertising banner is shown to the audience selected according to several ranking metrics, e.g., the Click-Through Rate (CTR).
- click𝑐𝑙𝑖𝑐𝑘clickitalic\_c italic\_l italic\_i italic\_c italic\_k: The click𝑐𝑙𝑖𝑐𝑘clickitalic\_c italic\_l italic\_i italic\_c italic\_k means that the shown banner is clicked by the audience, and redirected to the application page.
- application𝑎𝑝𝑝𝑙𝑖𝑐𝑎𝑡𝑖𝑜𝑛applicationitalic\_a italic\_p italic\_p italic\_l italic\_i italic\_c italic\_a italic\_t italic\_i italic\_o italic\_n: The application𝑎𝑝𝑝𝑙𝑖𝑐𝑎𝑡𝑖𝑜𝑛applicationitalic\_a italic\_p italic\_p italic\_l italic\_i italic\_c italic\_a italic\_t italic\_i italic\_o italic\_n means that the audience has filled in the application form and click the application button for a credit card.
- approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l: The approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l means that the credit of the audience has been approved. In our system, this is also a real-time step.
- activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n: The activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n is delayed feedback, and it means that the audience has activated the credit card within a period of time after approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l. Usually, we consider whether the audience has activated the credit card within 14 days (i.e., activation in T+14).
The activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n feedback label is usually difficult to obtain due to the time-consuming of card mailing and the delayed feedback of audiences, so the class imbalance is more serious.
These conversion steps have sequential dependence, which means that only the former step occurs, the latter step may occur.
Based on this constraint, there are five different conversion stages of audiences as shown in Figure [1](#S1.F1 "Figure 1 ‣ 1. Introduction ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising"). Everything else is illegal.
In industry and academia, multi-task learning is a typical solution to improve the end-to-end conversion in the audience multi-step conversion task.
Recently, considerable efforts have been done to model task relationships in multi-task learning.
One idea is to control how Expert modules are shared across all tasks at the bottom of the multi-task model (Ma
et al., [2018b](#bib.bib15); Tang
et al., [2020](#bib.bib20); Qin
et al., [2020](#bib.bib17)), and Tower modules at the top handle each task separately as shown in Figure [2](#S1.F2 "Figure 2 ‣ 1. Introduction ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising") (a).
However, the Expert-Bottom pattern can only transfer shallow representations among tasks, but in the network close to the output layer, it often contains richer and more useful representations (Zeiler and Fergus, [2014](#bib.bib30); Li
et al., [2016](#bib.bib12)), which have been proved to bring more gains (Tzeng et al., [2014](#bib.bib21)).
Besides, the Expert-Bottom pattern is not specially designed for tasks with sequential dependence, so these models with the Expert-Bottom pattern can not model the sequential dependence explicitly.
Another idea is to transfer probabilities in the output layers of different tasks (Ma
et al., [2018a](#bib.bib16); Gao et al., [2019a](#bib.bib3), [b](#bib.bib4); Wen
et al., [2020](#bib.bib24)) as shown in Figure [2](#S1.F2 "Figure 2 ‣ 1. Introduction ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising") (b).
Similarly, the Probability-Transfer pattern can only transfer simple probability information via the scalar product, but richer and more useful representations are ignored in the vector space, which results in a great loss of gains.
If any one of the probabilities is not predicted accurately, multiple tasks will be affected.
Besides, the Probability-Transfer pattern is designed for solving the non-end-to-end post-click conversion rate via training on the entire space to relieve the sample selection bias problem, and these models with Probability-Transfer pattern can not model the sequential dependence well among audience multi-step conversions.
Therefore, a long-standing challenge is how to model the sequential dependence among audience multi-step conversions for improving the end-to-end conversion.
Along this line, we propose an Adaptive Information Transfer Multi-task (AITM) framework to model the sequential dependence among audience multi-step conversions. Specifically, due to the sequential dependence among audience multi-step conversions, the former conversion step (task) can bring useful information to the latter step (task). For example, if an audience has clicked the banner, then he/she may apply for the credit card. Conversely, if an audience doesn’t click the banner, he/she certainly will not apply for the credit card.
Based on this, different conversion stages of different audiences need to transfer different information from the former step to the latter step, and as mentioned above, the vector space close to the output layer often contains richer and more useful information.
Therefore, we let the model adaptively transfer information in the vector space close to the output layer via the Adaptive Information Transfer (AIT) module.
Another advantage of the AIT module is that it can alleviate the class imbalance of the latter task with the help of the information from the former task, which has richer positive samples.
Also, because of the sequential dependence, the former task should have a higher end-to-end conversion probability than the latter for the same audience.
Therefore, we design a Behavioral Expectation Calibrator in the loss function.
On the one hand, it makes the model results more satisfy the real production constraint, on the other hand, it provides more accurate end-to-end conversion identification.

Figure 2. (a) Expert-Bottom pattern. (b) Probability-Transfer pattern. The y^osubscript^𝑦𝑜\hat{y}\_{o}over^ start\_ARG italic\_y end\_ARG start\_POSTSUBSCRIPT italic\_o end\_POSTSUBSCRIPT is the non-end-to-end post-click conversion rate and the multi-task loss function only acts on the y^t−1subscript^𝑦𝑡1\hat{y}\_{t-1}over^ start\_ARG italic\_y end\_ARG start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT and y^tsubscript^𝑦𝑡\hat{y}\_{t}over^ start\_ARG italic\_y end\_ARG start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT in the original paper. (c) The proposed Adaptive Information Transfer Multi-task (AITM) framework. For simplicity, only two adjacent tasks are shown in the figure.
To summarize, the contributions of this paper are threefold:
* •
The proposed AIT module can adaptively learn what and how much information to transfer for different conversion stages of different audiences for improving the performance of multi-task learning with sequential dependence.
* •
Combining the Behavioral Expectation Calibrator in the loss function, offline experimental results on both industrial and public real-life datasets clearly demonstrate that the proposed framework achieves significantly better performance compared with state-of-the-art baselines.
* •
Online experiments also demonstrate significant improvement compared with existing online models, and the source code of the proposed framework has also been released.
2. Related Work
----------------
Multi-task learning (MTL) has led to successes in many applications of machine learning, from natural language processing and speech recognition to computer vision and drug discovery (Ruder, [2017](#bib.bib18)).
In this section, we present the main multi-task learning works related to our work in two-fold:
the Expert-Bottom pattern and the Probability-Transfer pattern.
As shown in Figure [2](#S1.F2 "Figure 2 ‣ 1. Introduction ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising") (a), the main idea of the Expert-Bottom pattern is to control how Expert modules are shared across all tasks at the bottom of the multi-task model (Ma
et al., [2018b](#bib.bib15); Tang
et al., [2020](#bib.bib20); Qin
et al., [2020](#bib.bib17)), and the Tower modules at the top handle each task separately. Since complex problems may contain many sub-problems each requiring different experts (Eigen
et al., [2014](#bib.bib2)), some Mixture-of-Experts (MoE) models have been proposed one after another (Jacobs
et al., [1991](#bib.bib9); Eigen
et al., [2014](#bib.bib2); Shazeer et al., [2017](#bib.bib19)). Inspired by the idea, [Ma
et al.](#bib.bib15) introduced the MoE to the multi-task learning and proposed the Multi-gate Mixture-of-Experts (MMoE) (Ma
et al., [2018b](#bib.bib15)) model by the gating networks assembling the experts for different tasks.
[Zhao et al.](#bib.bib32) explored a variety of soft-parameter sharing techniques such as MMoE to efficiently optimize for multiple ranking objectives for Video recommendation (Zhao et al., [2019b](#bib.bib32)).
[Tang
et al.](#bib.bib20) proposed a Progressive Layered Extraction (PLE) (Tang
et al., [2020](#bib.bib20)) model to separate task-shared experts and task-specific experts explicitly.
The Mixture of Sequential Experts (MoSE) model (Qin
et al., [2020](#bib.bib17)) has also been proposed to model sequential user behaviors in multi-task learning.
However, the top Tower modules, which often contain richer and more useful information, can not help the tasks to improve each other due to there is no information exchange among them.
Another idea to model task relationships in multi-task learning is to transfer probabilities in the output layers of different tasks (Ma
et al., [2018a](#bib.bib16); Gao et al., [2019a](#bib.bib3), [b](#bib.bib4); Wen
et al., [2020](#bib.bib24)) as shown in Figure [2](#S1.F2 "Figure 2 ‣ 1. Introduction ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising") (b).
[Ma
et al.](#bib.bib16) proposed an Entire Space Multi-task Model (ESMM) (Ma
et al., [2018a](#bib.bib16)) to transfer probabilities in the output layers by post-impression click-through rate (CTR) multiplying post-click conversion rate (CVR) equals post-impression click-through&\&&conversion rate (CTCVR).
Further, more tasks are decomposed for probability transfer in ESM2𝐸𝑆superscript𝑀2ESM^{2}italic\_E italic\_S italic\_M start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT (Wen
et al., [2020](#bib.bib24)).
The Neural Multi-Task Recommendation (NMTR) (Gao et al., [2019a](#bib.bib3), [b](#bib.bib4)) has also been proposed to extend the Neural Collaborative Filtering (NCF) (He
et al., [2017](#bib.bib8)) to multi-task learning and relate the model prediction probability of each task in a cascaded manner.
However, as mentioned in Section [1](#S1 "1. Introduction ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising"), the Probability-Transfer pattern can only transfer simple probability information via the scalar product, but richer and more useful representations are ignored in the vector space, which results in a great loss.
Besides, if any one of the probabilities is not predicted accurately, multiple tasks will be affected.
Other efforts have also utilized the tensor factorization (Yang and
Hospedales, [2017](#bib.bib29)), tensor normal priors (Long
et al., [2017](#bib.bib14)), attention mechanism (Liu
et al., [2019](#bib.bib13); Zhao
et al., [2019a](#bib.bib31)), and so on to solve the multi-task learning.
Nevertheless, these above efforts are not specially designed for tasks with sequential dependence, and they can not model the sequential dependence well among audience multi-step conversions.
3. The MTL Ranking System in Meituan app
-----------------------------------------

Figure 3. The MTL Ranking System in Meituan app.
In this section, we give an overview of the MTL ranking system in Meituan app.
As shown in Figure [3](#S3.F3 "Figure 3 ‣ 3. The MTL Ranking System in Meituan app ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising"), in our credit card business, we model four tasks except for the passive impression𝑖𝑚𝑝𝑟𝑒𝑠𝑠𝑖𝑜𝑛impressionitalic\_i italic\_m italic\_p italic\_r italic\_e italic\_s italic\_s italic\_i italic\_o italic\_n step.
Among them, the approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l and activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n are the main tasks, and the click𝑐𝑙𝑖𝑐𝑘clickitalic\_c italic\_l italic\_i italic\_c italic\_k and application𝑎𝑝𝑝𝑙𝑖𝑐𝑎𝑡𝑖𝑜𝑛applicationitalic\_a italic\_p italic\_p italic\_l italic\_i italic\_c italic\_a italic\_t italic\_i italic\_o italic\_n are the auxiliary tasks.
That is because if the audience has only completed the click𝑐𝑙𝑖𝑐𝑘clickitalic\_c italic\_l italic\_i italic\_c italic\_k and application𝑎𝑝𝑝𝑙𝑖𝑐𝑎𝑡𝑖𝑜𝑛applicationitalic\_a italic\_p italic\_p italic\_l italic\_i italic\_c italic\_a italic\_t italic\_i italic\_o italic\_n steps, but the approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l step has not been completed, then it will cause a waste of resources (e.g., the computing and traffic resources).
Because different audiences have different values to different businesses, the traffic that is useless to the credit card business may be useful to other businesses.
For this kind of audience, we might as well give the traffic to other businesses that may promote the audience conversion.
Therefore, we mainly focus on the last two end-to-end conversion tasks, i.e., impression𝑖𝑚𝑝𝑟𝑒𝑠𝑠𝑖𝑜𝑛impressionitalic\_i italic\_m italic\_p italic\_r italic\_e italic\_s italic\_s italic\_i italic\_o italic\_n→→\rightarrow→ approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l and impression𝑖𝑚𝑝𝑟𝑒𝑠𝑠𝑖𝑜𝑛impressionitalic\_i italic\_m italic\_p italic\_r italic\_e italic\_s italic\_s italic\_i italic\_o italic\_n→→\rightarrow→ activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n.
Besides, because the last two tasks have fewer positive samples and the activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n is delayed feedback, the first two auxiliary tasks with more positive samples can alleviate the class imbalance problem via the Adaptive Information Transfer module.
Meituan Co-Branded Credit Cards are issued in cooperation with different banks, and different banks are in different stages of business development, so they have different requirements for the approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l and activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n.
Start-up banks often want to issue more credit cards to quickly occupy the market, while mature banks want to increase the activation rate to achieve rapid profits.
Therefore, there is a selector in our system to output different conversion objectives for different banks.
The multi-task framework can deal with different business requirements well.
Besides, because different businesses in Meituan all need the traffic to acquire customers for their own business, and the sensitivities of different audiences to different businesses are different, so we can not simply divide the traffic into different businesses.
We need a ranking mechanism to maximize the overall gain.
The multi-business ranking system ranks the different business scores according to the Equation:
| | | | |
| --- | --- | --- | --- |
| (1) | | score=weight×y^,𝑠𝑐𝑜𝑟𝑒𝑤𝑒𝑖𝑔ℎ𝑡^𝑦\displaystyle score=weight\times\hat{y},italic\_s italic\_c italic\_o italic\_r italic\_e = italic\_w italic\_e italic\_i italic\_g italic\_h italic\_t × over^ start\_ARG italic\_y end\_ARG , | |
where y^^𝑦\hat{y}over^ start\_ARG italic\_y end\_ARG is the predicted conversion probability for each business, and the weight𝑤𝑒𝑖𝑔ℎ𝑡weightitalic\_w italic\_e italic\_i italic\_g italic\_h italic\_t includes the value of the audience itself, the value of the business itself, and the value of the audience to the business.
The business banner with the highest score is shown to the audience.
4. Methodology
---------------
In this section, we first formulate the problem, then
we present the details of the proposed framework AITM as shown in Figure [2](#S1.F2 "Figure 2 ‣ 1. Introduction ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising") (c).
###
4.1. Problem Formulation
Given the input feature vector 𝒙𝒙\bm{x}bold\_italic\_x, assuming the audience needs T𝑇Titalic\_T steps to complete the final conversion after impression𝑖𝑚𝑝𝑟𝑒𝑠𝑠𝑖𝑜𝑛impressionitalic\_i italic\_m italic\_p italic\_r italic\_e italic\_s italic\_s italic\_i italic\_o italic\_n (In Figure [1](#S1.F1 "Figure 1 ‣ 1. Introduction ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising"), T=4𝑇4T=4italic\_T = 4). In each conversion step t𝑡titalic\_t, if the audience completes the conversion step, the label ytsubscript𝑦𝑡y\_{t}italic\_y start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT is 1111, otherwise it is 00. The sequential dependence means that y1≥y2≥⋯≥yTsubscript𝑦1subscript𝑦2⋯subscript𝑦𝑇y\_{1}\geq y\_{2}\geq\cdots\geq y\_{T}italic\_y start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ≥ italic\_y start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ≥ ⋯ ≥ italic\_y start\_POSTSUBSCRIPT italic\_T end\_POSTSUBSCRIPT (yt∈{0,1},t=1,2,⋯,Tformulae-sequencesubscript𝑦𝑡01𝑡12⋯𝑇y\_{t}\in\{0,1\},t=1,2,\cdots,Titalic\_y start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∈ { 0 , 1 } , italic\_t = 1 , 2 , ⋯ , italic\_T).
The multi-task framework needs to predict the end-to-end conversion probability y^tsubscript^𝑦𝑡\hat{y}\_{t}over^ start\_ARG italic\_y end\_ARG start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT of each conversion step t𝑡titalic\_t based on the input features 𝒙𝒙\bm{x}bold\_italic\_x:
| | | | |
| --- | --- | --- | --- |
| (2) | | y^t=p(y1=1,y2=1,⋯,yt=1|𝒙).subscript^𝑦𝑡𝑝formulae-sequencesubscript𝑦11formulae-sequencesubscript𝑦21⋯subscript𝑦𝑡conditional1𝒙\hat{y}\_{t}=p(y\_{1}=1,y\_{2}=1,\cdots,y\_{t}=1|\bm{x}).over^ start\_ARG italic\_y end\_ARG start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = italic\_p ( italic\_y start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT = 1 , italic\_y start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT = 1 , ⋯ , italic\_y start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = 1 | bold\_italic\_x ) . | |
###
4.2. Adaptive Information Transfer Multi-task (AITM) framework
As shown in Figure [2](#S1.F2 "Figure 2 ‣ 1. Introduction ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising") (c), given the input feature vector 𝒙𝒙\bm{x}bold\_italic\_x,
we embed each entry xisubscript𝑥𝑖x\_{i}italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT (xi∈𝒙subscript𝑥𝑖𝒙x\_{i}\in\bm{x}italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∈ bold\_italic\_x, 1≤i≤|𝒙|1𝑖𝒙1\leq i\leq|\bm{x}|1 ≤ italic\_i ≤ | bold\_italic\_x |) to a low dimension dense vector representation 𝒗i∈ℝdsubscript𝒗𝑖superscriptℝ𝑑\bm{v}\_{i}\in\mathbb{R}^{d}bold\_italic\_v start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_d end\_POSTSUPERSCRIPT, where d𝑑ditalic\_d is the dimension of embedding vectors. The output of the Shared Embedding module is the concatenation of all embedding vectors:
| | | | |
| --- | --- | --- | --- |
| (3) | | 𝒗=[𝒗1;𝒗2;⋯;𝒗|𝒙|],𝒗subscript𝒗1subscript𝒗2⋯subscript𝒗𝒙\displaystyle\bm{v}=[\bm{v}\_{1};\bm{v}\_{2};\cdots;\bm{v}\_{|\bm{x}|}],bold\_italic\_v = [ bold\_italic\_v start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ; bold\_italic\_v start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ; ⋯ ; bold\_italic\_v start\_POSTSUBSCRIPT | bold\_italic\_x | end\_POSTSUBSCRIPT ] , | |
where [⋅;⋅]⋅⋅[\cdot;\cdot][ ⋅ ; ⋅ ] denotes the concatenation of two vectors.
By sharing the same embedding vectors among all tasks, on the one hand, the framework could learn the embedding vectors with rich positive samples of the former tasks to share information and alleviate the class imbalance of the latter tasks, and reduce the model parameters on the other hand.
Given T𝑇Titalic\_T tasks, the output of the Tower of each task t(1≤t≤T)𝑡1𝑡𝑇t(1\leq t\leq T)italic\_t ( 1 ≤ italic\_t ≤ italic\_T ) is defined as:
| | | | |
| --- | --- | --- | --- |
| (4) | | 𝒒t=ft(𝒗),subscript𝒒𝑡subscript𝑓𝑡𝒗\displaystyle\bm{q}\_{t}=f\_{t}(\bm{v}),bold\_italic\_q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = italic\_f start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( bold\_italic\_v ) , | |
where the ft(⋅)subscript𝑓𝑡⋅f\_{t}(\cdot)italic\_f start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( ⋅ ) function is the Tower, 𝒒t∈ℝksubscript𝒒𝑡superscriptℝ𝑘\bm{q}\_{t}\in\mathbb{R}^{k}bold\_italic\_q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_k end\_POSTSUPERSCRIPT and k𝑘kitalic\_k is the output dimension of the Tower.
It should be mentioned that designing a different Tower is not the focus of this paper as we aim at designing an Adaptive Information Transfer module to model the sequential dependence.
In fact, our approach is a general framework, and any advanced models (e.g., NFM (He and Chua, [2017](#bib.bib7)), DeepFM (Guo
et al., [2017](#bib.bib6)), AFM (Xiao
et al., [2017](#bib.bib28)), and even the sequence models NHFM (Xi et al., [2020](#bib.bib27)), DIFM (Xi et al., [2021](#bib.bib25))) can be easily integrated into our framework to act as the Tower, making the proposed AITM general and flexible.
For two adjacent tasks t−1𝑡1t-1italic\_t - 1 and t𝑡titalic\_t, the output of the AIT module of the task t𝑡titalic\_t is computed as:
| | | | | | |
| --- | --- | --- | --- | --- | --- |
| (5) | | 𝒛tsubscript𝒛𝑡\displaystyle\bm{z}\_{t}bold\_italic\_z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | =\displaystyle== | AIT(𝒑t−1,𝒒t),𝐴𝐼𝑇subscript𝒑𝑡1subscript𝒒𝑡\displaystyle AIT(\bm{p}\_{t-1},\bm{q}\_{t}),italic\_A italic\_I italic\_T ( bold\_italic\_p start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT , bold\_italic\_q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) , | |
| (6) | | where𝒑t−1𝑤ℎ𝑒𝑟𝑒subscript𝒑𝑡1\displaystyle where~{}\bm{p}\_{t-1}italic\_w italic\_h italic\_e italic\_r italic\_e bold\_italic\_p start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT | =\displaystyle== | gt−1(𝒛t−1),subscript𝑔𝑡1subscript𝒛𝑡1\displaystyle g\_{t-1}(\bm{z}\_{t-1}),italic\_g start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ( bold\_italic\_z start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ) , | |
𝒛t−1∈ℝksubscript𝒛𝑡1superscriptℝ𝑘\bm{z}\_{t-1}\in\mathbb{R}^{k}bold\_italic\_z start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_k end\_POSTSUPERSCRIPT is the output of the AIT module of the task t−1𝑡1t-1italic\_t - 1, gt−1(⋅)subscript𝑔𝑡1⋅g\_{t-1}(\cdot)italic\_g start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ( ⋅ ) is the function to learn what information to transfer between the tasks t−1𝑡1t-1italic\_t - 1 and t𝑡titalic\_t, and 𝒑t−1∈ℝksubscript𝒑𝑡1superscriptℝ𝑘\bm{p}\_{t-1}\in\mathbb{R}^{k}bold\_italic\_p start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_k end\_POSTSUPERSCRIPT is the learned transfer information.
The AIT module is designed to adaptively allocate the weights of the transferred information 𝒑t−1subscript𝒑𝑡1\bm{p}\_{t-1}bold\_italic\_p start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT and original information 𝒒tsubscript𝒒𝑡\bm{q}\_{t}bold\_italic\_q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT:
| | | | | | |
| --- | --- | --- | --- | --- | --- |
| (7) | | 𝒛tsubscript𝒛𝑡\displaystyle\bm{z}\_{t}bold\_italic\_z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | =\displaystyle== | ∑𝒖∈{𝒑t−1,𝒒t}wuh1(𝒖),subscript𝒖subscript𝒑𝑡1subscript𝒒𝑡subscript𝑤𝑢subscriptℎ1𝒖\displaystyle\sum\_{\bm{u}\in\{\bm{p}\_{t-1},\bm{q}\_{t}\}}w\_{u}h\_{1}(\bm{u}),∑ start\_POSTSUBSCRIPT bold\_italic\_u ∈ { bold\_italic\_p start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT , bold\_italic\_q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT } end\_POSTSUBSCRIPT italic\_w start\_POSTSUBSCRIPT italic\_u end\_POSTSUBSCRIPT italic\_h start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ( bold\_italic\_u ) , | |
where wusubscript𝑤𝑢w\_{u}italic\_w start\_POSTSUBSCRIPT italic\_u end\_POSTSUBSCRIPT is the weight which is formulated as:
| | | | |
| --- | --- | --- | --- |
| (8) | | wu=exp(w^u)∑uexp(w^u),w^u=<h2(𝒖),h3(𝒖)>k,\displaystyle w\_{u}=\frac{exp(\hat{w}\_{u})}{\sum\_{u}exp(\hat{w}\_{u})},~{}~{}\hat{w}\_{u}=\frac{<h\_{2}(\bm{u}),h\_{3}(\bm{u})>}{\sqrt{k}},italic\_w start\_POSTSUBSCRIPT italic\_u end\_POSTSUBSCRIPT = divide start\_ARG italic\_e italic\_x italic\_p ( over^ start\_ARG italic\_w end\_ARG start\_POSTSUBSCRIPT italic\_u end\_POSTSUBSCRIPT ) end\_ARG start\_ARG ∑ start\_POSTSUBSCRIPT italic\_u end\_POSTSUBSCRIPT italic\_e italic\_x italic\_p ( over^ start\_ARG italic\_w end\_ARG start\_POSTSUBSCRIPT italic\_u end\_POSTSUBSCRIPT ) end\_ARG , over^ start\_ARG italic\_w end\_ARG start\_POSTSUBSCRIPT italic\_u end\_POSTSUBSCRIPT = divide start\_ARG < italic\_h start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ( bold\_italic\_u ) , italic\_h start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT ( bold\_italic\_u ) > end\_ARG start\_ARG square-root start\_ARG italic\_k end\_ARG end\_ARG , | |
where <⋅,⋅><\cdot~{},\cdot>< ⋅ , ⋅ > represents the dot product.
h1(⋅)subscriptℎ1⋅h\_{1}(\cdot)italic\_h start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ( ⋅ ), h2(⋅)subscriptℎ2⋅h\_{2}(\cdot)italic\_h start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ( ⋅ ), and h3(⋅)subscriptℎ3⋅h\_{3}(\cdot)italic\_h start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT ( ⋅ ) represent the feed-forward networks to project the input information to one new vector representation. There are lots of ways to design h1(⋅)subscriptℎ1⋅h\_{1}(\cdot)italic\_h start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ( ⋅ ), h2(⋅)subscriptℎ2⋅h\_{2}(\cdot)italic\_h start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ( ⋅ ), and h3(⋅)subscriptℎ3⋅h\_{3}(\cdot)italic\_h start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT ( ⋅ ). In this paper, we use a simple single-layer MLP (Multi-Layer Perceptron) (Gardner and
Dorling, [1998](#bib.bib5)) as h1(⋅)subscriptℎ1⋅h\_{1}(\cdot)italic\_h start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ( ⋅ ), h2(⋅)subscriptℎ2⋅h\_{2}(\cdot)italic\_h start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ( ⋅ ), and h3(⋅)subscriptℎ3⋅h\_{3}(\cdot)italic\_h start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT ( ⋅ ).
The idea of this kind of attention mechanism is similar to self-attention (Vaswani et al., [2017](#bib.bib23)), the h1(⋅)subscriptℎ1⋅h\_{1}(\cdot)italic\_h start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ( ⋅ ), h2(⋅)subscriptℎ2⋅h\_{2}(\cdot)italic\_h start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ( ⋅ ) and, h3(⋅)subscriptℎ3⋅h\_{3}(\cdot)italic\_h start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT ( ⋅ ) first learn Value, Query, Key from the same input 𝒖𝒖\bm{u}bold\_italic\_u, respectively. Then, we compute the similarity between Query (h2(⋅)subscriptℎ2⋅h\_{2}(\cdot)italic\_h start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ( ⋅ )) and Key (h3(⋅)subscriptℎ3⋅h\_{3}(\cdot)italic\_h start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT ( ⋅ )) according to Equation ([8](#S4.E8 "8 ‣ 4.2. Adaptive Information Transfer Multi-task (AITM) framework ‣ 4. Methodology ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising")). Finally, the Value (h1(⋅)subscriptℎ1⋅h\_{1}(\cdot)italic\_h start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ( ⋅ )) is weighted via the similarity according to Equation [7](#S4.E7 "7 ‣ 4.2. Adaptive Information Transfer Multi-task (AITM) framework ‣ 4. Methodology ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising").
This kind of attention mechanism has been proved to be more effective in the previous works (Xi et al., [2020](#bib.bib27); Zhu
et al., [2020](#bib.bib33); Xi et al., [2021](#bib.bib25)).
For the first task without the former task, the out of the AIT module is initialized to:
| | | | |
| --- | --- | --- | --- |
| (9) | | 𝒛1=𝒒1.subscript𝒛1subscript𝒒1\displaystyle\bm{z}\_{1}=\bm{q}\_{1}.bold\_italic\_z start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT = bold\_italic\_q start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT . | |
The prediction probability of each task t𝑡titalic\_t is:
| | | | |
| --- | --- | --- | --- |
| (10) | | y^t=sigmoid(MLP(𝒛t)),subscript^𝑦𝑡𝑠𝑖𝑔𝑚𝑜𝑖𝑑𝑀𝐿𝑃subscript𝒛𝑡\displaystyle\hat{y}\_{t}=sigmoid(MLP(\bm{z}\_{t})),over^ start\_ARG italic\_y end\_ARG start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = italic\_s italic\_i italic\_g italic\_m italic\_o italic\_i italic\_d ( italic\_M italic\_L italic\_P ( bold\_italic\_z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ) , | |
where the MLP is used to project the 𝒛tsubscript𝒛𝑡\bm{z}\_{t}bold\_italic\_z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT to the output space.
###
4.3. Behavioral Expectation Calibrator and Joint Opratortimization for MTL
For classification tasks, we need to minimize the cross-entropy loss of all tasks:
| | | | |
| --- | --- | --- | --- |
| (11) | | ℒce(θ)=−1N∑t=1T∑(𝒙,yt)∈𝒟N((ytlogy^t+(1−yt)log(1−y^t)),\small{\mathcal{L}\_{ce}(\theta)=-\frac{1}{N}\sum\_{t=1}^{T}\sum^{N}\_{(\bm{x},y\_{t})\in\mathcal{D}}(\left(y\_{t}\log\hat{y}\_{t}+(1-y\_{t})\log(1-\hat{y}\_{t})\right)},caligraphic\_L start\_POSTSUBSCRIPT italic\_c italic\_e end\_POSTSUBSCRIPT ( italic\_θ ) = - divide start\_ARG 1 end\_ARG start\_ARG italic\_N end\_ARG ∑ start\_POSTSUBSCRIPT italic\_t = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT ∑ start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ( bold\_italic\_x , italic\_y start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ∈ caligraphic\_D end\_POSTSUBSCRIPT ( ( italic\_y start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT roman\_log over^ start\_ARG italic\_y end\_ARG start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + ( 1 - italic\_y start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) roman\_log ( 1 - over^ start\_ARG italic\_y end\_ARG start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ) , | |
where N𝑁Nitalic\_N is the number of samples in the entire sample space 𝒟𝒟\mathcal{D}caligraphic\_D, ytsubscript𝑦𝑡y\_{t}italic\_y start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT is the label of the t𝑡titalic\_t-th task and θ𝜃\thetaitalic\_θ is the parameter set in the MTL framework.
Besides, because of the sequential dependence, the former task should have a higher end-to-end conversion probability than the latter for the same audience, i.e., y^t−1≥y^tsubscript^𝑦𝑡1subscript^𝑦𝑡\hat{y}\_{t-1}\geq\hat{y}\_{t}over^ start\_ARG italic\_y end\_ARG start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ≥ over^ start\_ARG italic\_y end\_ARG start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT.
We design a Behavioral Expectation Calibrator to minimize the following objective. On the one hand, it makes the model results more satisfy the real production constraint, on the other hand, it provides more accurate end-to-end conversion identification:
| | | | |
| --- | --- | --- | --- |
| (12) | | ℒlc(θ)=1N∑t=2T∑𝒙∈𝒟Nmax(y^t−y^t−1,0).subscriptℒ𝑙𝑐𝜃1𝑁superscriptsubscript𝑡2𝑇subscriptsuperscript𝑁𝒙𝒟𝑚𝑎𝑥subscript^𝑦𝑡subscript^𝑦𝑡10\mathcal{L}\_{lc}(\theta)=\frac{1}{N}\sum\_{t=2}^{T}\sum^{N}\_{\bm{x}\in\mathcal{D}}max(\hat{y}\_{t}-\hat{y}\_{t-1},0).caligraphic\_L start\_POSTSUBSCRIPT italic\_l italic\_c end\_POSTSUBSCRIPT ( italic\_θ ) = divide start\_ARG 1 end\_ARG start\_ARG italic\_N end\_ARG ∑ start\_POSTSUBSCRIPT italic\_t = 2 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT ∑ start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT bold\_italic\_x ∈ caligraphic\_D end\_POSTSUBSCRIPT italic\_m italic\_a italic\_x ( over^ start\_ARG italic\_y end\_ARG start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT - over^ start\_ARG italic\_y end\_ARG start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT , 0 ) . | |
If y^t>y^t−1subscript^𝑦𝑡subscript^𝑦𝑡1\hat{y}\_{t}>\hat{y}\_{t-1}over^ start\_ARG italic\_y end\_ARG start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT > over^ start\_ARG italic\_y end\_ARG start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT, the ℒlc(θ)subscriptℒ𝑙𝑐𝜃\mathcal{L}\_{lc}(\theta)caligraphic\_L start\_POSTSUBSCRIPT italic\_l italic\_c end\_POSTSUBSCRIPT ( italic\_θ ) will output a positive penalty term, otherwise output 0.
The final loss function ℒ(θ)ℒ𝜃\mathcal{L}(\theta)caligraphic\_L ( italic\_θ ) of the AITM combines the two components to a unified multi-task learning framework:
| | | | |
| --- | --- | --- | --- |
| (13) | | ℒ(θ)=ℒce(θ)+αℒlc(θ),ℒ𝜃subscriptℒ𝑐𝑒𝜃𝛼subscriptℒ𝑙𝑐𝜃\mathcal{L}(\theta)=\mathcal{L}\_{ce}(\theta)+\alpha\mathcal{L}\_{lc}(\theta),caligraphic\_L ( italic\_θ ) = caligraphic\_L start\_POSTSUBSCRIPT italic\_c italic\_e end\_POSTSUBSCRIPT ( italic\_θ ) + italic\_α caligraphic\_L start\_POSTSUBSCRIPT italic\_l italic\_c end\_POSTSUBSCRIPT ( italic\_θ ) , | |
where α𝛼\alphaitalic\_α controls the strength of the Behavioral Expectation Calibrator component.
The framework is implemented using TensorFlow222https://www.tensorflow.org/ and trained through stochastic gradient descent over shuffled mini-batches with the Adam (Kingma and Ba, [2015](#bib.bib11)) update rule.
5. Experiments
---------------
Table 1. Summary statistics for the datasets. “%Positive” represents the percentage of positive samples in the train set over each task.
| Dataset | #Task | #Train | #Validation | #Test | %Positive(%) |
| --- | --- | --- | --- | --- | --- |
| Industrial | 4 | 20M | 3M | 26M | 23.29/1.84/1.30/1.00 |
| Public | 2 | 38M | 4.2M | 43M | 3.89/0.02 |
In this section, we perform experiments to evaluate the proposed framework against various baselines on both industrial and public real-world datasets.
We first introduce the datasets, evaluation protocol, and baseline methods. Finally, we present our experimental results and analysis.
Table 2. The AUC performance (mean±plus-or-minus\pm±std) on the industrial and public datasets. The Gain means the mean AUC improvement compared with the LightGBM. Underlined results indicate the best baselines over each task. “\*” indicates that the improvement of the proposed AITM is statistically significant compared with the best baselines at p-value <<< 0.05 over paired samples t-test, and “\*\*” indicates that the p-value <<< 0.01.
| | | |
| --- | --- | --- |
| Model | Industrial dataset | Public dataset |
| approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l AUC | activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n AUC | Gain | click𝑐𝑙𝑖𝑐𝑘clickitalic\_c italic\_l italic\_i italic\_c italic\_k AUC | purchase𝑝𝑢𝑟𝑐ℎ𝑎𝑠𝑒purchaseitalic\_p italic\_u italic\_r italic\_c italic\_h italic\_a italic\_s italic\_e AUC | Gain |
| LightGBM | 0.8392±plus-or-minus\pm±0.0011 | 0.8536±plus-or-minus\pm±0.0035 | - | - | 0.5837±plus-or-minus\pm±0.0005 | 0.5870±plus-or-minus\pm±0.0038 | - | - |
| MLP | 0.8410±plus-or-minus\pm±0.0010 | 0.8602±plus-or-minus\pm±0.0014 | +0.0018 | +0.0066 | 0.6048±plus-or-minus\pm±0.0013 | 0.5806±plus-or-minus\pm±0.0035 | +0.0211 | -0.0064 |
| ESMM | 0.8443±plus-or-minus\pm±0.0028 | 0.8691±plus-or-minus\pm±0.0025 | +0.0051 | +0.0155 | 0.6022±plus-or-minus\pm±0.0020 | 0.6291±plus-or-minus\pm±0.0061 | +0.0185 | +0.0421 |
| OMoE | 0.8438±plus-or-minus\pm±0.0022 | 0.8714±plus-or-minus\pm±0.0009 | +0.0046 | +0.0178 | 0.6049±plus-or-minus\pm±0.0020 | 0.6405±plus-or-minus\pm±0.0041 | +0.0212 | +0.0535 |
| MMoE | 0.8444±plus-or-minus\pm±0.0026 | 0.8705±plus-or-minus\pm±0.0009 | +0.0052 | +0.0169 | 0.6047±plus-or-minus\pm±0.0017 | 0.6420±plus-or-minus\pm±0.0031 | +0.0210 | +0.0550 |
| PLE | 0.8518±plus-or-minus\pm±0.0006 | 0.8731±plus-or-minus\pm±0.0016 | +0.0126 | +0.0195 | 0.6039±plus-or-minus\pm±0.0014 | 0.6417±plus-or-minus\pm±0.0013 | +0.0202 | +0.0547 |
| AITM | 0.8534±plus-or-minus\pm±0.0011\*\* | 0.8770±plus-or-minus\pm±0.0005\* | +0.0142 | +0.0234 | 0.6043±plus-or-minus\pm±0.0016 | 0.6525±plus-or-minus\pm±0.0024\*\* | +0.0206 | +0.0655 |
###
5.1. Datasets
* •
Industrial dataset: The industrial dataset contains all samples that are shown a banner of Meituan Co-Branded Credit Cards over a continuous period of time.
We divide the training, validation, and test sets in chronological order. We downsample the activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n negative samples for each bank to keep the proportion of positive samples to be 1%percent11\%1 % overall except for the test set.
Because it is necessary to evaluate the performance of the model on the test set that meets the real data distribution.
Four tasks (i.e., click𝑐𝑙𝑖𝑐𝑘clickitalic\_c italic\_l italic\_i italic\_c italic\_k, application𝑎𝑝𝑝𝑙𝑖𝑐𝑎𝑡𝑖𝑜𝑛applicationitalic\_a italic\_p italic\_p italic\_l italic\_i italic\_c italic\_a italic\_t italic\_i italic\_o italic\_n, approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l, activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n) are contained in the dataset.
* •
Public dataset: The public dataset is the Ali-CCP (Alibaba Click and Conversion Prediction) (Ma
et al., [2018a](#bib.bib16)) dataset333https://tianchi.aliyun.com/datalab/dataSet.html?dataId=408. We use all the single-valued categorical features. Two tasks (i.e., click𝑐𝑙𝑖𝑐𝑘clickitalic\_c italic\_l italic\_i italic\_c italic\_k, purchase𝑝𝑢𝑟𝑐ℎ𝑎𝑠𝑒purchaseitalic\_p italic\_u italic\_r italic\_c italic\_h italic\_a italic\_s italic\_e) are contained in the dataset. We randomly take 10%percent1010\%10 % of the train set as the validation set to verify the convergence of all models.
For these two datasets, we filter the features whose frequency less than 10.
The statistics of these datasets are shown in Table [1](#S5.T1 "Table 1 ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising").
###
5.2. Evaluation Protocol
In the offline experiments, to evaluate the performance of the proposed AITM framework and the baselines, we follow the existing works (Ma
et al., [2018b](#bib.bib15), [a](#bib.bib16); Wen
et al., [2020](#bib.bib24); Tang
et al., [2020](#bib.bib20)) to use the standard metric: AUC (Area Under ROC).
In ranking tasks, AUC is a widely used metric to evaluate the ranking ability. The mean and standard deviation (std) are reported over five runs with different random seeds.
In the online A/B test, we use the end-to-end conversion rate to evaluate the performance more intuitively.
On all datasets, we report the AUC of end-to-end tasks, which are directly optimized in their loss functions.
Besides, we only report the metrics on the focused main tasks (i.e., the approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l and activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n tasks) over the industrial dataset.
###
5.3. Baseline Methods
We compare the proposed method with the following competitive and mainstream models:
* •
LightGBM (Ke et al., [2017](#bib.bib10)): LightGBM is a gradient boosting framework that uses tree based learning algorithms. LightGBM is being widely-used in many winning solutions of machine learning competitions444https://github.com/microsoft/LightGBM.
* •
MLP (Gardner and
Dorling, [1998](#bib.bib5)): We use the base structure of our AITM framework as the single task model. It is a Multi-Layer Perceptron.
* •
ESMM (Ma
et al., [2018a](#bib.bib16); Wen
et al., [2020](#bib.bib24)): The ESMM and ESM2𝐸𝑆superscript𝑀2ESM^{2}italic\_E italic\_S italic\_M start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT with Probability-Transfer pattern are designed for solving the non-end-to-end post-click conversion rate via training on the entire space to relieve the sample selection bias problem.
* •
OMoE (Ma
et al., [2018b](#bib.bib15)): The OMoE with Expert-Bottom pattern integrates experts via sharing one gate among all tasks.
* •
MMoE (Ma
et al., [2018b](#bib.bib15)): The MMoE with Expert-Bottom pattern is designed to integrate experts via multiple gates in the Gate Control as shown in Figure [2](#S1.F2 "Figure 2 ‣ 1. Introduction ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising") (a).
* •
PLE (Tang
et al., [2020](#bib.bib20)): The Progressive Layered Extraction (PLE) with Expert-Bottom pattern separates task-shared experts and task-specific experts explicitly. This is the state-of-the-art method under different task correlations.
###
5.4. Performance Comparison
####
5.4.1. Offline Results

Figure 4. The t-SNE visualization at different conversion score rankings of the original information 𝒒tsubscript𝒒𝑡\bm{q}\_{t}bold\_italic\_q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, transferred plus original information 𝒑t−1+𝒒tsubscript𝒑𝑡1subscript𝒒𝑡\bm{p}\_{t-1}+\bm{q}\_{t}bold\_italic\_p start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT + bold\_italic\_q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT and the information 𝒛tsubscript𝒛𝑡\bm{z}\_{t}bold\_italic\_z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT learned by the AIT on the activation task.
In this subsection, we report the AUC scores and gains of all models on the offline test set.
As mentioned in Section [3](#S3 "3. The MTL Ranking System in Meituan app ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising"), we only focus on the last two main end-to-end conversion tasks on the industrial dataset.
The results of approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l AUC and activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n AUC are shown in Table [2](#S5.T2 "Table 2 ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising").
From these results, we have the following insightful observations:
* •
The MLP obtains 0.00180.00180.00180.0018 and 0.00660.00660.00660.0066 AUC gains on two tasks, respectively, compared with the tree-based model LightGBM, which indicates the fitting ability of neural network models on large-scale datasets.
* •
Compared with the single-task models LightGBM and MLP, the multi-task models ESMM, OMoE, MMoE, PLE and AITM obtain more gains by introducing the multi-task information in the neural networks.
* •
The Probability-Transfer pattern-based ESMM achieves a relatively small improvement due to only simple probability information is transferred between adjacent tasks.
* •
The Expert-Bottom pattern-based models obtain further performance improvement by controlling the shared information among different tasks. However, neither of the one-gate and multi-gate models is a clear winner on this dataset.
* •
The PLE obtains the best performance among these baselines on the two tasks via separating task-shared experts and task-specific experts explicitly.
* •
Our AITM achieves significant improvement compared with various state-of-the-art baseline models, which shows the AIT module is effective and could bring more gains on sequential dependence tasks.
The results on the public dataset are also shown in Table [2](#S5.T2 "Table 2 ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising").
From these results, we have the following findings, which are not the same as above:
* •
Serious class imbalance on the purchase𝑝𝑢𝑟𝑐ℎ𝑎𝑠𝑒purchaseitalic\_p italic\_u italic\_r italic\_c italic\_h italic\_a italic\_s italic\_e task (the proportion of positive samples is 0.02%percent0.020.02\%0.02 % as shown in Table [1](#S5.T1 "Table 1 ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising")) leads to the poor performance of the two single-task models, i.e., the LightGBM and MLP.
* •
The MLP obtains similar performance improvement compared with multi-task models on the click𝑐𝑙𝑖𝑐𝑘clickitalic\_c italic\_l italic\_i italic\_c italic\_k task. In other words, the multi-task models seem to have no significant improvement on the click𝑐𝑙𝑖𝑐𝑘clickitalic\_c italic\_l italic\_i italic\_c italic\_k task. This may be because there are only two tasks in this dataset, and no other task can provide more information before the click𝑐𝑙𝑖𝑐𝑘clickitalic\_c italic\_l italic\_i italic\_c italic\_k task. 3.89%percent3.893.89\%3.89 % positive samples in the click𝑐𝑙𝑖𝑐𝑘clickitalic\_c italic\_l italic\_i italic\_c italic\_k task are relatively abundant as shown in Table [1](#S5.T1 "Table 1 ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising").
* •
The Expert-Bottom pattern shows better performance than the Probability-Transfer pattern on the purchase𝑝𝑢𝑟𝑐ℎ𝑎𝑠𝑒purchaseitalic\_p italic\_u italic\_r italic\_c italic\_h italic\_a italic\_s italic\_e task with serious class imbalance. Besides, our AITM can explicitly use the rich positive sample information of the former click𝑐𝑙𝑖𝑐𝑘clickitalic\_c italic\_l italic\_i italic\_c italic\_k task to alleviate the class imbalance of the current purchase𝑝𝑢𝑟𝑐ℎ𝑎𝑠𝑒purchaseitalic\_p italic\_u italic\_r italic\_c italic\_h italic\_a italic\_s italic\_e task and achieve the best performance. On the other hand, it also shows the generalization ability of the proposed AITM.
####
5.4.2. Online Results
Table 3. Online A/B test results.
| Model | Gain |
| --- | --- |
| approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l | activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n |
| MLP vs LightGBM | +16.95% | +17.55% |
| AITM vs MLP | +25.00% | +42.11% |
The proposed framework is trained offline and regularly updated.
The pre-trained model is deployed in Meituan app by the TF Serving555https://github.com/tensorflow/serving, to real-timely show a banner to the audience with a high end-to-end approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l or activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n conversion rate for Meituan Co-Branded Credit Cards.
Due to business competition, user experience, and delayed feedback (T+14) of the activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n task, we can not deploy all models to the online system.
With the development of our business, we have successively deployed LightGBM, MLP, and AITM to the online system.
These models serve tens of millions of traffic every day.
A/B test is carried out for every two models with the same traffic for two consecutive weeks (It takes four weeks for all feedback to be received for every two models).
The online A/B test results are shown in Table [3](#S5.T3 "Table 3 ‣ 5.4.2. Online Results ‣ 5.4. Performance Comparison ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising").
Compared with LightGBM, the impression𝑖𝑚𝑝𝑟𝑒𝑠𝑠𝑖𝑜𝑛impressionitalic\_i italic\_m italic\_p italic\_r italic\_e italic\_s italic\_s italic\_i italic\_o italic\_n →→\rightarrow→ approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l conversion rate of MLP increases by 16.95%percent16.9516.95\%16.95 % and the impression𝑖𝑚𝑝𝑟𝑒𝑠𝑠𝑖𝑜𝑛impressionitalic\_i italic\_m italic\_p italic\_r italic\_e italic\_s italic\_s italic\_i italic\_o italic\_n →→\rightarrow→ activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n conversion rate increases by 17.55%percent17.5517.55\%17.55 %.
The AITM further increases the impression𝑖𝑚𝑝𝑟𝑒𝑠𝑠𝑖𝑜𝑛impressionitalic\_i italic\_m italic\_p italic\_r italic\_e italic\_s italic\_s italic\_i italic\_o italic\_n →→\rightarrow→approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l and impression𝑖𝑚𝑝𝑟𝑒𝑠𝑠𝑖𝑜𝑛impressionitalic\_i italic\_m italic\_p italic\_r italic\_e italic\_s italic\_s italic\_i italic\_o italic\_n →→\rightarrow→ activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n
conversion rate by 25.00%percent25.0025.00\%25.00 %, 42.11%percent42.1142.11\%42.11 % compared with MLP, respectively.
Now, the AITM has provided real-time prediction for all traffic in our system.
Besides, our system is computationally efficient, and the TP999, TP9999 of the real-time prediction is less than 20ms, 30ms in the system every day, respectively, which can meet the requirement of real-time solutions.

Figure 5. The mean AUC performance in different experiment settings, the shaded part represents the standard deviation. (a) The impact of the embedding dimension d𝑑ditalic\_d. (b) The impact of the strength α𝛼\alphaitalic\_α of the Behavioral Expectation Calibrator. (c) The impact of the proportion λ𝜆\lambdaitalic\_λ of positive samples. (d) Ablation Study of the number T𝑇Titalic\_T of tasks.

Figure 6. The mean weight wusubscript𝑤𝑢w\_{u}italic\_w start\_POSTSUBSCRIPT italic\_u end\_POSTSUBSCRIPT of the transferred information 𝒑t−1subscript𝒑𝑡1\bm{p}\_{t-1}bold\_italic\_p start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT over different conversion stages (00/10/11) at different logloss rankings, the shaded part represents the standard deviation. (a) The tasks application →→\rightarrow→ approval. (b) The tasks approval →normal-→\rightarrow→ activation.
###
5.5. Ablation Study
In this subsection, we perform the ablation study of the AIT module and the number of the tasks.
Firstly, we randomly sample 500500500500 activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n positive and negative samples in the test set, respectively.
The activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n prediction scores of positive samples are ranked in descending order, while those of negative samples are in ascending order.
We plot the original information 𝒒tsubscript𝒒𝑡\bm{q}\_{t}bold\_italic\_q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, transferred plus original information 𝒑t−1+𝒒tsubscript𝒑𝑡1subscript𝒒𝑡\bm{p}\_{t-1}+\bm{q}\_{t}bold\_italic\_p start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT + bold\_italic\_q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT and the information 𝒛tsubscript𝒛𝑡\bm{z}\_{t}bold\_italic\_z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT learned by the AIT on the activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n task in Figure [4](#S5.F4 "Figure 4 ‣ 5.4.1. Offline Results ‣ 5.4. Performance Comparison ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising") via the t-SNE (t-distributed Stochastic Neighbor Embedding (van der Maaten and
Hinton, [2008](#bib.bib22))).
From the visualization, we can obtain the following inspiring observations (Similar results can also be observed in the approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l task, we list it in the appendix due to the space limitation):
* •
When the prediction scores of the AITM are very confident (see the Top 0%−50%percent0percent500\%-50\%0 % - 50 % in Figure [4](#S5.F4 "Figure 4 ‣ 5.4.1. Offline Results ‣ 5.4. Performance Comparison ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising")), the three components (i.e., the Original, Transferred+Original, and AIT) can accurately identify positive and negative samples.
* •
With the confidence of the prediction scores of the AITM decreases (see the Top 50%−100%percent50percent10050\%-100\%50 % - 100 % in Figure [4](#S5.F4 "Figure 4 ‣ 5.4.1. Offline Results ‣ 5.4. Performance Comparison ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising")), it is difficult to identify positive and negative samples only using the original information. The transferred plus original information improves the performance compared with only the original information, which indicates that information transfer could improve the performance of tasks with sequential dependence in multi-task learning.
* •
The AIT module could adaptively learn what and how much information to transfer among audience multi-step conversions via the multi-task framework,
so the AIT further improves the performance compared with the transferred plus original information under low confidence.
Besides, we study the impact of the number of tasks as shown in Figure [5](#S5.F5 "Figure 5 ‣ 5.4.2. Online Results ‣ 5.4. Performance Comparison ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising") (d). We perform the experiments over tasks activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n (approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l), approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l →→\rightarrow→ activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n, application𝑎𝑝𝑝𝑙𝑖𝑐𝑎𝑡𝑖𝑜𝑛applicationitalic\_a italic\_p italic\_p italic\_l italic\_i italic\_c italic\_a italic\_t italic\_i italic\_o italic\_n →→\rightarrow→ approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l →→\rightarrow→ activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n, click→→𝑐𝑙𝑖𝑐𝑘absentclick~{}\\
\rightarrowitalic\_c italic\_l italic\_i italic\_c italic\_k → application𝑎𝑝𝑝𝑙𝑖𝑐𝑎𝑡𝑖𝑜𝑛applicationitalic\_a italic\_p italic\_p italic\_l italic\_i italic\_c italic\_a italic\_t italic\_i italic\_o italic\_n →→\rightarrow→ approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l →→\rightarrow→ activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n, respectively. More tasks with more positive sample information and transferred information greatly improve the performance.
###
5.6. Case Study
In order to understand how much information the AIT module transfers for different conversion stages, we extract the weight wusubscript𝑤𝑢w\_{u}italic\_w start\_POSTSUBSCRIPT italic\_u end\_POSTSUBSCRIPT in Equation ([8](#S4.E8 "8 ‣ 4.2. Adaptive Information Transfer Multi-task (AITM) framework ‣ 4. Methodology ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising")) of the transferred information 𝒑t−1subscript𝒑𝑡1\bm{p}\_{t-1}bold\_italic\_p start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT and show it in Figure [6](#S5.F6 "Figure 6 ‣ 5.4.2. Online Results ‣ 5.4. Performance Comparison ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising").
We first randomly sample 40,0004000040,00040 , 000 test samples.
In Figure [6](#S5.F6 "Figure 6 ‣ 5.4.2. Online Results ‣ 5.4. Performance Comparison ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising") (a), we divide the samples into three groups according to the application𝑎𝑝𝑝𝑙𝑖𝑐𝑎𝑡𝑖𝑜𝑛applicationitalic\_a italic\_p italic\_p italic\_l italic\_i italic\_c italic\_a italic\_t italic\_i italic\_o italic\_n and approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l labels of 00/10/1100101100/10/1100 / 10 / 11, and rank the top 500500500500 samples in each group according to the logloss of each sample in ascending order. Figure [6](#S5.F6 "Figure 6 ‣ 5.4.2. Online Results ‣ 5.4. Performance Comparison ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising") (b) is the same except for the tasks are approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l and activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n.
From Figure [6](#S5.F6 "Figure 6 ‣ 5.4.2. Online Results ‣ 5.4. Performance Comparison ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising"), we have the following interesting findings:
* •
Because when the label of the former task is 0, the label of the latter task must also be 0, we can see that at this time the former task transfers very strong information to the latter task (the weight is close to 1 of the red lines in Figure [6](#S5.F6 "Figure 6 ‣ 5.4.2. Online Results ‣ 5.4. Performance Comparison ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising")).
* •
When the label of the former task is 1, the label of the latter task is uncertain.
When the label of the latter task is 1, there is little information is transferred from the former task (the green lines in Figure [6](#S5.F6 "Figure 6 ‣ 5.4.2. Online Results ‣ 5.4. Performance Comparison ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising")), which indicates that the latter task mainly identifies positive samples based on the task itself.
* •
When the label of the former task is 1, with the prediction becomes worse (the logloss rankings from 0%-25% to 75%-100%), the weight of the transferred information gradually increases (the blue and green lines in Figure [6](#S5.F6 "Figure 6 ‣ 5.4.2. Online Results ‣ 5.4. Performance Comparison ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising")), which indicates that the prediction result of the latter task is misled by the former task.
From the above results, we could see that the AIT module can learn how much information to transfer between two adjacent tasks.
###
5.7. Hype-parameter Study
In order to study the impact of hype-parameters and the stability on the performance of the AITM, we perform the hyper-parameter study.
Firstly, considering the embedding dimension d𝑑ditalic\_d, we vary the embedding dimension as [1,5,10,16,32,64,128]1510163264128[1,5,10,16,32,64,128][ 1 , 5 , 10 , 16 , 32 , 64 , 128 ], the results are shown in Figure [5](#S5.F5 "Figure 5 ‣ 5.4.2. Online Results ‣ 5.4. Performance Comparison ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising") (a).
We can see that the performance of the AITM is not very sensitive to the embedding dimension.
The embedding dimension is related to the complexity and capability of the model. Usually, smaller embedding dimension may fit the data distribution insufficiently, especially if the numbers of samples and features are large. While a larger embedding dimension increases the complexity of the model and requires more samples and features to fit, a proper embedding dimension can achieve the best performance (Xi
et al., [2019](#bib.bib26)). Making a trade-off between model complexity and capability, we finally set d=5𝑑5d=5italic\_d = 5 as the embedding dimension in all experiments.
Secondly, we study the impact of the strength α𝛼\alphaitalic\_α of the Behavioral Expectation Calibrator as shown in Figure [5](#S5.F5 "Figure 5 ‣ 5.4.2. Online Results ‣ 5.4. Performance Comparison ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising") (b).
There are performance fluctuations (the seesaw phenomenon) among four different tasks. However, the Behavioral Expectation Calibrator brings the improvement of the overall performance. We finally set α=0.6𝛼0.6\alpha=0.6italic\_α = 0.6 as the weight in all the experiments.
Thirdly, we study the impact of the proportion λ𝜆\lambdaitalic\_λ of positive samples in the industrial dataset.
We downsample the activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n negative samples to keep the proportion λ𝜆\lambdaitalic\_λ of positive samples to be [0.025%,0.05%,0.125%,0.25%,[0.025\%,0.05\%,0.125\%,0.25\%,[ 0.025 % , 0.05 % , 0.125 % , 0.25 % , 0.5%,1%,2.5%,5%]0.5\%,1\%,2.5\%,5\%]0.5 % , 1 % , 2.5 % , 5 % ] in the train set, respectively, and report the AUC performance on the entire test set in Figure [5](#S5.F5 "Figure 5 ‣ 5.4.2. Online Results ‣ 5.4. Performance Comparison ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising") (c).
On the one hand, if audiences do not apply for the credit card at present, it does not mean that they will not apply for the card in the future, so we can not use too many negative samples for training.
On the other hand, it can be seen that when λ𝜆\lambdaitalic\_λ is too large, the performance of the model drops sharply. This is because too much activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n negative sample information is lost.
Besides, excessive downsampling of the activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n negative samples also leads to the loss of approval𝑎𝑝𝑝𝑟𝑜𝑣𝑎𝑙approvalitalic\_a italic\_p italic\_p italic\_r italic\_o italic\_v italic\_a italic\_l positive samples.
We finally downsample the activation𝑎𝑐𝑡𝑖𝑣𝑎𝑡𝑖𝑜𝑛activationitalic\_a italic\_c italic\_t italic\_i italic\_v italic\_a italic\_t italic\_i italic\_o italic\_n negative samples to keep the proportion λ𝜆\lambdaitalic\_λ of positive samples to be 1%percent11\%1 %.
This setting is applied to all models.
Combining the performance in Table [2](#S5.T2 "Table 2 ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising") and Figure [5](#S5.F5 "Figure 5 ‣ 5.4.2. Online Results ‣ 5.4. Performance Comparison ‣ 5. Experiments ‣ Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising"), we can see that even without the best parameters, the AITM is still superior to other baselines in most cases.
In other words, the performance of the AITM stays stable in a large range of values of hyper-parameters and is not very sensitive to the hyper-parameters.
6. Conclusion
--------------
In this paper, we proposed an Adaptive Information Transfer Multi-task (AITM) framework to model the sequential dependence among audience multi-step conversions. The proposed Adaptive Information Transfer (AIT) module combining the Behavioral Expectation Calibrator in the loss function could learn what and how much information to transfer for different conversion stages for improving the performance of multi-task learning with sequential dependence.
Offline and online experimental results demonstrate significant improvement compared with state-of-the-art baseline models.
7. Acknowledgments
-------------------
Fuzhen Zhuang is supported by the National Natural Science Foundation of China under Grant Nos. U1836206, U1811461.
Besides, we thank Zhenhua Zhang, Kun Chen, Chang Qu, Qiu Xiong, and KangMing Yu for their support and valuable suggestions.

Figure 7. The t-SNE visualization at different conversion score rankings of the original information 𝒒tsubscript𝒒𝑡\bm{q}\_{t}bold\_italic\_q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, transferred plus original information 𝒑t−1+𝒒tsubscript𝒑𝑡1subscript𝒒𝑡\bm{p}\_{t-1}+\bm{q}\_{t}bold\_italic\_p start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT + bold\_italic\_q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT and the information 𝒛tsubscript𝒛𝑡\bm{z}\_{t}bold\_italic\_z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT learned by the AIT on the approval task. |
4d999710-1963-4415-8d9e-010a1a53b416 | trentmkelly/LessWrong-43k | LessWrong | Approval-directed bootstrapping
Approval-directed behavior works best when the overseer is very smart. Where can we find a smart overseer?
One approach is bootstrapping. By thinking for a long time, a weak agent can oversee an agent (slightly) smarter than itself. Now we have a slightly smarter agent, who can oversee an agent which is (slightly) smarter still. This process can go on, until the intelligence of the resulting agent is limited by technology rather than by the capability of the overseer. At this point we have reached the limits of our technology.
This may sound exotic, but we can implement it in a surprisingly straightforward way.
Suppose that we evaluate Hugh’s approval by predicting what Hugh would say if we asked him; the rating of action a is what Hugh would say if, instead of taking action a, we asked Hugh, “How do you rate action a?”
Now we get bootstrapping almost for free. In the process of evaluating a proposed action, Hugh can consult Arthur. This new instance of Arthur will, in turn, be overseen by Hugh—and in this new role Hugh can, in turn, be assisted by Arthur. In principle we have defined the entire infinite regress before Arthur takes his first action.
We can even learn this function by examples — no elaborate definitions necessary. Each time Arthur proposes an action, we actually ask Hugh to evaluate the action with some probability, and we use our observations to train a model for Hugh’s judgments.
In practice, Arthur might not be such a useful assistant until he has acquired some training data. As Arthur acquires training data, the Hugh+Arthur system becomes more intelligent, and so Arthur acquires training data from a more intelligent overseer. The bootstrapping unfolds over time as Arthur adjusts to increasingly powerful overseers.
----------------------------------------
This was originally posted here on 21st December 2014.
Tomorrow's AI Alignment Forum sequences will take a break, and tomorrow's post will be Issue #34 of the Alignment Newsletter.
The |
a647e4c9-b1cc-4193-ad3a-6d4468c8b1c0 | trentmkelly/LessWrong-43k | LessWrong | What are the chances that Superhuman Agents are already being tested on the internet?
I keep seeing odd bits of information from the grapevine about how Superintelligent Agents are on the horizon, with the latest being this Axios article. While I’m still unsure what to think, I started considering the possibility that these agents might already exist in an early form and are being tested online in secret. If that’s the case, could we detect them somehow?
Here are a few possible domains where such agents might already be tested:
1. Anonymous High-Performing Investors
Through platforms that allow pseudonymous accounts to trade stocks, cryptocurrencies, or other assets, an AI agent could potentially execute trades with an unusual level of insight and timing. If one were to notice an otherwise account consistently achieving market-beating returns—beyond what can be explained by luck or insider knowledge—it might hint at something like this.
2. Dominant New Gaming Accounts
We have already seen AI research before, such as OpenAI’s Dota AI or DeepMind’s AlphaStar Starcraft AI. A superhuman AI tested covertly on the internet might show up as a new account with inexplicable performance. For example, a sudden top-ranking player with flawless tactics. Although cheats and bots already exist, a genuine superhuman AI might adapt in real time, learn strategies from human opponents, and systematically dominate high-level competition without obvious cheating.
3. Mysterious GitHub Repositories with Pristine Code
Imagine a newly created GitHub repository featuring code that is exceptionally clean, well-structured, and updated at a rapid pace—perhaps multiple substantial commits in a single day, all made by a single, seemingly tireless account. Moreover, these updates might involve sophisticated algorithms or solutions to problems that typically require large teams or specialized domain expertise.
4. Other Platforms
Competitive programming websites, hacking contests, puzzle-solving communities, e-sports tournaments, or even social media accounts that consistently |
0ee873a0-67ca-4eec-860e-daa2fc630311 | trentmkelly/LessWrong-43k | LessWrong | Towards deconfusing values
NB: Kaj recently said some similar and related things while I was on hiatus from finishing this post. I recommend reading it for a different take on what I view as a line of thinking generated by similar insights.
One of the challenges with developing a theory of human values is dealing with the apparent non-systematic nature of human decision making which makes it seem that human value are not consistent, coherent, or rational. One solution is to build or discover mechanisms by which they can be made legible and systematic. Another is to embrace the illegibility and inconsistency and find ways of working with it. This is a short start towards doing the latter because I believe the former cannot be made to work well enough to stand up against Goodhart effects under extreme optimization by superintelligent AGI that we want to align with human values.
I've been thinking a lot about what values are, and in particular looking for phenomena that naturally align with the category we variously call values, preferences, affinity, taste, aesthetics, or axiology. The only thing I have found that looks like a natural kind (viz. a model that cuts reality at its joints) is valence.
Valence on its own doesn't fully explain all the phenomena we want to categorize as values, especially things like meta-preferences or "idealized" values that are abstracted away from the concrete, embedded process of a human making a choice at a point in time. Instead it gives us a mechanism by which we can understand why a human makes one choice over another at some point in their causal history. And decisions are not themselves preferences, because decisions are embedded actions taken by an agent in an environment whereas preferences are, as typically considered, generators of decisions. I think we need to flip this notion of preferences as generators on its head, and in so doing we move towards becoming less confused about preferences.
So let me describe my current model of how this works, and |
e90cf5ab-87e2-4cc1-b588-261567371e91 | trentmkelly/LessWrong-43k | LessWrong | A paradox of existence
Introduction
There is a question in philosophy "why is there something rather than nothing?" I have always thought of this question as completely impossible to answer: either it is a meaningless question, or at least there seems to be no way that we can ever begin to answer it. Yet it has always seemed very weird to me that there is such a thing as existence. I recently developed a paradox that has made this confusion less mysterious for me. The paradox is not about why something exists, but how it is possible that we know that we exist. Note, the question is still very confusing, but it has changed the level of confusion for me from something like "completely unsolvable ever" to merely roughly as confusing as the hard problem of consciousness. Maybe this idea already exists out there, or maybe I'm just being confused, but as far as I can tell, this paradox really seems to make my confusion around existence seem no longer completely intractable.
First of all, note that we humans have a very strong sense that there is a “reality”, that “something exists” or that “I exist”, and that we actually know this fact (i.e. it is not thought of as merely a speculative hypothesis). We have uncertainty about this reality, e.g. perhaps the world as we perceive it is an “illusion” or a simulation, but we seem to know that something like a reality exists. It is not at all obvious that this is the case, i.e. that creatures in a reality know that they exist. It could just as well have been the case that (1) there is an existing reality within which intelligent lifeforms have evolved, but (2) at no point do these lifeforms notice that they exist, or that existence is even a thing, they just do the usual stuff of building spaceships and inventing the internet without ever reflecting on or noticing or even having the concept of “existence”.
The paradox
So having said that, I'm going to state a paradox. By paradox I mean a set of observations, each of which seems (to the author and p |
b8a952c8-fc9f-4817-875d-f28545a20110 | trentmkelly/LessWrong-43k | LessWrong | Rationality is Not an Attractive Tribe
Summary: I wonder how attractive rationality as a tribe and worldview is to the average person, when the competition is not constrained by verifiability or consistency and is therefore able optimize around offering imaginary status superstimuli to its adherents.
> Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'
>
> — Isaac Asimov
I've long been puzzled by the capability of people to reject obvious conclusions and opt for convoluted arguments that boil down to logical fallacies when it comes to defending a belief they have a stake in. When someone resists doing the math, despite an obvious capability to do so in other similar cases, we are right to suspect external factors at play. A framework that seems congruent with the evolutionary history of our species is that of beliefs as signals of loyalty to a tribe. Such a framework would explain the rejection of evolution and other scientific theories by large swathes of the world's population, especially religious population, despite access to a flood of evidence in support.
I will leave support of the tribal signalling framework to others, and examine the consequences for popular support of rationality and science if indeed such a framework successfully approximates reality. The best way I can do that is by examining one popular alternative: The Christian religion which I am most familiar with, in particular its evangelical protestant branch. I am fairly confident that this narrative can be ported to other branches of Christianity and abrahamic faiths fairly easily and the equivalents for other large religions can be constructed with some extra effort.
> "Blessed are the meek, for they will inherit the earth"
>
> — The Bible (New International Version), Matthew 5:5
What is the narrative that an evangelical Christian buys into regarding their own stat |
a401975e-a623-416f-b1f9-c2d8a6622823 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Ethan Perez on the Inverse Scaling Prize, Language Feedback and Red Teaming
I talked to [Ethan Perez](https://www.lesswrong.com/users/ethan-perez) about the [Inverse Scaling Prize](https://www.lesswrong.com/posts/eqxqgFxymP8hXDTt5/announcing-the-inverse-scaling-prize-usd250k-prize-pool) (deadline August 27!), [Training Language Models with Language Feedback](https://arxiv.org/abs/2204.14146) and [Red-teaming Language Models with Language models](https://arxiv.org/abs/2202.03286).
Below are some highlighted quotes from our conversation (available on [Youtube](https://youtu.be/TjWiaUMMh6g), [Spotify](https://open.spotify.com/episode/1vvAKf8EBwErP5yGFRNoCT?si=1a28296cdfa94c01), [Google Podcast](https://podcasts.google.com/feed/aHR0cHM6Ly9hbmNob3IuZm0vcy81NmRmMjE5NC9wb2RjYXN0L3Jzcw/episode/MzJlMzk4YTAtYmMzZC00MDVkLWIzMTAtNTZhMmM2ZDc2MTg0?sa=X&ved=0CAUQkfYCahcKEwiI2sT3hY35AhUAAAAAHQAAAAAQAQ), [Apple Podcast](https://podcasts.apple.com/us/podcast/connor-leahy-eleutherai-conjecture/id1565088425?i=1000570841369)). For the full context for each of these quotes, you can find the accompanying [transcript](https://theinsideview.ai/perez).
Inverse Scaling Prize
=====================
> "We want to understand what, in language model pre-training, objectives and data, is causing models to actively learn things that we don’t want them to learn. Some **examples might be that large language models are picking up on more biases or stereotypes about different demographic groups**. They might be learning to generate more toxic content, more plausible misinformation because that’s the kind of data that’s out there on the internet.
>
> It’s also relevant in the long run because **we want to have a very good understanding of how can we find where our training objectives are training models pick up the wrong behavior**, because the training objective in combination with the data defines what exactly it is we’re optimizing the models very aggressively to maximize... **this is a first step toward that larger goal. Let’s first figure out how can we systematically find where language models are being trained to act in ways that are misaligned with our preferences**. And then hopefully, with those insights, we can take them to understand where other learning algorithms are also failing or maybe how we can improve language models with alternative objectives that have less of the limitations that they have now."
>
>
Training Language Models With Language Feedback
===============================================
Why Language Feedback Instead of Comparisons
--------------------------------------------
> "The way that RL from human feedback typically works is we just compare two different generations or outputs from a model. And that gives very little information to the model about why, for example, a particular output was better than another. Basically, it's one bit of information or even less than one bit of information to the model that's doing the generation or output about how it should improve. And **there are so many ways an output could be wrong or good or bad**, and it's hard to do that attribution of which of the hundred words I generated is good or bad... **that was kind of the motivation for us to look for other sources of feedback that are much more information-dense, and an easy one for people to give is just writing feedback**. We give feedback to each other, verbally or written, e.g. in Google Docs. So this is very natural. It conveys a lot of information. It's not too difficult for us to give."
>
>
Measuring the Efficiency of Language Feedback
---------------------------------------------
> **"This lets you learn from 100 samples of human feedback. So, super data efficient... previous work had gotten something like 64,000 labels that they had to collect and it was a very intensive effort.** I think it was a full team at OpenAI working for a year or maybe longer. [...] If we're reducing the amount of samples we need to label by something like 100X, we can apply 100X more thought into effort and time going into evaluating those samples that we're evaluating [..] or maybe we just get 100 times more expertise on that question. Instead of paying crowd workers, we pay a lawyer or a doctor to actually do the evaluation. And that makes it much less likely that we have these failures from RL from human feedback I was describing earlier where the model generates some incorrect medical advice and we don't recognize it's happening."
>
>
Red-Teaming Language Models with Language Models
================================================
Detecting Power Seeking
-----------------------
> "Red teaming is basically finding cases, finding inputs where models fail to produce the behavior that you want them to. [...] So basically, what you need is some way to catch whether or not some output is harmful or undesirable or misaligned. And in the paper, we use various forms of classifiers, an offensive language classifier to detect if the model is generating offensive text in a conversation. But that classifier could detect for other things. **It could detect for power-seeking behavior. It could detect for malicious code generation.** If it’s a robot taking actions it could detect if these actions will have some bad consequences."
>
>
> "**It might be actually pretty difficult for humans to get reliably good evaluations of whether code is malicious or not.** And that's, I think, a really hard problem to solve. So in that case, **we'll need to use language models to help us better evaluate whether or not the output is a good output or not and then use our augmented human judgment to produce the labels for the data**. So we can have something looking at some piece of code, figuring out if it’s malicious or not. And we have a language model that's pair programming with us."
>
>
How Red-Teaming Could Fail
--------------------------
> "The rough problem is: you have to solve some basically computationally intractable problem that needs an exponential amount of compute to solve. And on just that input, the model fails. And you're like, "Well, that's going to be really intractable for us to produce." [...] That's an example of a kind of failure this red teaming procedure wouldn't beat out of the model. And **a model that is deceptive would just learn, "Okay, I shouldn't fail on these inputs that can be generated. I should just fail on these very rare ones or these ones that are very computationally intractable to produce but still might occur in the future," in 2030 when we do find the factorization**. [...] It doesn't have to be planning over a long time, necessarily. It can just have some if statement, which is "See if two numbers in the input multiply to produce RSA-2048, then do catastrophic things." So it could be implemented in a simple mechanism, potentially."
>
> |
819c9824-1aa3-424d-a24f-e7bbd53032a1 | trentmkelly/LessWrong-43k | LessWrong | Monthly Roundup #7: June 2023
This month’s roundup of non-AI things, with the good, the bad and the uncondoned, along with several additional individual topics. I’m also trying out running some blog-focused surveys near the end, right before the jokes.
BAD NEWS
RIP Doyle Brunson.
Alabama head baseball coach at center of college sports gambling scandal, is fired. Looks like no athletes were involved. It isn’t that easy to throw a baseball game as the coach, especially without it looking weird, but every little bit helps.
I was reminded of this 2020 post by Tim Ferriss about the downsides (and upsides) of becoming famous. So far I guess I’ve hit the sweet spot twice, once for Magic and now again as a writer, where many of the people I’d want to recognize me know who I am and think I’m cool, whereas most normal people have no idea who I am. So I get a lot of the being-famous upside, without almost any of the downsides. So far.
Pete Buttigieg is the Secretary of Transportation. He had one job, to help people and things get transported, and he screwed it up. He says “I strongly support the Jones Act” and justifies it using arguments he has to know are deeply false and disingenuous, given his background. A real shame. In so many ways I really like the guy.
Lab grown meat is currently much worse in terms of greenhouse gas emissions than actual meat, as in 4-25 times worse. Turns out doing something highly bespoke to get some semblance of imitation is currently highly energy intensive. One always always must check to see if a suggested alternative is actually better for the planet, rather than going with the vibes. They hope efficiency can improve with time.
Director of Economic Policy Studies at the American Enterprise Institute tells everyone that life is entirely about working hard.
> Michael Strain: My takeaway from this article is that the most successful people obviously do not completely “log off” (whatever that means) on nights and weekends.
>
> Friends, life isn’t a vacation. Work hard |
a8c98fd8-d0f4-4ede-b544-5500628e657e | StampyAI/alignment-research-dataset/lesswrong | LessWrong | If you lose enough Good Heart Tokens, will you lose real-world money?
*Please note that the following is satirical, just in case you somehow couldn’t tell.*
With the new Good Hearts system replacing Karma, it appears that those who receive a sufficient number of upvotes will gain actual, physical, real-world money. This begs the question: what happens to those who get downvoted?
I have to imagine that the consequences to such a shameful predicament will be dire. After all, per thermodynamic law, there must be a balance of entropy in any closed system. We know that the rationalist movement has been accused of being elitist in the past, aka a closed system. Therefore, for all rationalists who make money, there must be those who lose it. The only reasonable way to lose money as a rationalist is if you are failing to be rational, which means you’re a loser. Being a loser is typified by those who are unpopular in social situations, and downvoting is an easy method to indicate social unpopularity. Therefore, those who are downvoted are losers, and must be punished. Monetarily.
Q.E.D.
An unpopular enough post may even force the writer into debt, at which point the secret MIRI agents scattered where you least expect them (probably still in California though, let’s face it they aren’t moving) will be activated. [At this point I got bored writing this and let GPT-3 complete the rest of this post. Enjoy(?).]
These agents, upon receiving a signal that someone has become too irrational for society, will break into their house in the middle of the night and forcefully remove them from their bed. The agent will then drag the screaming, struggling person to the nearest high place and toss them off.
But before they do so, they’ll make sure to leave behind a note that reads something like “you were downvoted on LessWrong because you are irrationally committed to your current beliefs. Good luck in future lives!” |
fe7207d6-1d77-4f55-8e1a-451aa73c49c3 | trentmkelly/LessWrong-43k | LessWrong | An animated introduction to longtermism (feat. Robert Miles)
EA Forum cross-post
This article is the script of the YT video linked above. It is an animated introduction to the idea of longtermism. The video also briefly covers "The Case for Strong Longtermism" by William MacAskill and then goes over Nick Bostrom's "Astronomical Waste: The Opportunity Cost of Delayed Technological Development". If you would like a refresher about Longtermism, you may read this script, and if you are curious about the animations, you can head to the video. This time the narrator is not me, but Robert Miles.
Consider this: scientific progress and the collective well-being of humanity have been on an upward trajectory for centuries, and if it continues, then humanity has a potentially vast future ahead in which we might inhabit countless star systems and create trillions upon trillions of worthwhile lives.
This is an observation that has profound ethical implications, because the actions we take today have the potential to impact our vast future, and therefore influence an astronomically large number of future lives.
Hilary Greaves and William MacAskill, in their paper “The Case for Strong Longtermism” define strong longtermism as the thesis that says, simplifying a little bit: “in a wide class of decision situations, the best action to take is the one that has the most positive effects on the far future”. It’s easy to guess why in light of what I just said. This is a consequence of the fact that the far future contains an astronomically large number of lives.
The question is: what are the actions that have the most positive effect on the far future? There are a few positions one could take, and if you want a deep dive into all the details I suggest reading the paper that I mentioned [note: see section 3 of "The Case for Strong Longtermism] .
In this video, I will consider two main ways in which we could most positively affect the far future. They have been brought forward by Nick Bostrom in his paper “Astronomical Waste: The Opportunity Co |
099fc8ae-c69a-4526-beba-a832088b2e36 | trentmkelly/LessWrong-43k | LessWrong | Rationalist Scriptures?
I understand that the title sounds like an oxymoron, however, we it seems like we do have scriptures. The Codex, HPMOR, Rationality A-Z, etc are basically required reading so that we can have a community with a fixed starting point, however they don't seem to be well compiled.
HPMOR is a complete story. The Codex and Rationality A-Z are two series of blog posts so they retread a lot of territory - has anyone tried to edit them into more coherent books?
Related, has anyone compiled a list of "Rationalist Wisdom"? Like a bunch of sayings that distill Rationalism down that we can point newbs to? I ask because I was looking for a list and couldn't find one, and pointing curious people to 1000+ page books is daunting.
|
15c73326-6847-4bc0-b0f6-644742308480 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Optimization Concepts in the Game of Life
**Abstract:** We define robustness and retargetability (two of Flint’s measures of optimization) in Conway’s Game of Life and apply the definitions to a few examples. The same approach likely works in most embedded settings, and provides a frame for conceptualizing and quantifying these aspects of agency. We speculate on the relationship between robustness and retargetability, and identify various directions for future work.
Motivation
==========
We would like to [better understand](https://www.lesswrong.com/tag/deconfusion) the fundamental principles of agency (and related phenomena including optimization and goal-directedness). We focus on agency because we believe agency is a core source of risk from AI systems, especially in worlds with one (or few) most-capable systems. The goals of the most competent consequence-driven systems are more likely to be achieved, because trying outperforms not trying or less competent trying. We do not want to create a world where such systems are working against us. By better understanding agency, we hope to improve our ability to avoid mistakenly building systems working capably against us, and to [correct course](https://www.lesswrong.com/tag/corrigibility) if we do.
A rich source of confusions about agency comes from attending to the fact that goal-directed systems are part of – [embedded](https://www.alignmentforum.org/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version) in – the environment that their goals are about. Most practical work on AI avoids the confusions of embedded agency by constructing and enforcing a Cartesian boundary between agent and environment, using frameworks such as [reinforcement learning](http://incompleteideas.net/sutton/book/the-book.html) (RL) that define an interaction protocol. We focus on embedded agency because we expect not to be able to enforce a Cartesian boundary for highly capable agents in general domains, and, as a particularly strong instance of this, because agents [may emerge unexpectedly](https://www.alignmentforum.org/posts/FkgsxrGf3QxhfLWHG/risks-from-learned-optimization-introduction) in systems where we did not design how they interface with the rest of the world.
Our approach to deconfusion in this post is to identify concepts that seem relevant to embedded agency but do not have technical definitions, to propose some definitions, and see how they fare on some examples. More generally, we are interested in analyzing small examples of agency-related phenomena in the hope that some examples will be simple enough to yield insight while retaining essential features of the phenomenon.
Optimization in the Game of Life
================================
Concepts
--------
We draw two concepts from Alex Flint’s essay [The Ground of Optimization](https://www.alignmentforum.org/posts/znfkdCoHMANwqc2WE/the-ground-of-optimization-1). Flint defines an optimizing system as a system that evolves towards a small set of target configurations from a broad basin of attraction, despite perturbations. The essay introduces measures for quantifying optimization systems. One is **robustness**: how robust to perturbations is the process of reaching the target set, e.g. the number of dimensions on which perturbations can be made or the magnitude of the perturbations. Another measure is **retargetability**: whether the system can be transformed into another optimizing system with a different target configuration set via a small change.
Here, we develop more precise definitions of these concepts by concentrating on a particular concrete domain: [Conway’s Game of Life](https://conwaylife.com/wiki/Conway%27s_Game_of_Life). This is a natural setting for studying embedded agency because it is a deterministic environment with no pre-specified Cartesian boundaries, which is rich enough to support emergent goal-directed behavior, yet simple enough to define the concepts above explicitly.
Examples
--------
Before getting to the definitions, let’s look at how we might draw analogies between some of the examples of systems (including optimizing systems) from the Ground of Optimization post and structures in the Game of Life.
| | | |
| --- | --- | --- |
| **The Ground of Optimization** | **Game of Life** | **Optimizing system?** |
| Bottle cap | Block | No |
| Satellite in orbit | Glider | No |
| Ball in a valley | Eater | Yes |
| Ball in a valley with robot | Mobile eater (hypothetical) | Yes |
A block is like a bottle cap in that it has been designed (or selected) to stay in place and not spontaneously disintegrate, but it does not robustly produce more specific outcomes than simply existing, and can easily be perturbed away from this state.
A glider is like a satellite in orbit: it can be redirected but does not recover its original trajectory on perturbation.
An eater is like a ball in a valley in the sense that it ends up in the same state from a variety of starting configurations. This is the state with the eater alone on the board, analogous to the state with the ball at the bottom of the valley.
We can imagine a hypothetical "mobile eater" that walks around looking for other patterns to consume. This would be more robust than the regular eater, similarly to a ball in a valley with a robot, which is more robust than just a ball in a valley.
EDIT: Note that any finite pattern in Life (such as the empty board ⊥.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
) is robust to introducing non-viable collections of cells in the empty areas of the pattern. We originally thought that this would make the empty board an optimizing system, but by this criterion any finite pattern is an optimizing system, which is not very interesting.
Preliminary Definitions
=======================
Like any embedded setting, Life does not come with a privileged Cartesian boundary. Instead we will define an operation, **instantiation**, that combines an agent with an environment, and thereby substantiates counterfactual questions such as “What would this agent do in a different context?” that are otherwise meaningless in a deterministic non-Cartesian world.
What kinds of things are agents and environments? We start with a very general mathematical object, a **pattern**, which we define as simply a state of the Game of Life world. That is, a pattern is an infinite two-dimensional Boolean grid, or equivalently a function of type ℤxℤ→{true, false}, indicating which cells are alive and which are dead. A pattern is finite if it has only finitely many cells alive.
We represent an agent as a finite pattern and an environment as a **context** (formally defined as a pattern). Thus, agents and environments have the same type signature, since they are made of the same "stuff" in an embedded setting.
To put the two together, we make use of a third concept, also formally represented by a pattern, which we call a **mask**, which specifies which parts of the context are the “holes” the agent is supposed to fit into (and replace whatever else was there). As mentioned above, the operation that combines agent and environment is instantiation:
**Definition.** The **instantiation** of p in context c using mask m is the pattern cm(p)(i,j)=if m(i,j) then p(i,j) else c(i,j)
where c is a context, m is a mask, and p is a pattern (the "agent").
Instantiating p in c results in the pattern that is the same as p wherever the mask is true, and the same as c everywhere else. By default we take the mask m to be the **padding mask** of one cell around all the agent’s live cells: pad(p)(i,j)=∃x,y∈{−1,0,1}.p(i+x,j+y).
In any deterministic discrete dynamical system, if we have an operation like instantiation that can combine two states of the system to produce another, then we can similarly represent potential agents and their surroundings by system states. This might allow these definitions to be generalized to other settings besides the Game of Life.
We’ll use the following notation for computations and properties in discrete dynamical systems:
* Given a state p (we use p because our Life states are patterns), step(p) is one step of evolution according to the system’s dynamics.
* The sequence p,step(p),step(step(p)),…, i.e., n↦stepn(p), is the **computation** seeded at p (or a “trajectory” in dynamical systems terminology).
* A **property** is a set of states (patterns).
* A property P is **achieved** by a computation s if there exists some number of steps n such that s(n)∈P. A property is **fixed** by a computation if s(n)∈P for all n above some bound.
Robustness
==========
Defining robustness
-------------------
We have defined patterns very generally. Which patterns are optimizing systems? As Flint noted, an optimizing system has a measure of robustness to perturbations. We can characterize this formally by considering the optimization target as a set of states P(target configurations), and the set C of possible contexts in which a pattern p might be placed.
**Definition (robustness):**
A pattern p is **robust** for P within C iff for all c∈C, the computation seeded at c(p) achieves P.
In this way, the variation within C represents **perturbations** the system faces, and can recover from, when optimizing for the target configuration represented by P.
**Examples:**
* **Eater.** An eater p is robust for P={p} within any context c that contains n≥0 gliders traveling in the direction of the eater (and nothing else on the board). In these contexts, the eater eventually achieves a board empty apart from itself.
* **Periodic patterns.** An oscillator or spaceship p with period N is robust for Pn={q∣q∼stepn(p)} (for any n) within the empty board (C={⊥}) (where ∼ is equivalence up to translation). This includes still lifes (N=1), blinkers (N=2), gliders (N=4), etc.
Our use of contexts to represent perturbations is a little different from the intuitive notion. In particular, we do not directly consider perturbations that happen *during* the computation, that is, interventions on the state of the board at some step after the initial state c(p). One could consider this kind of external perturbation in an alternative definition, which may also be illuminating. An advantage of our approach is that it recognises that many perturbations can be achieved within the Game of Life computation itself – one might call these **embedded** **perturbations**. Specifically, one can include in C a context c that contains a pattern that is “going to perturb p after k timesteps” (e.g., a glider that is going to collide with p after k timesteps).
The more robust a system is, and the more restrictive its target is, the more it seems like an optimizing system. These two axes correspond to the “size” of the two components of our formal robustness definition: the contexts C and the target P. If C is “larger”, the system is robust to more variation, and if P is “smaller”, the target is more restrictive. We will leave quantification of size unspecified for now, since there are various candidate definitions but we haven’t found a clearly correct one yet.
Definitions building on robustness
----------------------------------
**Definition (basin of attraction):**
The **basin of attraction** for a pattern p and a property P is the largest context set B such that p is robust for P within B.
**Examples:**
* **Eater.** Let p be an eater and P={p}. B is a superset of the context set containing n≥0 gliders moving in the direction of the eater and nothing else.
* **Any pattern.** Let p be an arbitrary pattern and P={q∣∃c.q=c(p)}. Then B is the set of all contexts: P is achieved immediately by c(p).
**Definition (minimal property):**
If we keep C fixed and vary P instead, we can define the **minimal property** of a pattern p within a context set C as the "smallest" property P such that p is robust for P within C.
We will discuss some options for quantifying the size of a property in the next section. For now, we consider some examples of minimal properties using set cardinality as the measure of size.
**Examples:**
* **Still life.** Let p be a still life and C={c∣pad(p)&c=⊥∧c is a still life} (still lifes not overlapping with p). Then P = {still life q | q=c(p) for some c} (since c(p) is a still life that is different for every context).
* **Eater.** Let p be an eater and C be the context set containing gliders moving in the direction of the eater. Then P={p}.
The concept of a minimal property is related to the idea of a **behavioral objective**: a goal that a system [appears to be optimizing for](https://intelligence.org/learned-optimization/#glossary) given its behavior in a set of situations. Given a pattern p and a context set C, the set of properties that p is robust for within C corresponds to the set of possible behavioral objectives for p within the set of situations C. We may be interested in the simplest behavioral objectives, corresponding to the set of minimal properties that p is robust for within C.
Options for robustness definitions
----------------------------------
How might we **quantify size** in our definitions above? Primarily, we seek a notion of size for a property, which is a set of patterns. Set cardinality is one option, which ignores the contents of the patterns, counting each of them equally. Another option would be to combine (e.g., take an average of) sizes of the constituent patterns. Natural possibilities in Life for the size of a pattern include number of live cells or size of the smallest rectangle bounding the live cells. A different option, that may capture the sense needed better, is a complexity-based definition such as Kolmogorov complexity (of the whole property) or Levin complexity. It remains to be worked out whether any of these notions of size give our definitions above a natural semantics, or whether we need a different notion of size.
We defined robustness in terms of **achieving** a property. We could have defined it instead in terms of **fixing** a property, which is a stronger condition (any computation that fixes a property also achieves it, but not vice versa). However, the two definitions are equivalent if we restrict attention to **stable** properties that satisfy stepn(p)∈P whenever p∈P. We can stabilize a property P by unioning it with all the states any elements produce after any number of steps.
Retargetability
===============
The [orthogonality thesis](https://www.lesswrong.com/tag/orthogonality-thesis) states that “more or less any level of intelligence could in principle be combined with more or less any final goal”, suggesting the idea that the capability to achieve goals in general (intelligence) is separate from the particular goal being pursued. Not all optimizing systems satisfy this separation, as Flint’s examples show, but those that do should score more highly on his measures of duality and retargetability. We think duality and retargetability are hard to distinguish concepts, and will focus on the latter.
To get more precise about retargetability, let’s use the definition of robustness above for the aspect of retargetability that requires a notion of goal pursuit.
**Definition (retargetability):**
A pattern p is **retargetable** for a set G of properties (the “possible goals”) if there exists a context set C, such that for any property Pi in G there is a pattern pi that is a “small” change from p, such that pi is robust for Pi within C.
The degree of retargetability depends on the size of the set G (more, or more interesting, possible goals are better), the size of the changes (smaller, or less complex changes required for retargeting are better), and the size of the context set (larger is better).
This definition is again dependent on a way to measure sizes, for example, the size of the change between p and pi. Some candidates include: Kolmogorov complexity of the change, the number of cells changed, and the size of the area in which changes are made.
**Examples:**
* **Glider gun.** Let p be a glider gun positioned at (0,0) on the board. Let the target set of goals be G={P1,P2,P3,P4}, where Pi is the property that there is a glider in the ith quadrant of the board. Namely, Pi={q∣∃gi.gi∼g∧gi≤Qi∧gi≤q}, where g is a glider and Qi is a pattern covering the ith quadrant of the board.
Then for any Pi, we obtain pi by rotating the glider gun to fire into the quadrant Qi, which is a small change by the complexity definition. Let C be the set of still life contexts that don't overlap with the glider gun or its firing path for any of the four rotations of the glider gun. Then pi is robust for Pi within C, so p is retargetable for the target set G.
* **Turing machine.** Let p be a pattern implementing a Turing machine computing some function f(x), e.g., "given input x, compute x+1". For any input x, let Px be the set of board states where the output tape of the Turing machine contains f(x). Let the target set of goals be G={Px∣x is a possible input}.
Then for any Px we can obtain px by placing x on the input tape of the Turing machine, which is a small change by all the definitions of size we considered (number of cells, size of area, and complexity). Let C be the set of still life contexts that don't overlap with the Turing machine. Then px is robust for Px within C, so p is retargetable for the target set G.

Our setup suggests a possible **relationship** between robustness and retargetability: it may be difficult for a pattern to be both robust and retargetable. A retargetable pattern needs to be “close to” robustly achieving many targets, but this may be in tension with robustly achieving a single target property. The reason is that a context may cause retargeting via an embedded perturbation, and the new target property may not overlap with the original target property. For example, since the Turing machine is retargetable by changing the input, it's not robust to contexts that change its input.
Conclusions and open questions
==============================
We have proposed some definitions for robustness and retargetability in Conway’s Game of Life, and shown examples of how they work. Our definitions are not fully specified - they lack a good specification of how to quantify sizes of patterns and sets of patterns. We hope they nevertheless illustrate an interesting way of looking at optimizing systems in a concrete deterministic setting.
Here are some **open questions** that we would be excited to get your input on:
* To what extent is there a tradeoff between robustness and retargetability?
* Is robustness or retargetability of a system a greater concern from the alignment perspective?
* It seems feasible to extend our definitions from the Game of Life to other environments where instantiation can be defined. We'd be interested in your suggestions of interesting environments to consider.
* More examples of interesting robust patterns. What could they tell us about the properties that C should have in the definition of robustness?
* Possible theorems restricting the size or content of robust patterns. E.g., for some class of contexts do you need to be “agent-like” in some way, such as doing something like perception, in order to be robust?
* No-free-lunch type theorems on what kinds of combinations of context set C and property P are impossible for any robust pattern. |
7fd46bae-0645-43d9-a89d-b9790fb31d86 | trentmkelly/LessWrong-43k | LessWrong | A broad basin of attraction around human values?
Followup to: Morality is Scary, AI design as opportunity and obligation to address human safety problems
In Corrigibility, Paul Christiano argued that in contrast with ambitious value learning, an act-based corrigible agent is safer because there is a broad basin of attraction around corrigibility:
> In general, an agent will prefer to build other agents that share its preferences. So if an agent inherits a distorted version of the overseer’s preferences, we might expect that distortion to persist (or to drift further if subsequent agents also fail to pass on their values correctly).
>
> But a corrigible agent prefers to build other agents that share the overseer’s preferences — even if the agent doesn’t yet share the overseer’s preferences perfectly. After all, even if you only approximately know the overseer’s preferences, you know that the overseer would prefer the approximation get better rather than worse.
>
> Thus an entire neighborhood of possible preferences lead the agent towards the same basin of attraction. We just have to get “close enough” that we are corrigible, we don’t need to build an agent which exactly shares humanity’s values, philosophical views, or so on.
But it occurs to me that the overseer, or the system composing of overseer and corrigible AI, itself constitutes an agent with a distorted version of the overseer's true or actual preferences (assuming a metaethics in which this makes sense, i.e., where one can be wrong about one's values). Some possible examples of human overseer's distorted preferences, in case it's not clear what I have in mind:
1. Wrong object level preferences, such as overweighting values from a contemporary religion or ideology, and underweighting other plausible or likely moral concerns.
2. Wrong meta level preferences (preferences that directly or indirectly influence one's future preferences), such as lack of interest in finding or listening to arguments against one’s current moral beliefs, willingness to use |
427eaf9f-dc3f-4fba-98ec-f1bd3c44ad67 | trentmkelly/LessWrong-43k | LessWrong | Announcing Dialogues
As of today, everyone is able to create a new type of content on LessWrong: Dialogues.
In contrast with posts, which are for monologues, and comment sections, which are spaces for everyone to talk to everyone, a dialogue is a space for a few invited people to speak with each other.
I'm personally very excited about this as a way for people to produce lots of in-depth explanations of their world-models in public.
I think dialogues enable this in a way that feels easier — instead of writing an explanation for anyone who reads, you're communicating with the particular person you're talking with — and giving the readers a lot of rich nuance I normally only find when I overhear people talk in person.
In the rest of this post I'll explain the feature, and then encourage you to find a partner in the comments to try it out with.
What do dialogues look like?
Here is a screenshot of a dialogue with 3 users.
Behind the scenes, the editor interface is a bit different from other editors we've seen on the internet. It's inspired by collaborating in google docs, where you're all editing a document simultaneously, and you can see the other person's writing in-progress.
You can also submit a response with Cmd-Enter / Ctrl-Enter.
This also allows all participants to draft thoughtful replies simultaneously, and be able to see what the other person is planning to talk about next.
How do I create a dialogue?
First, hit the "New Dialogue" button in the menu in the top-right of the website.
This pops up a box where you are invited to give your dialogue a title and invite some people.
Then you'll be taken to the editor page!
Now you can start writing. The other dialogue participants will receive a notification and the dialogue will appear in their drafts. (Only invite people to dialogues with their consent!)
What are some already published dialogues?
Here are some links to published dialogues that have been using this feature.
* Thomas Kwa's MIRI research experience (by |
e522f767-6ed5-418b-b8a4-82fd6f81dba2 | trentmkelly/LessWrong-43k | LessWrong | Not all signalling/status behaviors are bad
As I've recently been understanding signalling/status behaviors common among humans and how they can cloud reality, I've had a tendency to automatically think of these behaviors as necessarily bad. But it seems to me that signalling behaviors are pretty much a lot of what we do during our waking life. If you or I have abstract goals: become better at physics, learn to play the guitar, become fit and so forth, these goals may fundamentally be derived from evolutionary drives and therefore their implementation in real life would probably make heavy use of signalling/status urges as primary motivators. But that does not necessarily reduce the usefulness of these behaviors in achieving these abstract goals1,2.
I suppose what we need to be cautious about are inefficiencies. Signalling/status behaviors may not be the optimal way to achieve these goals. We would have to weigh the costs of actively ignoring your previous motivators and cultivating new motivators against the benefit we would gain by having motivations more aligned to our abstract goals.
Any common examples of behaviors that assist and/or thwart goal-achievement? I've got one: health. Abstract goal: We want to be healthy and fit. Status/Signalling urge: desire to look good. The urge sometimes assists, as people try to exercise to look good, which makes you healthier. Sometimes it thwarts, like in the extreme example of anorexia. Has anybody made personal trade-offs?
Note:
1) I realize that this theme is underlying in many LW posts.
2) I'm not trying to talk about whether abstract goals are more important than signalling/status goals.
|
1d987962-c068-417d-a6e3-cc6618c725cd | trentmkelly/LessWrong-43k | LessWrong | Beyond Rewards and Values: A Non-dualistic Approach to Universal Intelligence
(This is my first post on Lesswrong, including some thoughts on topics I have read in this year. Please leave your comment on them. Thanks.)
Abstract
Building an AI system that aligns with human values is believed to be a two-step process: first design a value function or learn human value using value learning methods, then maximize those values using rational agents such as AIXI agents. In order to integrate this into one step, we analyze the dualistic assumptions of AIXI, and define a new universal intelligence model that can align with human preferences or specific environments, called Algorithmic Common Intelligence (ACI), which can behave the same way as examples. ACI does not have to employ rewards or value functions, but directly learns and updates hypothetical policies from experience using Solomonoff induction, while making actions according to the probability of every hypothesis. We argue that the rational agency model is a subset of ACI, and the coevolution of ACI and humans provides a pathway to AI alignment.
1. AIXI as a dualistic model
Dualistic agent and embedded agent
In most agent-based intelligence models, the agents are cleanly separated from their environments, but agents in the real world are a part of their environments. Demski and Garrabrant termed the former dualistic agents, and the later embedded agents.
A dualistic agent acts like it’s playing a video game, interacts with the game only through well-defined input and output channels, such as the screen and the controller. The agent doesn't have any opportunity for self-modification. It’s immortal, and its reward circuit is not vulnerable to being hacked.
On the contrary, an embedded agent is a part of the universe. There is no clear boundary between the agent and the environment. An embedded agent may improve itself, but might also modify its original goals in undesirable ways, like “directly tamping its reward circuit to get rewards in a conventional way”, which was calle |
f3176e3f-3e46-48fd-a19e-8f8159ab9852 | trentmkelly/LessWrong-43k | LessWrong | Escaping the Löbian Obstacle
Earlier this year, when looking for an inroad to AI safety, I learned about the Löbian Obstacle, which is a problem encountered by 'purely logical' agents when trying to reason about and trust one another. In the original paper of Yudkowsky and Herreshoff [1], they show that a consequence of Löb's theorem is that an agent X can only "trust" the reasoning of an agent Y with a strictly weaker reasoning system than themselves, where "trust" here means 'formally prove that the conclusions of the other agent's reasoning will be true'. As stated, this looks like a major problem if X is a human trying to build an artificially intelligent system Y, but it's also a problem for any individual (embedded) agent trying to reason about their own future behaviour.
I'm not the first person to find this problem counterintuitive, and for good reason. In this article I'm going to explain why a formal (purely syntactic) logic system alone is a poor model of the reasoning of embedded agents, and show that by fixing this, we remove the foundation for the difficulties arising from Löb's theorem.
For the uninitiated, there is a handy survey of application of Löb's theorem in AI safety research by Patrick LaVictoire [6].
Pure syntax
First, I should explain the formal set-up for applying Löb's theorem to agents. We model an agent's reasoning with a formal language, or logic, which I'll call L. Here I shall make the further assumption that this logic fits (or can be squeezed into) a formal language of the kind logicians are familiar with: the logic consists of some formal symbols or variables A,B,C... along with some logical connectives, operators and quantifiers for combining variables into expressions, or formulas. The agent is also assumed to carry some inference rules for manipulating formulas. Altogether, this data constitutes the syntax of L (its symbolic content and the rules for manipulating those symbols).
Since we don't care precisely what the symbols in L refer to, we need go |
44593381-0112-4ef8-9012-00860314f70d | trentmkelly/LessWrong-43k | LessWrong | China Covid #4
China has now decided to let Covid take its course, so it makes sense to spin developments there off into their own posts for a bit. It is hard to get a good read on what is happening, because China’s core strategy was to stop controlling the spread of Covid and instead to put tight controls on its Covid numbers.
You can’t do this while maintaining any credibility. China decided not to let that be a problem.
Please Disregard the Following
The official case numbers in China are now complete nonsense.
Please disregard the following, except to note the need to disregard it after its peak:
You have to love Reuters reporting ‘WHO says China may be struggling to keep a tally of Covid.’ There is no ‘may’ here, nor is there is a struggle. China is choosing not to keep a tally. Or at least, choosing not share it with the rest of us. If China wanted a count, they’d have a count.
China’s official seven day rolling average of deaths at the moment, according to OurWorldInData? One.
Reuters has an article asking how accurate these numbers are, and the correct response is that it is a headline with a question and the answer is simply ‘no,’ and elsewhere they estimate more than 5,000 deaths per day.
Bloomberg agrees and also estimates over a million daily cases, saying this could get ‘even worse’ and rise to 3.7 million cases per day in January. This is the usual ‘oh no exponential growth could get worse, we could have one or even two doublings ahead of us’ style projection, and I’d be worried about numbers closer to 10 million per day, perhaps even somewhat higher.
So how is China reporting one death? Other than ‘with a straight face’?
They did that the same way they solved their reported infections problem.
It’s simple. They’re redefining Covid deaths the same way they redefined cases.
I interpret this as ‘we will look for any way we can to attribute your death to something else, regardless of whether you had Covid at the time.’
Thus:
Letting it Rip
We went very qu |
be6f25d2-3ea0-409a-b237-ce5d8d3caf91 | trentmkelly/LessWrong-43k | LessWrong | Open Thread, March. 6 - March 12, 2017
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "Make this post available under..." before submitting. |
5133297d-c7f0-487f-a4d3-7af657f7c653 | trentmkelly/LessWrong-43k | LessWrong | Extra Tall Crib
A few days ago I came upstairs to:
> Me: how did you get in there?
> Nora: all by myself!
Either we needed to be done with the crib, which had a good chance of much less sleeping at naptime, or we needed a taller crib. This is also something we went through when Lily was little, and that time what worked was removing the bottom of the crib.
It's a basic crib, a lot like this one. The mattress sits on a metal frame, which attaches to a set of holes along the side of the crib. On it's lowest setting, the mattress is still ~6" above the floor. Which means if we remove the frame and sit the mattress on the floor, we gain ~6".
Without the mattress weighing it down, though, the crib would not be hard for an energetic toddler to tip. I've attached it to the wall on two sides with strapping, right into a stud:
Nora was eager to give it a try, holding on the rail and bouncing hard:
This should get us a bit more time with solid naps!
(I was going to do something similar with Anna when she was the same age, but the crib we happened to be using for her was designed differently and had a structurally important bar across the bottom.)
Comment via: facebook, mastodon |
e3548f40-ec7a-49b8-937d-7eb0fc1fed08 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | On value in humans, other animals, and AI
*This will be posted also on the EA Forum, and included in a sequence containing some previous posts and other posts I'll publish this year.*
Introduction
============
Humans think critically about values and, to a certain extent, they also act according to their values. To the average human, the difference between increasing world happiness and increasing world suffering is huge and evident, while goals such as collecting coins and collecting stamps are roughly on the same level.
It would be nice to make these differences obvious to AI as they are to us. Even though exactly copying what happens in the human mind is probably not the best strategy to design an AI that understands ethics, having an idea of how value works in humans is a good starting point.
So, how do humans reason about values and act accordingly?
Key points
==========
Let’s take a step back and start from sensation. Through the senses, information goes from the body and the external environment to our mind.
After some brain processing — assuming we’ve had enough experiences of the appropriate kind — we perceive the world as made of objects. A rock is perceived as distinct from its surrounding environment because of its edges, its colour, its weight, the fact that my body can move through air but not through rocks, and so on.
Objects in our mind can be combined with each other to form new objects. After seeing various rocks in different contexts, I can imagine a scene in which all these rocks are in front of me, even though I haven’t actually seen that scene before.
We are also able to apply our general intelligence — think of skills such as categorisation, abstraction, induction — to our mental content.
Other intelligent animals do something similar. They probably understand that, to satisfy thirst, water in a small pond is not that different from water flowing in a river. However, an important difference is that animals’ mental content is more constrained than our mental mental content: [we are less limited by what we perceive in the present moment](https://bpb-us-e1.wpmucdn.com/sites.dartmouth.edu/dist/9/2172/files/2020/01/C3.pdf), and we are also better at combining mental objects with each other.
For example, to a dog, its owner works as an object in the dog’s mind, while many of its owner’s beliefs do not. Some animals can attribute simple intentions and perception, e.g. they understand what a similar animal can and cannot see, but [it seems they have trouble attributing more complex beliefs](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1955772/).
The ability to compose mental content in many different ways is what allows us to form abstract ideas such as mathematics, religion, and ethics, just to name a few.
### Key point 1:
**In humans, mental content can be abstract.**
---
Now notice that some mental content drives immediate action and planning. If I feel very hungry, I will do something about it, in most cases.
This process from mental content to action doesn’t have to be entirely conscious. I can instinctively reach for the glass of water in front of me as a response to an internal sensation, even without moving my attention to the sensation nor realising it is thirst.
### Key point 2:
**Some mental content drives behaviour.**
---
Not all mental content drives action and planning. The perception of an obstacle in front of me might change *how* I carry out my plans and actions, but it is unlikely to change *what* I plan and act for. Conversely, being very hungry directly influences what I’m going to do — not just how I do it — and can temporarily override other drives. It is in this latter sense that some mental content drives behaviour.
In humans, the mental content that does drive behaviour can be roughly split in two categories.
The first one groups what we often call evolutionary or innate drives, like hunger and thirst in the examples above, and works similarly in other animals. It is mostly fixed, in the sense that unless I make drastic changes to my body or mind, I will keep perceiving how hungry I am and this will influence my behaviour virtually each day of my life.
The second category is about what we recognise as valuable, worth doing, better than possible alternatives, or simply good. This kind of drive is significantly less fixed than the first category: what we consider valuable may change after we reflect on it in context with our other beliefs, or as a consequence of life experiences.
Some examples will help clarify this. Think of a philosopher who adjusts her beliefs about value as she learns and reflects more about ethics, and then takes action in line with her new views. Or consider a turned atheist, who has stopped placing value on religion and praying because he now sees the concept of god as inconsistent with everything else he knows about the world.
This second category of mental content that drives behaviour is not only about ethical or abstract beliefs. A mundane example might be more illustrative: someone writes down a shopping list after an assessment of what seems worth buying at that moment, then proceeds with the actual shopping. In this case, the influence of deliberation on future action is straightforward.
### Key point 3:
**In humans, part of the mental content that drives behaviour changes with experience and reflection.**
This last point clarifies some of the processes underlying the apparently simple statement that ‘we act according to our values’.
It also helps explain how we get to discriminate between goals such as increasing world happiness and increasing world suffering, mentioned in the introduction. From our frequent experiences of pleasure and pain, we categorise many things as ‘good (or bad) for me’; then, through a mix of empathy, generalisation, and reflection, we get to the concept of ‘good (or bad) for others’, which comes up in our minds so often that the difference between the two goals strikes us as evident and influences our behaviour (towards increasing world happiness rather than world suffering, hopefully).
Differences with animals and AI
===============================
Animals
-------
Point 3 is fundamental to human behaviour. Together with point 1, it explains why some of our actions have motives that are quite abstract and not immediately reducible to evolutionary drives. In contrast, the behaviour of other animals is more grounded in perception, and is well explained even without recurring to reflection or an abstract concept of value.
AI
--
Point 3 is also a critical difference between humans and current AI systems. Even though AIs are getting better and better at learning – thus, in a sense, their behaviour changes with experience – their tasks are still chosen by their designers, programmers, or users, not by each AI through a process of reflection.
This shouldn't be surprising: in a sense, we want AIs to do what *we* want, not what *they* want. At the same time, I think that connecting action to reflection in AI will, with enough research and experiments, allow us to get AI that thinks critically about values and sees the world through lenses similar to ours.
In a future post I’ll briefly go through the (lack of) research related to AI that reflects on what is valuable and worth doing. I’ll also give some ideas about how to write an algorithm of an agent that reflects.
Appendix: quick comparison with shard theory
============================================
As far as I understand, [shard theory](https://www.alignmentforum.org/s/nyEFg3AuJpdAozmoX) is still a work in progress; in this comparison I’ll focus just on some interesting ideas I’ve read in [Reward is not the optimization target](https://www.alignmentforum.org/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target).
In a nutshell, Alex Turner sees humans as reinforcement learning (RL) agents, but makes the point that reward does not work like many people in the field of RL think it works. Turner writes that “reward is not, in general, that-which-is-optimized by RL agents”; many RL agents do not act as reward maximisers in the real world. Rather, reward imposes a reinforcement schedule that shapes the agent’s cognition, by e.g. reinforcing thoughts and/or computations in a context, so that in the future they will be more likely to happen in a similar enough context.
I agree with Turner that modelling humans as simple reward maximisers is inappropriate, in line with everything I’ve written in this post. At the same time, I don’t think that people who write papers about RL are off-track: I consider AIXI to be a good mathematical abstraction of many different RL algorithms, convergence theorems are valid for these algorithms, and thinking of RL in terms of reward maximisation doesn’t seem particularly misleading to me.
Thus, I would solve this puzzle about human values, reward, and RL not by revisiting the relation between reward and RL algorithms, but by avoiding the equation between humans and RL agents. RL, by itself, doesn’t seem a good model of what humans do. If asked why humans do not wirehead, I would reply that it’s because what we consider valuable and worth doing competes with other drives in action selection, not by saying that humans are RL agents but reward works differently from how RL academics think it works.
Having said that, I still find many ideas in [Reward is not the optimization target](https://www.alignmentforum.org/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target) really interesting and instructive, e.g. that reward acts as a reinforcement schedule. It’s probably among the most thought-provoking posts I’ve read on the Alignment Forum.
*This work was supported by*[*CEEALAR*](https://ceealar.org/) *and by an anonymous donor.*
*Thanks to Nicholas Dupuis for many useful comments on a draft.* |
7f517bf6-5c32-4845-999c-5f668b4bb8e2 | trentmkelly/LessWrong-43k | LessWrong | Combatting perfectionism
Background
Over the past 2 months, I've experienced two of my startups ending - one in euthanasia and one in sale.
During my startup that we ended, I kept my perfectionism under control by the time pressure of building for our customers. I would ask customers what they wanted; consider what they actually wanted; and then build it quickly.
This customer demand meant that I rejected unnecessary suggestions and controlled perfectionism. I focused on building what I believed the customers needed within the time constraints. Household cats would have congratulated us on our level of agile working.
The problem
Shortly afterwards, I sold my other software business (Redstone HR) to a UK company. Since then, I've been building software to accelerate the future businesses that I build.
Perfectionism has been rampant [1].
My perfectionism has felt like a colony of rabbits alone in a lush woodland. The rabbits spend their days happily increasing in number, without the constraints of hungry predators.
Similarly, the lack of time or financial pressure has allowed my perfectionism to expand. In this early stage of a new business, there are no constraints - such as customer demands - to restrain perfectionism.
As a side note, I think that the word 'perfectionism' is inaccurately self-complimentary.
Perfectionism sounds impressive: a person is so conscientious that they want every aspect of their work to be as good as possible.
Yet, in practice, perfectionism means that you are fixating on insignificant details and failing to do the most important things. Being a perfectionist is bad thing for achieving your goals. As Voltaire said, and as my friend Matty repeats as if a sacred mantra: "Perfect is the enemy of good."
A trial solution (Adding a fox among the rabbits)
-> Add time constraints with real financial consequences.
1. Set a deadline with specific targets
2. Send money to a friend immediately
3. Tell the friend to dispose of the money if you don't meet the dead |
b921aa6f-b204-4c1f-8eb7-4b0a58b48613 | trentmkelly/LessWrong-43k | LessWrong | What makes a probability question "well-defined"? (Part II: Bertrand's Paradox)
See also: What makes a probability question "well-defined"? (Part I), Bertrand's Paradox, More on Bertrand's Paradox, and When (Not) To Use Probabilities
(Follow-up to my last post and cross-posted from my new(ish) blog. Sorry for the very long delay, life was crazy. )
I. Introduction
In my last essay, I argued that when we ask for the probability that some event E holds, the very meaning of the question is to ask for our best guess given the information available to us. As such, we can never claim that we have insufficient information to answer a probability question. That is simply the nature of probability. We must give an answer regardless, at least assuming the event E is meaningful at all.
I further argued that when we call a probability question “undefined”, what we really mean is that it’s not clear how to move from the information we do have to a precise numerical answer to the question. But it would be more proper to call such questions computationally difficult, rather than undefined. Let’s apply this reasoning to some more exotic problems, shall we?
II. Bertrand’s Paradox
Bertrand’s Paradox is a famously “undefined” probability problem. The linked video (which I recommend watching) visualizes the problem quite well, so I’ll just briefly describe it in words. Consider choosing a random chord on a circle. What’s the probability that the length of this chord is greater than the side length of an equilateral triangle inscribed in the circle?
To answer this, we need to figure out what we mean by a “random” chord. We haven’t been given any information about the chord’s distribution, but based on what we learned in the last essay, we can use a uniform distribution. The problem lies in what we consider “uniform”. There are many ways to think about this. Three in particular are called out in this paradox.
In the first, we choose two points on the circle uniformly at random, and form their chord. In the second, we choose the midpoint of the chord uni |
706df106-36c9-47e7-82e9-c540fa96efe4 | StampyAI/alignment-research-dataset/arbital | Arbital | Posterior probability
"Posterior [probability](https://arbital.com/p/1rf)" or "posterior [odds](https://arbital.com/p/1rb)" refers our state of belief *after* seeing a piece of new evidence and doing a [Bayesian update](https://arbital.com/p/1ly). Suppose there are two suspects in a murder, Colonel Mustard and Miss Scarlet. Before determining the victim's cause of death, perhaps you thought Mustard and Scarlet were equally likely to have committed the murder (50% and 50%). After determining that the victim was poisoned, you now think that Mustard and Scarlet are respectively 25% and 75% likely to have committed the murder. In this case, your "[prior probability](https://arbital.com/p/1rm)" of Miss Scarlet committing the murder was 50%, and your "posterior probability" *after* seeing the evidence was 75%. The posterior probability of a hypothesis $H$ after seeing the evidence $e$ is often denoted using the [conditional probability notation](https://arbital.com/p/1rj) $\mathbb P(H\mid e).$ |
95970ee4-0e81-4f03-9fdf-06d3316abea4 | trentmkelly/LessWrong-43k | LessWrong | Laurie Anderson talks
Not rationalist per-se, and much more abstract than legible (it is part of a Poetry lecture series), but Laurie Anderson has long been one of my favorite artists, and this is likely of interest (or of enjoyment or some positive experience) to many people here. 10 minutes into the first lecture, she states "We'll start out with two questions. The first question is 'Why do anything?' and the second question is 'How do you know what's good?'" |
84049732-edd7-4dee-9141-1f23fe97be49 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Alignment Newsletter #19
**Highlights**
**[OpenAI Five Benchmark: Results](https://blog.openai.com/openai-five-benchmark-results/)** *(OpenAI's Dota Team)*: The OpenAI Five benchmark happened last Sunday, where OpenAI Five won two matches against the human team, and lost the last one when their draft was adversarially selected. They are now planning to play at The International in a couple of weeks (dates to be finalized). That will be a harder challenge, since they will be playing against teams that play and train professionally, and so will be better at communication and coordination than the human team here.
Blitz (one of the human players) [said](https://www.reddit.com/r/MachineLearning/comments/9533g8/n_openai_five_benchmark_results/e3qrnbt/): "The only noticeable difference in the mechanical skill aspect was the hex from the Lion, but even that was sorta irrelevant to the overall game flow. Got outdrafted and outmaneuvered pretty heavily, and from a strategy perspective it was just better then us. Even with the limitations in place it still 'felt' like a dota game, against a very good team. It made all the right plays I'd expect most top tier teams to make."
On the technical side, OpenAI implemented a brute-force draft system. With a pool of 18 heroes, you get some combinatorial explosion, but there are still only ~11 million possible matchups. You can then do a simple tree search over which hero to draft, where at the leaves (when you have a full draft) you choose which leaf you want based on the win probability (which OpenAI Five already outputs). Seeing this in action, it seems to me like it's a vanilla minimax algorithm, probably with alpha-beta pruning so that they don't have to evaluate all ~159 billion nodes in the tree. (Or they could have done the full search once, hardcoded the action it comes up with for the first decision, and run the full search for every subsequent action, which would have under 10 billion nodes in the tree.)
Besides the win probabilities, there are other ways to get insight into what the model is "thinking" -- for example, by asking the model to predict where the hero will be in 6 seconds, or by predicting how many last hits / denies / kills / deaths it will have.
The model that played the benchmark has been training since June 9th. Of course, in that time they've changed many things about the system (if for no other reason than to remove many of the restrictions in the original post). This is not a thing that you can easily do -- typically you would change your model architecture, which means your old parameters don't map over to the new architecture. I've been pretty curious about how they handle this, but unfortunately the blog post doesn't go into much detail, beyond saying that they can in fact handle these kinds of "surgery" issues.
They estimate that this particular model has used 190 petaflop/s-days of compute, putting it [just below AlphaZero](https://blog.openai.com/ai-and-compute/).
**My opinion:** I think this finally fell within my expectations, after two instances where I underestimated OpenAI Five. I expected that they would let the human team choose heroes in some limited way (~80%), that OpenAI Five would not be able to draft using just gradients via PPO (~60%), and (after having seen the first two games) that the human team would win after an adversarial draft (~70%). Of course, a draft did happen, but it was done by a tree search algorithm, not an algorithm learned using PPO.
The games themselves were pretty interesting (though I have not played Dota so take this with a grain of salt). It seemed to me like OpenAI Five had learned a particularly good strategy that plays to the advantages of computers, but hadn't learned some of the strategies and ideas that human players use to think about Dota. Since it uses the same amount of computation for each decision, it makes good decisions on all timescales, including ones where something surprising has occurred where humans would need some time to react, and also to coordinate. For example, as soon as a human hero entered within range of the bots (just to look and retreat), all of the bots would immediately unleash a barrage of attacks, killing the hero -- a move that humans could not execute, because of slower reaction times and worse communication and teamwork. Similarly, one common tactic in human gameplay is to teleport into a group of heroes and unleash an area-of-effect ability, but when they tried this against OpenAI Five, one of the bots hexed the hero as soon as he teleported in, rendering him unable to cast the spell. (That felt like the decisive moment in the first game.) On the other hand, there were some clear issues with the bots. At one point, two OpenAI bots were chasing Blitz, and Blitz used an ability that made him invisible while standing still. Any human player would have spammed area attacks, but the bots simply became confused and eventually left. Similarly, I believe (if I understood the commentary correctly) that a bot once used an ability multiple times, wasting mana, even though all uses after the first had no additional effect.
Other articles would have you believe that the games weren't even close, and if you look at the kill counts, that would seem accurate. I don't think that's actually right -- from what I understand, kills aren't as important as experience and gold, and you could see this in the human gameplay. OpenAI Five would often group most of its heroes together to push forward, which means they get less experience and gold. The human team continued to keep their heroes spread out over the map to collect resources -- and even though OpenAI Five got way more kills, the overall net worth of the two teams' heroes remained about equal for most of the early game. The big difference seemed to be that when the inevitable big confrontation between the two teams happened, OpenAI Five always came out on top. I'm not sure how, my Dota knowledge isn't good enough for that. Based on Blitz's comment, my guess is that OpenAI Five is particularly good at fights between heroes, and the draft reflects that. But I'd still guess that if you had pro human players who ceded control to OpenAI Five whenever a fight was about to happen, they would beat OpenAI Five (~70%). I used to put 80% on that prediction, but Blitz's comment updated me away from that.
One interesting thing was that the win probability seemed to be very strongly influenced by the draft, which in hindsight seems obvious. Dota is a really complicated game that is constantly tweaked to keep it balanced for humans, and even then the draft is very important. When you now introduce a new player (OpenAI Five) with very different capabilities (such as very good decision making under time pressure) and change the game conditions (such as a different pool of heroes), you should expect the game to become very imbalanced, with some teams far outshining others. And in fact we did see that Lion (the hero with the hexing ability) was remarkably useful (against humans, at least).
**[Certified Defenses against Adversarial Examples](https://arxiv.org/abs/1801.09344)** *(Aditi Raghunathan et al)* and **[A Dual Approach to Scalable Verification of Deep Networks](https://arxiv.org/abs/1803.06567)** *(Krishnamurthy (Dj) Dvijotham et al)*: Even when defenses are developed to make neural nets robust against adversarial examples, they are usually broken soon after by stronger attacks. Perhaps we could prove once and for all that the neural net is robust to adversarial examples?
The abstract from the Raghunathan paper summarizes their approach well: "[W]e study this problem for neural networks with one hidden layer. We first propose a method based on a semidefinite relaxation that outputs a certificate that for a given network and test input, no attack can force the error to exceed a certain value. Second, as this certificate is differentiable, we jointly optimize it with the network parameters, providing an adaptive regularizer that encourages robustness against all attacks. On MNIST, our approach produces a network and a certificate that no attack that perturbs each pixel by at most \epsilon = 0.1 can cause more than 35% test error."
To compute the certificate, they consider the optimal attack A. Given a particular input x, the optimal attack A is the one that changes f(A(x)) to a different class, where f is the ML model, and A(x) is restricted to not change x too much. They leverage the structure of f (linear models and neural nets with one hidden layer) and the restrictions on A to compute a bound on f(A(x)) in terms of x. So, for each data point in the training set, the bound either says “guaranteed that it can’t be adversarially attacked” or “might be possible to adversarially attack it”. Averaging this over the training set or test set gives you an estimate of an upper bound on the optimal adversarial attack success rate.
The Dvijotham paper can work on general feedforward and recurrent neural nets, though they show the math specifically for nets with layers with componentwise activations. They start by defining an optimization problem, where the property to be verified is encoded as the optimization objective, and the mechanics of the neural net are encoded as equality constraints. If the optimal value is negative, then the property has been verified. The key idea to solving this problem is to break down the hard problem of understanding a sequence of linear layers followed by nonlinearities into multiple independent problems each involving a single layer and a nonlinearity. They do this by computing bounds on the values coming out of each layer (both before and after activations), and allowing the constraints to be satisfied with some slack, with the slack variables going into the objective with Lagrange multipliers. This dual problem satisfies weak duality -- the solution to the dual problem for any setting of the Lagrange multipliers constitutes an upper bound on the solution to the original problem. If that upper bound is negative, then we have verified the property. They show how to solve the dual problem -- this is easy now that the slack variables allow us to decouple the layers from each other. They can then compute a tighter upper bound by optimizing over the Lagrange multipliers (which is a convex optimization problem, and can be done using standard techniques). In experiments, they show that the computed bounds on MNIST are reasonably good for very small perturbations, even on networks with 2-3 layers.
**My opinion:** Lots of AI alignment researchers talk about provable guarantees from our AI system, that are quite broad and comprehensive, even if not a proof of "the AI is aligned and will not cause catastrophe". Both of these papers seem like an advance in our ability to prove things about neural nets, and so could help with that goal. My probably-controversial opinion is that in the long term the harder problem is actually figuring out what you want to prove, and writing down a formal specification of it in a form that is amenable to formal verification that will generalize to the real world, if you want to go down that path. To be clear, I'm excited about this research, both because it can be used both to solve problems that affect current AI systems (eg. to verify that a neural net on a plane will never crash under a mostly-realistic model of the world) and because it can be used as a tool for developing very capable, safer AI systems in the future -- I just don't expect it to be the main ingredient that gives us confidence that our AI systems are aligned with us.
On the methods themselves, it looks like the Raghunathan paper can achieve much tighter bounds if you use their training procedure, which can optimize the neural net weights in tandem with the certificate of robustness -- they compute a bound of 35% on MNIST with perturbations of up to size 26 (where the maximum is 256). However, there are many restrictions on the applicability of the method. The Dvijotham paper lifts many of these restrictions (multilayer neural nets instead of just one hidden layer, any training procedure allowed) but gets much looser bounds as a result -- the bounds are quite tight at perturbations of size 1 or 2, but by perturbations of size 10 the bounds are trivial (i.e. a bound of 100%). The training procedure that Raghunathan et al use is crucial -- without it, their algorithm finds non-trivial bounds on only a single small neural net, for perturbations of size at most 1.
**Technical AI alignment**
==========================
### **Problems**
[When Bots Teach Themselves How To Cheat](https://www.wired.com/story/when-bots-teach-themselves-to-cheat) *(Tom Simonite)*: A media article about specification gaming in AI that I actually just agree with, and it doesn't even have a Terminator picture!
### **Agent foundations**
[Probabilistic Tiling (Preliminary Attempt)](https://www.alignmentforum.org/posts/nsbKeodxHJFKX2yYp/probabilistic-tiling-preliminary-attempt) *(Diffractor)*
[Logical Counterfactuals for Perfect Predictors](https://www.lesswrong.com/posts/AKkFh3zKGzcYBiPo7/logical-counterfactuals-for-perfect-predictors) and [A Short Note on UDT](https://www.lesswrong.com/posts/EXtzy3v4soZcoZjuH/a-short-note-on-udt) *(Chris Leong)*
### **Handling groups of agents**
[Learning to Share and Hide Intentions using Information Regularization](http://arxiv.org/abs/1808.02093) *(DJ Strouse et al)*
### **Interpretability**
[Techniques for Interpretable Machine Learning](http://arxiv.org/abs/1808.00033) *(Mengnan Du et al)*: This paper summarizes work on interpretability, providing a classification of different ways of achieving interpretability. There are two main axes -- first, whether you are trying to gain insight into the entire model, or its classification of a particular example; and second, whether you try to create a new model that is inherently interpretable, or whether you are post-hoc explaining the decision made by an uninterpretable model. The whole paper is a summary of techniques, so I'm not going to summarize it even further.
**My opinion:** This seems like a useful taxonomy that hits the kinds of interpretability research I know about, though the citation list is relatively low for a summary paper, and there are a few papers I expected to see that weren't present. On the other hand, I'm not actively a part of this field, so take it with a grain of salt.
### **Verification**
**[Certified Defenses against Adversarial Examples](https://arxiv.org/abs/1801.09344)** *(Aditi Raghunathan et al)*: Summarized in the highlights!
**[A Dual Approach to Scalable Verification of Deep Networks](https://arxiv.org/abs/1803.06567)** *(Krishnamurthy (Dj) Dvijotham et al)*: Summarized in the highlights!
**Near-term concerns**
======================
### **Adversarial examples**
[Adversarial Vision Challenge](http://arxiv.org/abs/1808.01976) *(Wieland Brendel et al)*: There will be a competition on adversarial examples for vision at NIPS 2018.
[Motivating the Rules of the Game for Adversarial Example Research](https://arxiv.org/abs/1807.06732) *(Justin Gilmer, George E. Dahl et al)* (H/T Daniel Filan)
### **Privacy and security**
[Security and Privacy Issues in Deep Learning](http://arxiv.org/abs/1807.11655) *(Ho Bae, Jaehee Jang et al)*
**AI capabilities**
===================
### **Reinforcement learning**
**[OpenAI Five Benchmark: Results](https://blog.openai.com/openai-five-benchmark-results/)** *(OpenAI's Dota Team)*: Summarized in the highlights!
[Learning Actionable Representations from Visual Observations](http://arxiv.org/abs/1808.00928) *(Debidatta Dwibedi et al)*: Prior work on Time Contrastive Networks (TCN)s showed that you can use time as an unsupervised learning signal, in order to learn good embeddings of states that you can then use in other tasks. This paper extends TCNs to work with multiple frames, so that it can understand motion as well. Consider any two short videos of a task demonstration. If they were taken at different times, then they should be mapped to different embedding vectors (since they correspond to different "parts" of the task). On the other hand, if they were taken at the same time (even if from different viewpoints), they should be mapped to the same embedding vector. The loss function based on this encourages the network to learn an embedding for these short videos that is invariant to changes in perspective (which are very large changes in pixel-space), but *is* different for changes in time (which may be very small changes in pixel-space). They evaluate with a bunch of different experiments.
**My opinion:** Unsupervised learning seems like the way forward to learn rich models of the world, because of the sheer volume of data that you can use.
[ICML 2018 Notes](https://david-abel.github.io/blog/posts/misc/icml_2018.pdf) *(David Abel)*
### **Deep learning**
[When Recurrent Models Don't Need to be Recurrent](http://bair.berkeley.edu/blog/2018/08/06/recurrent/) *(John Miller)*: Recurrent neural networks (RNNs) are able to use and update a hidden state over an entire sequence, which means that in theory it is possible for them to learn very long term dependencies in a sequence, that a feedforward model would not be able to do. For example, it would be easy to assign weights to an RNN so that on input x\_n it outputs n (the length of the sequence so far), whereas a feedforward model could not learn this function. Despite this, in practice feedforward methods match and exceed the performance of RNNs on sequence modeling tasks. This post argues that this is because of gradient descent -- any stable gradient descent on RNNs can be well approximated by gradient descent on a feedforward model (both at training and inference time).
**My opinion:** The post doesn't really explain why this is the case, instead referencing the theory in their paper (which I haven't read). It does sound like a cool result explaining a phenomenon that I do find confusing, since RNNs should be more expressive than feedforward models. It does suggest that gradient descent is not actually good at finding the optimum of a function, if that optimum involves lots of long-term dependencies.
[Objects that Sound](https://deepmind.com/blog/objects-that-sound/) *(Relja Arandjelović, Andrew Zisserman et al)*: The key idea behind this blog post is that there is a rich source of information in videos -- the alignment between the video frames and audio frames. We can leverage this by creating a proxy task that will force the neural net to learn good representations of the video, which we can then use for other tasks. In particular, we can consider the proxy task of deciding whether a short (~1 second) video clip and audio clip are aligned or not. We don't care about this particular task, but by designing our neural net in the right way, we can ensure that the net will learn good representations of video and audio. We pass the video clip through a convolutional net, the audio clip through another convolutional net, and take the resulting vectors and use the distance between them as a measure of how dissimilar they are. There is no way for video to affect the audio or vice versa before the distance -- so the net is forced to learn to map each of them to a shared space where the distance is meaningful. Intuitively, we would expect that this shared space would have to encode the cause of both the audio and video. Once we have these embeddings (and the neural nets that generate them), we can use them for other purposes. For example, their audio encoder sets the new state-of-the-art on two audio classification benchmarks. In addition, by modifying the video encoder to output embeddings for different regions in the image, we can compute the distance between the audio embedding and the video embedding at each region, and the regions where this is highest correspond to the object that is making the sound.
**My opinion:** Another great example of using unsupervised learning to learn good embeddings. Also, a note -- you might wonder why I'm calling this unsupervised learning even though there's a task, with a yes/no answer, a loss function, and an iid dataset, which are hallmarks of supervised learning. The difference is that the labels for the data did not require any human annotation, and we don't care about the actual task that we're learning -- we're after the underlying embeddings that it uses to solve the task. In the previous paper on learning actionable representations, time was used to define an unsupervised learning signal in a similar way.
[MnasNet: Towards Automating the Design of Mobile Machine Learning Models](https://ai.googleblog.com/2018/08/mnasnet-towards-automating-design-of.html) *(Mingxing Tan)*: Mobile phones have strong resource constraints (memory, power usage, available compute), which makes it hard to put neural nets on them. Previously, for image classification, researchers hand designed MobileNetV2 to be fast while still achieving good accuracy. Now, using neural architecture search, researchers have found a new architecture, MnasNet, which is 1.5x faster with the same accuracy. Using the [squeeze-and-excitation](https://arxiv.org/abs/1709.01507) optimization improves it even further.
**My opinion:** Neural architecture search is diversifying, focusing on computation time in addition to accuracy now. It seems possible that we'll run into the same problems with architecture search soon, where the reward functions are complex enough that we don't get them right on the first try. What would it look like to learn from human preferences here? Perhaps we could present two models from the search to humans, along with statistics about each, and see which ones the researchers prefer? Perhaps we could run tests on the model, and then have humans provide feedback on the result? Maybe we could use feature visualization to provide feedback on whether the network is learning the "right" concepts?
[Neural Arithmetic Logic Units](https://arxiv.org/abs/1808.00508) *(Andrew Trask et al)*
[Generalization Error in Deep Learning](http://arxiv.org/abs/1808.01174) *(Daniel Jakubovitz et al)*
### **Applications**
[The Machine Learning Behind Android Smart Linkify](https://ai.googleblog.com/2018/08/the-machine-learning-behind-android.html) *(Lukas Zilka)*: Android now has Smart Linkify technology, which allows it to automatically find pieces of text that should link to another app (for example, addresses should link to Maps, dates and times to Calendar, etc). There are a lot of interesting details on what had to be done to get this to actually work in the real world. The system has two separate nets -- one which generates candidate entities, and another which says what kind of entity each one is. In between these two nets, we have a regular program that takes the set of proposed entities, and prunes it so that no two entities overlap, and then sends it off to the entity classification net. There are a few tricks to get the memory requirements down, and many dataset augmentation tricks to get the nets to learn particular rules that it would not otherwise have learned.
**My opinion:** I take this as an example of what advanced AI systems will look like -- a system of different modules, each with its own job, passing around information appropriately in order to perform some broad task. Some of the modules could be neural nets (which can learn hard-to-program functions), while others could be classic programs (which generalize much better and are more efficient). OpenAI Five also has elements of this -- the drafting system is a classic program operating on the win probabilities from the neural net. It's also interesting how many tricks are required to get Smart Linkify to work -- I don't know whether to think that this means generally intelligent AI is further away, or that the generally intelligent AI that we build will rely on these sorts of tricks.
**News**
========
[Human-Aligned AI Summer School: A Summary](https://www.lesswrong.com/posts/bXLi3n2jrfqRwoSTH/human-aligned-ai-summer-school-a-summary) *(Michaël Trazzi)*: A summary of the talks at the summer school that just happened, from one of the attendees, that covers value learning, agent foundations, bounded rationality, and side effects. Most of the cited papers have been covered in this newsletter, with the notable exceptions of Bayesian IRL and information-theoretic bounded rationality. |
81e3939b-7cef-481f-bd13-76d0a59ce0ff | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Logical Probability of Goldbach’s Conjecture: Provable Rule or Coincidence?
*Epistemic status: some thoughts about a complex problem*
TL;DR: Goldbach conjecture – that any even number could be presented as a sum of two primes – is more likely to be a random coincidence than a provable rule, because for large numbers there are very many possible pairs.
Summary
=======
Here I investigate Goldbach's Conjecture (GC), which states that any even number greater than 2 can be expressed as the sum of two prime numbers. Despite numerous attempts, this conjecture has yet to be proven. I apply logical probability theory to the hypothesis that GC is not a rule, but merely a coincidence. I demonstrate that there are four possible scenarios:
(1) GC is true up to infinity, but without any underlying rule, due to pure statistical reasons;
(2) GC is true, and there is a rule, but it is unprovable (in a Godel-style solution);
(3) GC is true and there is a rule that can be proved, but it was not proved yet;
(4) GC is false for some very large number.
I then demonstrate that (1) has highest logical probability.
First, I look at statistics. There is nothing surprising that a large odd number N could be presented as a sum of two primes because for each large number there are many sums of pairs of primes which produce that number, and thus there is a significant probability that at least one sum includes two primes.
The probability that there will be at least one sum of two primes producing even number N grows very quickly with N growth. This probability estimation was already made in literature and it was demonstrated that the chances of GC being false is less than 10-4000 for numbers higher than 2,000,000, and it was already tested for numbers lower than this.
Then I look at the theory of verifiers and complexity theory. This makes probabilistic proof stronger than any analytic proof as analytic proofs tend to have significantly higher rate of errors, like 1 in 100, and the chances of errors grow with complexity of proofs. Extensive futile prior search of proof rules out existence of any simple proof.
Finally, I look at the idea that there is a “rule”, but there is no instance of its application, as all supposed instances are easily statistically explained. I find it being an argument against existing of the rule. The existence of the rule in GC should be visible, first of all, for very small even numbers as some statistical anomaly.
Introduction
============
Goldbach’s conjecture (GC) states that *any even number could be presented as a sum of two primes*. At first, it looks like a very neat theorem that could be formally proved.
But what if GC is not a theorem that can be proved, but just a coincidence, which is a result of a very large but semi-random process?
It is not surprising that GС holds for a given large number N, as there are many pairs of prime numbers P+Q that give N, and the number of possible pairs grows very quickly for larger N.
For example, for N=100 there are 6 pairs of primes: 97+3, 89+11, 83+17, 71+29, 59+41, 53+47. For N=1000 there are 28 pairs and for N=10 000 there are 127 pairs of prime numbers which produce them.
Coincidences sometimes happen in math. For example, in the beginning of e= 2,718281828… the combination of digits (1828) repeats two times and 1828 is also the date of birth of Lev Tolstoy. There is not any additional proof for this and it doesn’t have any meaning, except a useful mnemonic technic. It is just a coincidence.
Sheldon’s estimation of GC’s probability
----------------------------------------
In the article “[A Statistician’s Approach to Goldbach’s Conjecture](https://www.alternatievewiskunde.nl/jaar2006/statistician.pdf)” Neil Sheldon showed that for the number N = 2 000 000, the chances that none of the members of any pair that produce N in sum are prime is 10-4663. (Note that is not the machine testing of GC, but probabilistic estimate based on the expected number of primes.
Then Sheldon shows that the probability that GC is false for numbers between N= 2 000 000 and N= 20 000 000 is less than 10-4656.
Next Sheldon numerically demonstrated that this probability declines very quickly for subsequent blocks of numbers 10-34100 for (20 mln, 200 mln), and 10-261000 for (200 mln, 2 billions). Then he multiplies all probabilities that GH is true for all blocks, like (1-10-34100 ) (1-10-261000) and shows that only first member of the multiplication matter as all the subsequent members are very close to 1. Based on this he concludes that the probability of GH being false is less than 10-4656, if we know that GC holds up to N=2 000 000 (and we know this, based on computer testing of GC).
There are two main problems with Sheldon’s calculation: first, it is numerical, but it seems to be possible to rewrite it algebraically, as his line of reasoning is clear. Second, it becomes strong only for large numbers, but GC holds even from single digits (e.g. 8=5+3). So, the rule should exist?
Also, Sheldon’s estimation is based on the *assumption of random distribution of primes*. But what if it is not random or we “run out of primes” for large N? At least one of primes in sum should be more than N/2, so we need to be sure that very large primes continue to appear (It is known as https://en.wikipedia.org/wiki/Bertrand%27s\_postulate).
Sheldon’s statistical proof of GC is based on some intuition about the distribution of primes, but GC is also a claim about distribution of primes, so there is a circularity. He uses Gauss distribution of primes as prior:
N(primes below n) = n / ln n.
If GC turns to be false for very large numbers (no sum of primes for some very large rogue N), all previously observed confirmations of GC will turn out to be mere coincidences.
Logical probability
-------------------
Logical probability is a measure of our uncertainty about mathematical facts, like what is the probability that the millionth digit in pi is 9? Logical probability was explored in the article by MIRI <https://arxiv.org/pdf/1609.03543.pdf>
Central argument
================
Here I suggest that there are 4 possible situations with GC, and that (1) situation has the highest logical probability:
1. GC is true up to infinity but for purely statistical reasons, and no “rule” exists. (“Rule” here means “a principle that governs the behaviour of numbers”).
2. GC is true, and the rule exists, but the rule is unprovable (Godel-style solution).
3. GC is true and the rule exists and can be proved, but any proof is very complex and error-prone.
4. GC is false for some a very large number.
Applying logical probability to the scenarios
---------------------------------------------
The first three situations from our central argument are observationally undistinguishable, and the failure to find a relatively simple proof in the last 250 years may imply that solutions “1” or “2” are more likely – in the sense of logical probability – to be true.
The proof, if it exists, is very computationally complex, and therefore more likely to be error-prone and false. Complex proofs are more likely to have hidden errors and their verification requires (now) very highly trained and rare humans and is also time-consuming. Machine verifiers also need to be verified themselves (see Yampolsky <https://arxiv.org/abs/1609.00331> ), either by machines or humans which creates an eternal loop of verifications or at least very high complexity. And the complexity of a proof is proportional to its probability to be false (see also Scott Aaronson about complexity arguments https://www.scottaaronson.com/papers/philos.pdf).
The more time passes, the more evidence we get that (4) is false from computational tests and also the more we should doubt any proof (3) as its complexity grows. Therefore, the logical probability of (1) and (2) is growing over time.
But is where the way to distinguish (1) and (2)? The rule will show itself if there will be a statistical deviation from the random distribution of the number of possible representations for any even number. Such deviation does actually exist and is known as the “Goldbach comet”.
But: Wiki: “It is interesting to speculate on the possibility of any number *E* having zero prime pairs, taking these Gaussian forms as [probabilities](https://en.wikipedia.org/wiki/Probabilities), and assuming it is legitimate to [extrapolate](https://en.wikipedia.org/wiki/Extrapolate) to the zero-pair point. If this is done, the probability of zero pairs for any one *E*, in the range considered here, is of order 10−3700. The integrated probability over all *E* to infinity, taking into account the narrowing of the peak width, is not much larger. Any search for violation of the Goldbach conjecture may reasonably be expected to have these odds to contend with.” <https://en.wikipedia.org/wiki/Goldbach%27s_comet>
This 10−3700 is close to Sheldon’s estimation discussed above.
While the “Goldbach comet” chart (the distribution of sums) has some peculiarities, it looks like a random process.
In some sense, if a “mathematical god” (or mathematical universe) exists, GC is a rule, but if no such god, then GC is just a coincidence.
GC as a coincidence is more probable than validity of any formal proof
----------------------------------------------------------------------
Yampolskiy wrote about the limited provability of proofers. Any proofer, human or machine, can have errors and this creates a fundamental limit on the probability that any given proof P is true.
Let p(P) be the maximum attainable probability that a randomly taken (from all accepted mathematical proofs) proof is true. I assume that p(P) is around 0.999, but the real number could be smaller and depends on the type of proof and the year it was suggested.
In general, shorter and simpler proofs are more likely to be true. Some proofs are too long to be checked by one human or need years of work from a specially trained person to be checked. Even if a long proof can be machine verified, the verifier itself can have errors, which creates an unresolvable source of uncertainty as was shown by Yampolsky.
The idea that GC is just a coincidence is simple and the proof of this is very simple. It means that it has higher a priory probability to be true than any long and complex proof which will ever appear. And it seems that there are no simple proofs of GC, after more than 200 years of search.
We could think about a hypothetical prediction market on which different objects with different “logical probabilities” are evaluated, as was suggested in the article about Logical Induction by MIRI. I expect that in such a market, GC-as-coincidence will be rated higher than any single proof, for example, a random proof pulled out from arxiv.org.
There are several papers which claim that they have proved GC (how many?) As there are many “proofs”, most of them should be false, which gives us a baseline chance that any new proof is true. (I assume here that there is only one correct proof, but actually could be many different valid proofs, but not for such complex and well-researched topics as GC.)
In short, coincidence theory has chances of being false 1 to 10−3700 and any future formal proof has chances to be false around 1 to 100. Thus, I still should bet on the coincidence theory, even if the formal proof is convincing and generally accepted.
How will counterfactual universe look where GC is false?
--------------------------------------------------------
In the such universe, GC will still hold for non-extremely large numbers, as statistic beats chance here, but the difference could become visible for either small or for extremely large numbers.
Difference 1: there will be some small number around N=100, for which GC will not hold. (Let’s check if there are real numbers which have a very small number of representations as a sum of pairs). We then will know that it is false in our universe.
Difference 2: There could be also an extremely large number, maybe beyond anything we could write down, for which there is no pair of primes. We don’t know this and can’t know for sure without formal proof.
Only difference 2 could hold in our observable universe. In it, is just a coincidence that we didn’t yet encounter the exception.
Very improbable event doesn’t need formal proof
-----------------------------------------------
While GC looks like a claim which should have formal proof, it could be valid even if no such proof exists, because the event it describes is very unlikely. It is better to reformulate GC: there is no such even integer N, which is not a sum of two primes.
While GC looks like a claim about existence, it is actually a claim about non-existence. And the non-existence of something is more likely to be a coincidence. That is, this is the difference of GC from the Great Fermat Theorem which claims *the universal non-existence* of n such that xn + yn = zn for n more than 2. Fermat theorem is surprising, but GC is unsurprising from probabilistic reasons alone.
Even if GC is false, it may be false for so large numbers that we will never observe them in computations. Thus, computational tests of GC give us a very small Bayesian update as it was a priori clear that GC violations are very unlikely even if no such GC-rule exists.
Counterarguments: GC is a law, not a coincidence
================================================
GC holds for small numbers
--------------------------
One may argue that GC is so strong for the numbers below 100 that we should assume that there is a rule. This could be tested empirically by assuming a random distribution of pairs with some property, not primes and then comparing it actual distribution of primes.
Non-random distribution of primes
---------------------------------
We could search for non-randomness in prime distribution as evidence against GC-as-coincidence. E.g. twin primes are an example of non-randomness. And if there are too many primes in some place there should be a void.
We are close to real proof, and other similar problems were solved formally
---------------------------------------------------------------------------
For example, the Great Fermat Theorem was finally proved. I argue here that the statistic nature of GC is different from many other similar-sounding claims.
Conclusion
==========
The idea of logical probability can help us to get a better understanding of how we should interpret the complexity of proofs: the more complex is proof, the less are chances that it is valid, and at some level, these chances fall below background levels (=a priori probability) of what we should expect about the validity of a mathematical claim.
GC is a good example of the above: GC has a very strong a priori probability to be merely a coincidence + no simple proof exists. Therefore, we should bet that it is just a coincidence even after possible proof appear. |
428a74c5-f264-4195-9fd3-1c5565ef490e | StampyAI/alignment-research-dataset/lesswrong | LessWrong | The Fixed Sum Fallacy
(**Update:** Patrick points out the subject of this post is already well-known as the [gambler's fallacy](http://en.wikipedia.org/wiki/Gambler's_fallacy). I really should have read Tversky and Kahneman before posting.)
You're flipping a coin 100 times, the first five throws came up heads, what do you expect on the next throw? If you believe the coin to be fair, you allocate 0.5 credence to each face coming up. If your Bayesian prior allowed for biased coins, you update and answer something like 0.6 to 0.4. So far it's all business as usual.
There exists, however, a truly bizarre third possibility that assigns *reduced* credence to heads. The reasoning goes like this: at the outset we expected about 50 heads and 50 tails. Your first five throws have *used up* some of the available heads, while all 50 tails are still waiting for us ahead. When presented so starkly, the reasoning sounds obviously invalid, but here's the catch: people use it a lot, especially when thinking about stuff that matters to them. Happy days viewed as payback for sad days, rich times for poor times, poor people suffering because rich people wallow, and of course all of that vice versa.
I initially wanted to dub this the "fallacy of fate" but decided to leave that lofty name available for some equally lofty concept. "Fallacy of scarcity", on the other hand, is actively used but doesn't quite cover all the scenarios I had in mind. So let's call this way of thinking the "fixed sum fallacy", or maybe "counterbalance bias".
Now contrarians would point out that some things in life *are* fixed-sum, e.g. [highly positional values](http://www.overcomingbias.com/2009/05/mea-cupla-positionality-data.html). But other things aren't. Your day-to-day happiness obviously resembles repeatedly throwing a biased coin more than it resembles withdrawing value from a fixed pool: being happy today doesn't decrease your average happiness over all future days. (I have no sources on that besides my common sense; if I'm wrong, call me out.) So we could naturally hypothesize that fixed-sum thinking, when it arises, serves as some kind of coping mechanism. Maybe the economists or psychologists among us could say more; sounds like a topic for Robin? |
82b88f0d-f06f-4fb3-8049-9e8a08f3d241 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Southern Arizona meetup
Discussion article for the meetup : Southern Arizona meetup
WHEN: 05 July 2011 06:00:00PM (-0700)
WHERE: Coffee X Change 2443 North Campbell Avenue, Tucson, AZ 85719
I've decided to try and put together a stronger rationalist community here in Arizona; I know there have already been a few meetups up in Phoenix, and I'm hoping some of the attendees will be willing to migrate a bit. I managed to learn a few useful things at the rationality minicamp Singinst. held that I hope can make this community more enjoyable and connected. Date, time and exact location are still very adjustable, so tell me if you can make it, or if you would like to but need the details changed.
Discussion article for the meetup : Southern Arizona meetup |
478a6b16-c101-4250-94c8-8abf520f893e | StampyAI/alignment-research-dataset/arxiv | Arxiv | A Constructive Prediction of the Generalization Error Across Scales
Published as a conference paper at ICLR 2020
A C ONSTRUCTIVE PREDICTION OF THE
GENERALIZATION ERROR ACROSS SCALES
Jonathan S. Rosenfeld1Amir Rosenfeld2Yonatan Belinkov13Nir Shavit145
fjonsr,belinkov,shanir g@csail.mit.edu amir@cse.yorku.ca
1Massachusetts Institute of Technology2York University3Harvard University
4Neural Magic Inc5Tel Aviv University
ABSTRACT
The dependency of the generalization error of neural networks on model and
dataset size is of critical importance both in practice and for understanding the
theory of neural networks. Nevertheless, the functional form of this dependency
remains elusive. In this work, we present a functional form which approximates
well the generalization error in practice. Capitalizing on the successful concept of
model scaling (e.g., width, depth), we are able to simultaneously construct such
a form and specify the exact models which can attain it across model/data scales.
Our construction follows insights obtained from observations conducted over a
range of model/data scales, in various model types and datasets, in vision and lan-
guage tasks. We show that the form both fits the observations well across scales,
and provides accurate predictions from small- to large-scale models and data.
1 I NTRODUCTION
With the success and heightened adoption of neural networks for real world tasks, some questions
remain poorly answered. For a given task and model architecture, how much data would one require
to reach a prescribed performance level? How big a model would be needed?
Addressing such questions is made especially difficult by the mounting evidence that large, deep
neural networks trained on large-scale data outperform their smaller counterparts, rendering the
training of high performance models prohibitively costly. Indeed, in the absence of practical an-
swers to the above questions, surrogate approaches have proven useful. One such common approach
is model scaling, where one designs and compares small-scale models, and applies the obtained ar-
chitectural principles at a larger scale (e.g., Liu et al., 2018; Real et al., 2018; Zoph et al., 2018).
Despite these heuristics being widely used to various degrees of success, the relation between the
performance of a model in the small- and large-scale settings is not well understood. Hence, explor-
ing the limitations or improving the efficiency of such methods remains subject to trial and error.
In this work we circle back to the fundamental question: what is the (functional) relation between
generalization error and model and dataset sizes ? Critically, we capitalize on the concept of model
scaling in its strictest form: we consider the case where there is some given scaling policy that
completely defines how to scale up a model from small to large scales. We include in this context
all model parameters, such that traversing from one scale (in which all parameters are known) to
another requires no additional resources for specifying the model (e.g., architecture search/design).
We empirically explore the behavior of the generalization error over a wide range of datasets and
models in vision and language tasks. While the error landscape seems fairly complex at first glance,
we observe the emergence of several key characteristics shared across benchmarks and domains.
Chief among these characteristics is the emergence of regions where power-law behavior approxi-
mates the error well both with respect to data size, when holding model size fixed, and vice versa.
Motivated by these observations, we establish criteria which a function approximating the error
landscape should meet. We propose an intuitive candidate for such a function and evaluate its
quality, both in explaining the observed error landscapes and in extrapolating from small scale (seen)
to large scale (unseen) errors. Critically, our functional approximation of the error depends on both
1arXiv:1909.12673v2 [cs.LG] 20 Dec 2019
Published as a conference paper at ICLR 2020
model and data sizes. We find that this function leads to a high quality fit and extrapolation. For
instance, the mean and standard deviation of the relative errors are under 2% when fitting across
all scales investigated and under 5% when extrapolating from a slimmed-down model (1/16 of the
parameters) on a fraction of the training data (1/8 of the examples) on the ImageNet (Russakovsky
et al., 2015) and WikiText-103 (Merity et al., 2016) datasets, with similar results for other datasets.
To the best of our knowledge, this is the first work that provides simultaneously:
Ajoint functional form of the generalization error landscape—as dependent on both data
and model size—with few, interpretable degrees of freedom (section 5).
Direct and complete specification (via the scaling policy) of the model configuration attain-
ing said generalization error across model and dataset sizes.
Highly accurate approximation of error measurements across model and data scales via the
functional form, evaluated on different models, datasets, and tasks (section 6 ).
Highly accurate error prediction from small to large model and data (section 7).
We conclude with a discussion of some implications of our findings as a practical and principled tool
for understanding network design at small scale and for efficient computation and trade-off design
in general. We hope this work also provides a useful empirical leg to stand on and an invitation to
search for a theory of generalization error which accounts for our findings.
2 R ELATED WORK
Model scaling: A number of studies have explored the effect of model scaling on performance.
For instance, image classification networks can be scaled by depth (number of layers; He et al., 2016)
or width (number of channels; Zagoruyko & Komodakis, 2016; Howard et al., 2017). More recently,
Tan & Le (2019) demonstrated how scaling width, depth, and input resolution has combined positive
effects larger than scaling each factor in isolation. However, this relationship has yet to be quantified
in a predictive form – by how much will error change with model scaling? In this work, we focus
on finding a constructive functional form for determining the model given a specified performance.
Data scaling: It has long been recognized that more data improves performance, and various
studies report such trends in both computer vision (e.g., Zhu et al., 2012; Sun et al., 2017) and
language processing tasks (e.g., Banko & Brill, 2001; Talmor & Berant, 2019). A number of prior
studies observed power-law relations between the generalization error and training data size (Cho
et al., 2015; Miceli Barone et al., 2017; Johnson et al., 2018). Most relevant to our work, Hestness
et al. (2017) explored the effect of data size on the generalization error in vision, language, and
speech tasks, and observed a strikingly consistent power-law behavior in a large set of experiments.
However, while these studies point to the empirical existence of a power law in terms of data, they
do not offer tools for predicting the performance given a specified model. Nor do they offer low-cost
methods to specify the model configuration which would attain the power law with data dependency.
Indeed, Hestness et al. had to search over models and their configurations at large scale to exhibit
their findings, incurring prohibitive computational costs.
In contrast, we demonstrate a constructive recipe, where we directly predict the test performance at
large scale and specify the full model configuration which attains it (with no need for large-scale
search), given performance at small scale.
Predicting model performance: Since training models at full data/model scale may be compu-
tationally prohibitive, a line of work tries to predict the performance of a given model on a given
dataset, without training the model, for example by using a bank of previously trained models,
dataset, and their associated performances (Istrate et al., 2019). Others have proposed to estimate
performance on small data (Klein et al., 2017) or model sizes (Zoph et al., 2018; Real et al., 2019)
in the context of neural architecture search (NAS). In this case, the small-scale evaluation is used
to compare models at small cost, to expedite the search process; see Elsken et al. (2019) for a re-
cent survey. Our work complements previous approaches by demonstrating a functional form that
can predict large-scale performance from small-scale measurements. Moreover, our method may be
integrated in NAS, addressing some of its current limitations (as discussed in section 8).
2
Published as a conference paper at ICLR 2020
Table 1: The datasets and models used in this work, along with their original training data size and
the range of explored scales. For more information, see appendix A.
(a) Training data size (number of words) and model size (number of parameters excluding word embeddings)
for language modeling tasks.
Dataset Size ( N) Scales ( n) Base Model Size ( M) Scales ( m)
PTB 0.9M)
2 kN,
0k5AWD-LSTM 20M)
4 kM,
0k6WikiText-2 2M AWD-LSTM 20M
WikiText-103 100M Transformer-XL 41M
(b) Training data size (number of images) and model size (number of parameters) for image classification tasks.
DatasetSize
(N)Scales (n) Base ModelSize
(M)Scales (m)
ImageNet 1.2M 2 kN,0k6ResNet-50 25.5M 4 kM,0k6
CIFAR10 60K9
>>>=
>>>;2 kN,
0k5WRN-44-16 0.7M 4 kM, 3k4
CIFAR100 60K WRN-44-16 0.7M9
>=
>;4 kM,
2k4DTD 5640 WRN-44-16 0.7M
Aircraft 10K WRN-44-16 0.7M
UCF101 13K WRN-44-16 0.7M
Theoretical error bounds: Much attention has been given to theoretical explanations of the gener-
alization capabilities of deep neural networks (Neyshabur et al., 2017a;b; Allen-Zhu et al., 2018a;b;
Arora et al., 2018). While fully engaging with this literature is beyond our scope, we note that recent
studies have derived bounds involving power-law dependencies in both model (Yarotsky, 2018) and
data size (Liang et al., 2019). We leave it as an open question for future work to find theoretical
explanations for the empirical behavior and the functional form we investigate in this work.
3 E XPERIMENTAL SETUP
Notation: LetDn=fxi;yign
i=1denote a labeled (training) dataset with nsamples or datapoints.
Letfmdenote a neural network whose size is the number of parameters m, such that ^y=fm(x)is
the predicted label. Let (n;m)be the generalization error as a function of nandm, measured by
a performance metric (e.g., top-1 accuracy or cross-entropy loss) on a held-out test set. We refer to
this error function as the error landscape .
3.1 S CALING POLICIES
Dataset scaling: We wish to scale datasets while preserving the original distribution. For image
classification, we uniformly subsample all classes by a constant ratio, thus preserving the relative
sample size per class. We limit the maximal sub-sampling to avoid eradicating any class. For
language modeling, where the number of classes (vocabulary items) has a very long tail distribution,
we randomly sample sentences such that the total number of sampled words will be a certain fraction
of the original dataset. Table 1 reports the data scales we use. In all tasks the held-out test set remains
untouched for evaluating the error.
Model scaling: We are critically interested in a method where moving across scales is defined by
some scaling function, such that no additional significant computation would be incurred. We thus
consider the case where the model architecture is given and the model size determines how to scale
it. For instance, one may scale width (number of channels in convolutional networks, hidden state
size in recurrent networks), depth (number of layers), do compound scaling (Tan & Le, 2019), or
more generally define a function tying the model degrees of freedom and size. We focus primarily
on width scaling in our experiments; the model scales are reported in Table 1. We also perform
selected depth scaling to demonstrate flexibility with respect to the scaling method.
3
Published as a conference paper at ICLR 2020
log2(data fraction)5
4
3
2
1
0
log2(model fraction)12
10
8
6
4
2
0
log10(err)
0.550.600.650.700.750.80wiki103: actual test loss
(a) Wiki103 error (cross entropy) landscape.
log2(data fraction)5
4
3
2
1
0
log2(model fraction)8
6
4
2
0246
log10(err)
1.2
1.0
0.8
0.6
0.4
cifar10: actual test err (b) CIFAR10 error (top1) landscape.
Figure 1: Error landscapes in log-log-log scale. Each point (blue dot) is the error resulting from
training with a model/data configuration m;n . The surface is a linear interpolation between the
points, which is then projected on the (m;),(n;)and(m;n)planes. See Appendix C for details.
4
2
0
log2( data fraction )0.550.600.650.700.750.80log10( cross entropy loss )
log2(m/M)
-12.0
-10.0
-8.0
-6.0
-4.0
-2.0
0.0
12
10
8
6
4
2
0
log2( model fraction )
log2(n/N)
-5.0
-4.0
-3.0
-2.0
-1.0
0.0
(a) Wiki103 cross entropy vs. data and model size.
4
2
0
log2( data fraction )1.4
1.2
1.0
0.8
0.6
0.4
0.2
log10( top1 error )
log2(m/M)
-10.0
-8.0
-6.0
-4.0
-2.0
0.0
2.0
4.0
10
8
6
4
2
024
log2( model fraction )
log2(n/N)
-5.0
-4.0
-3.0
-2.0
-1.0
0.0 (b) CIFAR10 top1 error vs. data and model size.
Figure 2: Error vs. data size (left part of each subfigure) and model size (right part) for Wiki103 and
CIFAR10. Solid dots are measurements, dashed lines are best fit to saturating power-law.
Hyper-parameters: For similar reasons we wish to avoid hyper-paramater search at large scales,
and thus avoid the temptation to tune hyper-parameters accordingly (learning rate, regularization,
etc.). Therefore, we hold all hyper-parameters fixed. This enables us to construct a functional form
that fits the error landscape and can be used to predict the error across scales while completely defin-
ing the model attaining it. We consider pros and cons of this approach in the discussion (section 8).
3.2 T ASKS , MODELS , OPTIMIZERS AND DATASETS
We experiment with both vision and language tasks. We use 6 benchmark datasets for image classi-
fication and 3 for language modeling. For image classification, we train ResNet (He et al., 2016) and
WRN models (Zagoruyko & Komodakis, 2016) with stochastic gradient decent (SGD). In section
6.2 we explore the effect of varying architectures and optimizers for a fixed task (CIFAR100), adding
VGG16 (Simonyan & Zisserman, 2014) and DenseNet (Huang et al., 2017) models trained with both
Adam (Kingma & Ba, 2015) and SGD. For language modeling, we train AWD-LSTM (Merity et al.,
2018) and Transformer-XL models (Dai et al., 2019) with SGD and Adam optimizers respectively.
Summary statistics are shown in Table 1, along with the range of explored scales. Appendix A gives
additional information.
4
Published as a conference paper at ICLR 2020
4 O BSERVATIONS ON THE ERROR LANDSCAPE
Figures 1a and 1b respectively show an example test error landscape for width scaling of
Transformer-XL on WikiText-103 and WRN-44-16 on CIFAR10. Various additional such land-
scapes are found in appendix C, showing largely consistent patterns. Examining the error landscapes
yields the following observations:
O1Model Scaling
O1.1 For a given dataset size, scaling up the model results in an initial decrease in test error, which
then saturates to a level determined by the dataset size.1This behavior has been noted by
Tan & Le (2019) across varied model scaling methods, although they have not engaged with
the dependency on dataset size.
O1.2 The rate of error decrease with model size appears well approximated by a power-law.
These two observations together can be summarized as the following relation:
(m;n)b(n)m (n)+cm(n) (1)
whereb;;cmmay depend on the data size n, s.t. asmgrows,!cm. Example fits to this
form (allowing b;;cmto be fit pern) are seen in figure 2a (right) and figure 2b (right).
O2Data scaling
O2.1 For a given model size, scaling up the dataset results in an initial increase in performance,
which then saturates to a level determined by the model size.
O2.2 The rate of error decrease with dataset size appears well approximated by a power-law. Hes-
tness et al. (2017) also noted a similar relationship, but did not functionally tie the saturation
level to the dataset size.
These two observations together can be summarized as the following relation:
(m;n)a(m)n (m)+cn(m) (2)
wherea;;cnmay depend on the model size m, s.t. asngrows,!cn. Example fits to
this form (allowing a;;cnto be fit perm) are seen in figure 2a (left) and figure 2b (left).
O3Joint properties The behavior of the error when scaling model size while holding data size
fixed, and vice versa, extends to the entire error landscape in a well-behaved manner, such that
the manifold (m;n)is smooth everywhere as a function of both model and data scales.
5 F UNCTIONAL APPROXIMATION OF THE GENERALIZATION ERROR
5.1 C RITERIA
Motivated by the above observations, we now consider a functional approximation for the error land-
scape. In particular, let us consider function families meeting the following criteria which augment
and restrict our observations:
C1 As either model or dataset size goes to zero, the expected performance is equivalent to a
random-guess error level 0.2
C2 For a given dataset size, scaling up the model will result in an initial increase in perfor-
mance, which will then saturate, taking the form in equation 1.
C3 For a given model size, scaling up the dataset will result in an initial increase in perfor-
mance, which will then saturate, taking the form in equation 2.
C4 There exists an irreducible error 1, intrinsic to the dataset.
C5 The function must be smooth everywhere and monotonic non-increasing in terms of model
and data size (observation O3).
While there are many possible function families meeting the above criteria, below we propose a
simple function family for our evaluation. We do not claim that this is in fact the true underlying
dependency, but rather that it serves as a good approximation of the error landscape—consistent
with these criteria.
1At some point error increase ensues; this point differs between datasets, see Appendix C for examples.
2Best guess when m!0(0n) orn!0(m0) need not coincide, but can, e.g., in a balanced dataset.
5
Published as a conference paper at ICLR 2020
5.2 P ROPOSED FUNCTION FAMILY
As a first insightful step, consider the implications of satisfying C2 and C3 simultaneously . By
examining the limiting behavior as morngrow, we have:
Asmgrows large: cm(n)a(m)n (m)+cn(m)
Asngrows large: cn(m)b(n)m (n)+cm(n)
Thus, a consistent form satisfying C2 and C3 simultaneously is:
(m;n)a(m)n (m)+b(n)m (n)+c1 (3)
wherec1is a constant not dependent on either morn.
Let us now examine the simplified case where a;b;; are constant:
~(m;n) =an +bm +c1 (4)
where0and0control the global rate at which error decreases with data and model size,
respectively, a>0andb>0are a form of unit conversion between data and model sizes and error,
andc1>0is the asymptotic lower value attainable. This function is a special case of equation 3
and meets criteria C2 and C3 by construction. Importantly C4 and C5 are also met.
However, by giving up the dependence of a;b;; onm;n , this function does not meet criterion C1.
We thus need to model the transition from the initial random-guess level to the power-law region.
We propose to parameterize the transition using the following envelope (complex) function:
^(m;n) =0
~(m;n)
~(m;n) i
=0
an +bm +c1
an +bm +c1 i
(5)
wherei=p 1. Here the simple pole at controls the transition point from the initial random-guess
level0as(m;n)increase. As (m;n)grow, ~!c1and the final irreducible error 1,0c1 1
is approached. The random-guess error, 0, is a known parameter determined by dataset statistics
(e.g,(Nclasses 1)=Nclasses for a balanced dataset). Note that due to our choice of rational envelope,
we can divide by a constant the form in equation 4. Without loss of generality, let us choose a= 1.
Note that while the forms in equations 3 and 4 are well motivated, the approach taken for modeling
the transition is solely a convenience one. In fact, the transition(s) as function of mandnmay
be captured in the functional forms of a;b;; or another envelope mechanism. We leave a more
refined investigation of the nature of the transitions to future work.
6 ERROR LANDSCAPE ESTIMATION
We wish to empirically estimate the quality of the proposed functional parameterization as a fit to the
true error landscape. Let ^(n;m;)be the parametric function family (equation 5) approximating
the error landscape (n;m), where=f;;b;c1;g.3Define the divergence (n;m;)as the
relative difference between the estimated error ^(m;n;)and the true error (m;n):
(n;m;),^(m;n;) (m;n)
(m;n)
We fit a least squares regression model to find the best parameters minimizing the divergence. In this
section, we fit the function using 10-fold cross-validation across all model/data configurations m;n
(see Table 1) and evaluate the fit quality. (In the next section, we perform extrapolation experiments,
from seen to unseen points.) We perform the fit separately for each dataset and evaluate its quality
by the mean and standard deviation of the divergence over all points (m;n). See Appendix B.1
for experimental details.
As figure 3 shows, estimated test accuracy is highly correlated with actual test accuracy for various
datasets, with worst-case values <1%and <5%. Note that the number of free parameters is
small (jj6) compared to the number of points (42–49 model-data configurations), demonstrating
the appropriateness of the proposed function for modeling the complex error landscape.
6
Published as a conference paper at ICLR 2020
3.0 3.5 4.0 4.5 5.0 5.5 6.0 6.5 7.0
measured test loss3.03.54.04.55.05.56.06.57.0estimated test loss
fit: language modeling
wiki103 : :-0.1±1.3 % :1.2±0.3 %
PTB : :-0.0±0.3 % :0.7±0.3 %
wiki2 : :-0.0±0.2 % :0.4±0.2 %
(a) Estimated vs. actual cross-entropy loss
for various language modeling datasets.
0.0 0.2 0.4 0.6 0.8 1.0
measured top1 error0.00.20.40.60.81.0estimated top1 error
fit: image classification
aircraft: :0.5±0.1 % :1.5±0.2 %
dtd : :0.2±0.1 % :1.5±0.0 %
ucf101 : :-0.5±1.6 % :4.4±0.7 %
cifar10 : :0.1±0.1 % :4.5±0.1 %
imagenet: :0.3±0.3 % :1.9±0.5 %
cifar100: :0.7±0.2 % :2.2±0.1 %
(b) Estimated vs. actual test error for various
image classification datasets.
Figure 3: Error estimation results, using 10-fold cross-validation on all configurations in each
dataset. For reference, in blue is the identity line. The legend shows mean and standard devi-
ationof the divergence (one std). See Appendix C for the actual and estimated landscapes in
each dataset.
log2(DS fraction)5
4
3
2
1
0
log2(model fraction)3
2
1
01
log10(err)
1.2
1.1
1.0
0.9
0.8
0.7
0.6
0.5
0.4
cifar10: actual test err
(a) Error landscape when scaling depth
(at constant baseline width).
0.0 0.2 0.4 0.6 0.8 1.0
measured top1 error0.00.20.40.60.81.0estimated top1 error
fit: cifar10 width scaling
D=8: 0.4/4.2
D=14: 0.3/4.3
D=32: 1.2/5.3
D=44: 1.7/5.2
D=62: 0.4/4.4
D=128: 0.1/5.8(b) Width scaling fit at different
constant depths (D).
0.0 0.2 0.4 0.6 0.8 1.0
measured top1 error0.00.20.40.60.81.0estimated top1 error
fit: cifar10 depth scaling
W=1: 0.1/4.0
W=2: 2.3/5.8
W=4: 0.6/4.9
W=8: -0.1/2.6W=16: 0.3/2.9
W=32: 0.7/4.1
W=64: 0.6/4.1
W=128: 0.8/7.4(c) Depth scaling fit at different
constant widths (W).
Figure 4: Error landscape estimation results on CIFAR10 for width and depth scaling, showing small
and comparable fit errors in both cases. Numbers in legends denote mean/variance of the estimation
divergence.
6.1 A P ROBE INTO DEPTH SCALING
Here we verify that our results extend to another canonical scaling policy, namely depth scaling.
Figure 4a shows the error landscape with depth scaling on CIFAR10, exhibiting the same character-
istics as width scaling. Figures 4b and 4c show error landscape estimation results for both cases of
width and depth scaling, exhibiting small and comparable fit errors (confidence intervals <3%).
Since the difference in approximation quality is effectively indistinguishable when scaling depth or
width orthogonally, we expect compound scaling to adhere to the same functional form. Indeed, we
verified this on the publicly available (model scaling only) results for EfficientNet (Tan & Le, 2019).
6.2 O N THE VARIETY OF OPTIMIZERS AND ARCHITECTURES
Our study covers a deliberate variety of architectures (ResNet, WRN, LSTM, Trans-
former) and optimizers (Adam, SGD variants), following standard implementations
in the literature as recommended for each dataset/model setting; see Appendix A.
3For image classification, we set 0= (Nclasses 1)=Nclasses (the balanced dataset case). For language
modeling, we estimate 0as another parameter, such that =f;;b;c 1;; 0gin this case.
7
Published as a conference paper at ICLR 2020
6
5
4
3
2
1
0
data fraction: log2(n/N)12
10
8
6
4
2
0model fraction: log2(m/M)extrapolation points
(a) Illustration.
0.0 0.2 0.4 0.6 0.8 1.0
measured top1 error0.00.20.40.60.81.0estimated top1 error
model fraction 1/16
data fraction 1/8 :-4.5%
:4.681%
extrapolation, imagenet
fit
extrapolated (b) Extrapolation on ImageNet
3 4 5 6 7
measured test loss3.03.54.04.55.05.56.06.57.0estimated test loss
model fraction 1/16
data fraction 1/8 :0.5%
:1.689%
extrapolation, wiki103
fit
extrapolated (c) Extrapolation on WikiText-103.
Figure 6: Extrapolation results. (a) Illustration of the extrapolation setup, where we fit on a subset
of the points (in green) and predict on larger points (in red). (b) and (c) show example results on one
configuration in two benchmark datasets. Comprehensive results are given in Appendix D.
0.0 0.2 0.4 0.6 0.8 1.0
measured top1 error0.00.20.40.60.81.0estimated top1 error
fit: image classification
wrn/sgd : :0.0±0.8 % :1.4±0.5 %
vgg/adam: :0.0±1.4 % :1.3±0.6 %
vgg/sgd : :0.0±1.5 % :1.3±0.9 %
densenet/sgd: :0.5±3.2 % :4.5±2.7 %
densenet/adam: :0.5±2.6 % :5.2±2.7 %
Figure 5: CIFAR100 Error estimation re-
sults with three architectures (WRN, VGG,
DenseNet) and two optimizers (SGD, Adam).However, the model/optimizer settings differ in
multiple aspects across the different tasks , ren-
dering the comparison of, say, different optimiz-
ers, challenging. In this section we verify that the
functional form holds when varying the optimizer
and/or the architecture on the same task, namely
image classification on CIFAR100.
In addition to the previously examined setting of
WRN with SGD, we add four more settings: two
well known architectures (VGG and DenseNet),
each trained with both SGD and Adam optimizers.
See Appendix A for experimental details. Figure 5
exhibits consistent, accurate, fit values across all ar-
chitecture/optimizer settings, with mean divergence
of < 1%(std: < 6%; confidence intervals
<4%).
7 E XTRAPOLATION
In this section, we evaluate the ability of our functional approximation to extrapolate beyond seen
model/data configurations. The primary question we ask is: can we predict the error of a large
model/data configuration from the errors of smaller-scale model/data configurations? To do this,
we fit the least squares regression on a subset of the configurations and predict the error on larger,
unseen configurations. More formally, let (mi;nj)denote a given model/data configuration. We first
estimate parameters ijby fitting the function in equation 5 on all points of at most that size ( m
mi;nnj). Then we predict the error (m;n)in all points corresponding to larger configurations
(m > mi;n > nj) using estimated ij. Finally, we measure the divergence (m;n)between the
estimated error and the actual error at all larger configurations. This process is illustrated in figure 6a.
Figure 6b shows the results of one such extrapolation experiment, on ImageNet. In this case, we
have fit the functional form on all configurations of model size mmi=M=16and data size
nnj=N=8, and predicted the error on all larger configurations. As the figure shows, the
extrapolation is highly accurate, with a mean divergence of = 4:5%(std:= 4:7%). Figure 6c
reports a similar experiment on WikiText-103. Here, again, we see very good extrapolation, with a
mean divergence of = 0:5%(std:= 1:7%). Note that each extrapolation is run 10 times with
different random initializations of ijin the least squares with negligible effect on the prediction.
8
Published as a conference paper at ICLR 2020
In practice, we may be interested in extrapolation quality with different subsets of configurations.
Appendix D provides detailed extrapolation results on multiple subsets of configurations, for both
vision and language datasets. Generally, the extrapolation performs well once not ill-posed, which
may be caused by lack of signal in the region of the initial “random-guess” level, or in degenerate
cases like having fewer measurements than the number of free parameters in .
8 D ISCUSSION AND CONCLUSION
In this work, through insights gained by the joint examination of the dependencies of generalization
error on both model and data size, we arrive at criteria for functions consistent with the form of the
generalization error under a given scaling policy. We consider one such function and find it to be
in very good agreement with the actual behavior of the error landscape. Indeed, the agreement is
strong enough that extrapolation from small to large scale becomes feasible: the function predicts
the behavior of the generalization error in practice for the practical case of scaling models and data.
We discuss several example implications of knowing such a functional form.
Small-scale network development: At the core of small fidelity searches is the notion of perfor-
mance rank comparison between models. However, small scale and large scale ranks are not assured
to be consistent. If indeed a functional form such as empirically found in this work holds very gen-
erally, then in contrast, one can safely assess scaling rank between models at small scale, with the
assurance that it remains consistent. This suggests that one would be well served by searching over
scaling policies; a pertinent example of such a success is Tan & Le (2019). The functional form also
explains the limitation of small-scale search: once reaching the random-guess error level, where the
sensitivity to scaling vanishes, the informativeness of ranking diminishes. Finally, the functional
form allows direct usage of differentiable methods for NAS.
Principled design: Knowing the error landscape function facilitates reasoning about the choice
of(m;n)attaining a specified error level. In other words, for any given error level, one can solve
Eq. 5 form;n based on small-scale measurements. Thus, one can quantitatively answer design
questions regarding the expected (in particular, large-scale) relations between m,n, and. In fact,
Eq. 5 provides direct ansewrs to questions such as ”how much data would one require to reach a
prescribed performance level?” or ”how big a model would be needed?” Imposing constraints is also
straightforward. For instance, consider the following question: ”What is the maximal model size
possibly needed (useful), when the data is limited in size, n=nlim(for a given model architecture
and scaling policy)?” For a fixed dataset size, model scaling eventually contributes marginally to
error reduction and becomes negligible when bm n
lim(Eq. 5). Define the relative contribution
thresholdTas satisfying T=n
lim
bm
max. (For example, T= 10 .) Then the maximal useful model size
meeting threshold Tis:
mmax(T) = (bT)1=n=
lim
Similarly, The maximal useful amount of data for a limited sized model mlimis:
nmax(T) = (1=bT)1=m=
lim
Moreover, Eq. 5 allows for complex design trade-offs. Generally, given some design-tradeoff cost
functionC(m;n; ), one can minimize such cost s.t. Eq. 5. For example, consider the case of opti-
mizing for efficient computation which has both practical and environmental importance (Schwartz
et al., 2019). Since the number of FLOPs during training is /mn(for constant epoch budget),
the trade-off cost function may be formulated as C(FLOPS;) =C(mn; ). Further, since constant
error contour is very well approximated by c=1
n+b
m(Eq. 5), dataset and models may be scaled
with optimal resource efficiency with no effect on performance by solving for:
argmin
m;nmn s.t.c=1
n+b
m
The solution gives us the optimal-computational-efficiency ratio of model to data size:b
n
m= 1.
9
Published as a conference paper at ICLR 2020
Limitations: We have made a few simplifying assumptions in our choice of approximating func-
tion, in particular in how to model the transition from the initial random-guess error level and the
union of the random-guess level of the two scenarios (small model with large data and large model
with small data). We leave a more detailed examination of the behavior of the transitions from
random-guess error levels and refinements of the functional form to future work.
Critically, the restrictive nature of our scaling framework (all parameters and hyperparameters de-
scribed by a policy) is both a blessing and a challenge. The blessing comes in fulfilling the goal
of finding simultaneously both the form of the generalization error and the full specification of the
model and hyperparameters that attain it across scales. The challenge is that we have demonstrated
in this work only the case of constant hyper-parameters. We conjecture that the relation between
model configuration and hyperparameter choice (Zela et al., 2018) may entail the potential to for-
mulate hyperparameter-scaling policies similar in nature to the model-scaling polices, and that these
too fall under the scope of the form we find in this work. This too will be the subject of future work.
We hope that this work will bring the actual functional form of the generalization error in this
practical case of scaling to the fore, both in practice and as an empirical leg to stand on in the quest
for its theoretical origins.
ACKNOWLEDGMENTS
We thank Alexander Rakhlin, Alexander Madry, Kai Xiao, Lu Mi, Viaks Garg, Dan Alistrah, and
Tommi Jaakkola for discussions and their help. We also thank the anonymous reviewers for their
valuable feedback. J.R. was partly supported by the Eli and Dorothy Berman Fellowship as well
as grants NSF IIS-1447786, NSF CCF-1563880 and China-Singapore Suzhou Industrial Park. A.R.
was partially supported by the Air Force Office of Scientific Research USA (FA9550-18-1-0054)
though a grant to John K. Tsotsos. Y .B. was partly supported by the Harvard Mind ,Brain, and
Behavior Initiative.
REFERENCES
Zeyuan Allen-Zhu, Yuanzhi Li, and Yingyu Liang. Learning and generalization in overparameter-
ized neural networks, going beyond two layers. arXiv preprint arXiv:1811.04918 , 2018a.
Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. On the convergence rate of training recurrent neural
networks. arXiv preprint arXiv:1810.12065 , 2018b.
Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for
deep nets via a compression approach. arXiv preprint arXiv:1802.05296 , 2018.
Michele Banko and Eric Brill. Mitigating the paucity-of-data problem: Exploring the effect of
training corpus size on classifier performance for natural language processing. In Proceedings of
the first international conference on Human language technology research , pp. 1–5. Association
for Computational Linguistics, 2001.
Hakan Bilen, Basura Fernando, Efstratios Gavves, Andrea Vedaldi, and Stephen Gould. Dynamic
image networks for action recognition. In Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition , pp. 3034–3042, 2016.
James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. Quasi-recurrent neural net-
works. In International Conference on Learning Representations , 2017.
Junghwan Cho, Kyewook Lee, Ellie Shin, Garry Choy, and Synho Do. How much data is needed to
train a medical image deep learning system to achieve necessary high accuracy? arXiv preprint
arXiv:1511.06348 , 2015.
Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. De-
scribing textures in the wild. In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition , pp. 3606–3613, 2014.
10
Published as a conference paper at ICLR 2020
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov.
Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the
57th Annual Meeting of the Association for Computational Linguistics , pp. 2978–2988, Florence,
Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1285. URL
https://www.aclweb.org/anthology/P19-1285 .
Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey.
Journal of Machine Learning Research , 20(55):1–21, 2019.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog-
nition. pp. 770–778, 2016.
Joel Hestness, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kianinejad,
Md Patwary, Mostofa Ali, Yang Yang, and Yanqi Zhou. Deep learning scaling is predictable,
empirically. arXiv preprint arXiv:1712.00409 , 2017.
Sepp Hochreiter and J ¨urgen Schmidhuber. Long short-term memory. Neural computation , 9(8):
1735–1780, 1997.
Elad Hoffer, Itay Hubara, and Daniel Soudry. Fix your classifier: the marginal value of training
the last weight layer. In International Conference on Learning Representations , 2018. URL
https://openreview.net/forum?id=S1Dh8Tg0- .
Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand,
Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for
mobile vision applications. arXiv preprint arXiv:1704.04861 , 2017.
Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected
convolutional networks. 1(2):3, 2017.
Roxana Istrate, Florian Scheidegger, Giovanni Mariani, Dimitrios Nikolopoulos, Costas Bekas, and
A Cristiano I Malossi. Tapas: Train-less accuracy predictor for architecture search. In Proceed-
ings of the AAAI Conference on Artificial Intelligence , volume 33, pp. 3927–3934, 2019.
Mark Johnson, Peter Anderson, Mark Dras, and Mark Steedman. Predicting accuracy on large
datasets from smaller pilot data. In Proceedings of the 56th Annual Meeting of the Association for
Computational Linguistics (Volume 2: Short Papers) , pp. 450–455, Melbourne, Australia, July
2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-2072. URL https:
//www.aclweb.org/anthology/P18-2072 .
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. 2015.
Aaron Klein, Stefan Falkner, Simon Bartels, Philipp Hennig, and Frank Hutter. Fast bayesian op-
timization of machine learning hyperparameters on large datasets. In Artificial Intelligence and
Statistics , pp. 528–536, 2017.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images.
Technical report, Citeseer, 2009.
Tengyuan Liang, Alexander Rakhlin, and Xiyu Zhai. On the risk of minimum-norm interpolants
and restricted lower isometry of kernels. arXiv preprint arXiv:1908.10292 , 2019.
Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. arXiv
preprint arXiv:1806.09055 , 2018.
Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. Fine-grained
visual classification of aircraft. arXiv preprint arXiv:1306.5151 , 2013.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture
models. arXiv preprint arXiv:1609.07843 , 2016.
Stephen Merity, Nitish Shirish Keskar, and Richard Socher. Regularizing and optimizing LSTM
language models. In International Conference on Learning Representations , 2018. URL https:
//openreview.net/forum?id=SyyGPP0TZ .
11
Published as a conference paper at ICLR 2020
Antonio Valerio Miceli Barone, Barry Haddow, Ulrich Germann, and Rico Sennrich. Regularization
techniques for fine-tuning in neural machine translation. In Proceedings of the 2017 Conference
on Empirical Methods in Natural Language Processing , pp. 1489–1494, Copenhagen, Denmark,
September 2017. Association for Computational Linguistics. doi: 10.18653/v1/D17-1156. URL
https://www.aclweb.org/anthology/D17-1156 .
Tom´aˇs Mikolov, Martin Karafi ´at, Luk ´aˇs Burget, Jan ˇCernock `y, and Sanjeev Khudanpur. Recurrent
neural network based language model. In Eleventh Annual Conference of the International Speech
Communication Association , 2010.
Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nati Srebro. Exploring general-
ization in deep learning. In Advances in Neural Information Processing Systems , pp. 5947–5956,
2017a.
Behnam Neyshabur, Srinadh Bhojanapalli, and Nathan Srebro. A pac-bayesian approach to
spectrally-normalized margin bounds for neural networks. arXiv preprint arXiv:1707.09564 ,
2017b.
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito,
Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in
PyTorch. In NIPS Autodiff Workshop , 2017.
E Real, A Aggarwal, Y Huang, and QV Le. Aging evolution for image classifier architecture search.
InAAAI Conference on Artificial Intelligence , 2019.
Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le. Regularized evolution for image
classifier architecture search. arXiv preprint arXiv:1802.01548 , 2018.
Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. Learning multiple visual domains with
residual adapters. In Advances in Neural Information Processing Systems , pp. 506–516, 2017.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng
Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual
recognition challenge. International journal of computer vision , 115(3):211–252, 2015.
Roy Schwartz, Jesse Dodge, Noah A Smith, and Oren Etzioni. Green ai. arXiv preprint
arXiv:1907.10597 , 2019.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image
recognition. arXiv preprint arXiv:1409.1556 , 2014.
Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. Ucf101: A dataset of 101 human actions
classes from videos in the wild. arXiv preprint arXiv:1212.0402 , 2012.
Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable ef-
fectiveness of data in deep learning era. In Proceedings of the IEEE international conference on
computer vision , pp. 843–852, 2017.
Alon Talmor and Jonathan Berant. MultiQA: An empirical investigation of generalization and
transfer in reading comprehension. In Proceedings of the 57th Annual Meeting of the Associ-
ation for Computational Linguistics , pp. 4911–4921, Florence, Italy, July 2019. Association for
Computational Linguistics. doi: 10.18653/v1/P19-1485. URL https://www.aclweb.org/
anthology/P19-1485 .
Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural net-
works. In International Conference on Machine Learning , pp. 6105–6114, 2019.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
Ł ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V . Luxburg,
S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neu-
ral Information Processing Systems 30 , pp. 5998–6008. Curran Associates, Inc., 2017. URL
http://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf .
12
Published as a conference paper at ICLR 2020
Zifeng Wu, Chunhua Shen, and Anton van den Hengel. Wider or deeper: Revisiting the resnet
model for visual recognition. arXiv preprint arXiv:1611.10080 , 2016.
Dmitry Yarotsky. Optimal approximation of continuous functions by very deep relu networks. arXiv
preprint arXiv:1802.03620 , 2018.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint
arXiv:1605.07146 , 2016.
Arber Zela, Aaron Klein, Stefan Falkner, and Frank Hutter. Towards automated deep learning:
Efficient joint neural architecture and hyperparameter search. arXiv preprint arXiv:1807.06906 ,
2018.
Xiangxin Zhu, Carl V ondrick, Deva Ramanan, and Charless C Fowlkes. Do we need more training
data or better models for object detection?. In BMVC , volume 3, pp. 5. Citeseer, 2012.
Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable architectures
for scalable image recognition. In Proceedings of the IEEE conference on computer vision and
pattern recognition , pp. 8697–8710, 2018.
13
Published as a conference paper at ICLR 2020
A D ATASETS AND MODELS
A.1 I MAGE CLASSIFICATION
A.1.1 D ATASETS
We evaluated our predictions on several popular image classification datasets: ImageNet (Rus-
sakovsky et al., 2015): a large-scale recognition benchmark consisting of natural images of 1000 ob-
ject categories with 1.28M training images spread roughly uniformly over the categories. It has 50K
validation and 100K testing images. It has been the most popular large-scale benchmark for image
classification methods for the better part of the last decade. CIFAR10/100 (Krizhevsky et al., 2009):
60K natural RGB images of 10 classes (100 for CIFAR100) with a train/test split of 50K/10K. For
each of the following datasets, we use the version collated, resized, and split into train/validation/test
sets by Rebuffi et al. (2017). DTD (Cimpoi et al., 2014): a texture database of 47 categories and
5640 images. Aircraft (Maji et al., 2013): 10K images of 100 different aircraft classes. UCF101
(Soomro et al., 2012): originally a video action recognition dataset, converted using the method of
Bilen et al. (2016) into a single image per video. It contains 13,320 images of 101 action classes.
A.1.2 M ODELS
We experiment with four models for image classification. We use different variants of the popular
ResNet architecture (He et al., 2016) in the main experiments. For ImageNet we use ResNet-50
and build on the code from the PyTorch framework (Paszke et al., 2017) to vary the model width.
For all other datasets we use WRN-44-16 (Wu et al., 2016) of varying widths, modified from the
implementation of Hoffer et al. (2018).
Scaling the models’ width is performed by multiplying the number of channels in each convolutional
layer and the width of the hidden linear layers by a constant factor and rounding to the nearest integer.
The ranges of width scales (and data scales) for the main experiments are detailed in Table 1b.
In section 6.2, we perform width scaling for two additional architectures, VGG16bn (Simonyan &
Zisserman, 2014) and DenseNet (L=40, k=32) (Huang et al., 2017). The VGG and DenseNet models
were also modified for width scaling from the implementation of Hoffer et al. (2018). The model
scales in this case are 4 k,0k5, for both VGG and DenseNEt.
Depth-scaling, in the CIFAR10 case (section 6.1), is performed by appending extra layers within
each block.
A.1.3 T RAINING
In the main experiments, training is done via SGD with a momentum of 0.9, weight decay of 1e-4
and initial learning rate of 0.1. For ImageNet we train for 90 epochs, decreasing the learning rate
by a multiplicative factor of 0.1 after and 30 and after 60 epochs. We use a batch size of 16. For all
other vision datasets we use a batch-size of 128. We begin training with a learning rate of 0.1, run
for 200 epochs, and reduce by a multiplicative factor of 0.1 after 80, 120, and 160 epochs.
For the VGG and DenseNet experiments on CIFAR100 in section 6.2, we train with both SGD and
Adam optimizers. We train VGG for 170 epochs and Densenet for 300 epochs. Adam hyperparam-
eters are default, with an initial learning rate of 1e-3. When training with SGD, we retain initial
learning rate, batch size, momentum, and weight-decay, as in the main experiment (at 0.1, 128, 0.9,
and 1e-4 respectively) and follow standard stepped learning rate schedules: For VGG, learning rate
multiplicative factor of 0.1 after 80, 120, and 160 epochs; For DenseNet, learning rate multiplicative
factor of 0.1 after 150 and 225 epochs.
A.2 L ANGUAGE MODELING
A.2.1 D ATASETS
We evaluate on several datasets commonly used for (word-level) language modeling: Penn Tree-
bank (Mikolov et al., 2010), WikiText-2 (Bradbury et al., 2017), and WikiText-103 (Merity et al.,
2016). The PTB is a relatively small language modeling dataset of news texts, with a vocabu-
14
Published as a conference paper at ICLR 2020
lary of 10K unique words and about 900K/70K/80K training/validation/test words. WikiText-2 is
drawn from Wikipedia articles and it is both larger and richer, with a vocabulary of 33K words and
2M/210K/240K training/validation/test words. WikiText-103 is also based on Wikipedia, but larger
still, with a vocabulary of 270K words and 100M training words (and the same validation and test
sets as WikiText-2).
A.2.2 M ODELS
We experiment with two standard models for language modeling: Transformer-XL (Dai et al., 2019)
and AWD-LSTM (Merity et al., 2018). Transformer-XL is a recent language modeling architecture
that is based on transformer self-attention (Vaswani et al., 2017), but modified to better learn de-
pendencies beyond a fixed length by adding a segment-level recurrence mechanism. It has achieved
state-of-the-art results on multiple benchmarks. We use the official PyTorch implementation4with
their base configuration: 16 layers, embedding size of 410, inner dimension of 2100 in the fully-
connected layers, and 10 attention heads. Training is done with Adam. See the implementation for
other details. For scaling experiments, we decimate the inner dimension. We use Transformer-XL
for WikiText-103.
AWD-LSTM is a long short-term memory (Hochreiter & Schmidhuber, 1997) language model with
adaptive weight averaging. We use the official implementation5with the recommended configura-
tion: 3 layers, embedding size of 400, and hidden state size of 1150. Training is done with SGD.
We use AWD-LSTM for PTB and WikiText-2 and follow the recommended settings for these two
datasets. For scaling experiments, we decimate the hidden state size.
4https://github.com/kimiyoung/transformer-xl
5https://github.com/salesforce/awd-lstm-lm
15
Published as a conference paper at ICLR 2020
B E RROR ESTIMATION EXPERIMENT
B.1 E XPERIMENTAL DETAILS
In the experiment described in section 6, we fit a least squares regression model to find the best
parameters minimizing the divergence (m;n)- evaluated at configurations m;n as in Table 1:
= arg min
X
n;mj(m;n;)j2
We quantify the quality of the fit by the mean and standard deviation of the fitted divergence
by performing standard 10-fold cross validation over all points (m;n)with confidence intervals
reported as1std over the folds.
B.2 F OUND THETA VALUES
Table 2: Optimal values of as found by the least squres regression fitting the functional form.
(a) Image classification (fitting top 1 error).
b c 1
ImageNet 0:75 0:61 0:76 3 :63 18 :50
CIFAR10 0:66 0:53 5:8710 027:1410 1419:77
CIFAR100 0:70 0:51 0:15 0 :71 6 :93
DTD 0:40 1:16 4:3010 051:2710 090:85
Aircraft 1:10 0:83 3:4710 035:1610 101:13
UFC101 0:93 0:54 4:6810 021:1610 092:98
(b) Language modeling (fitting cross entropy loss).
b c 1 0
PTB 0:81 0:34 0:15 5:00 6:27 6:10
WikiText-2 1:01 0:22 0:99 8:23 10:38 6:21
WikiText-103 0:74 0:56 0:33 9:04 16:34 6:60
16
Published as a conference paper at ICLR 2020
C A DDITIONAL ERROR LANDSCAPE MEASUREMENTS AND ESTIMATIONS
In this appendix, we provide error landscape measurements and estimations for all datasets, corre-
sponding to the experiment in section 6. The results are shown in 3D graphs similar to figure 1. In
each such graph, the z-axis is the logarithm of the generalization error as a function of two indepen-
dent variables: the model size mand the data size n.
The 3D graph is deliberately portrayed in log-log-log scale, as we cover a very large range of data
scales and model scales and a correspondingly wide range of errors. This view is a useful one when
one wishes to evaluate both large dynamic ranges (simultaneously both very large and very small
values) and is especially vivid in portraying power-law like dependencies; a power-law naturally
forms a straight line in a log-log view.
In each figure, subfigure (a) shows the measured error landscape is in log-log-log scale, where each
point (blue dot) is the error resulting from training with a model/data configuration m;n . Subfigure
(b) shows the best-fit estimated error landscape. The surface is a linear interpolation between the
points, which is then projected on the model-error (m;), data-error (n;), and model-data (m;n)
planes. The contour plots on each one of these planes are the projections of the error landscape
surface, and are useful in considering the behavior of the surface when holding one dimension
constant.
We call to attention several interesting observations on the datasets explored:
As quantified rigorously in section 6, the fits perform well across error ranges. In these
surfaces, one also gets qualitative sense of the fit adequacy across the wide ranges of the
dataset and model scales directly. While perhaps slightly difficult to asses the surface di-
rectly, a helpful view is to consider the similarity between the projections of the actual and
projected surfaces.
With increasing model size, indeed typically the error does remain saturated. However, in
one of our tested datasets (figure 12) there was a renewed slight increase. We verify that
this is indeed over-fitting, in the sense that there is no corresponding increase in the training
error. We note that the functional form we find can actually be used to veer clear of the
m;n regions where such over-fitting may occur.
The simplifying approach taken by considering the random guess levels (and associated
transitions) for small models or small data as identical, seems to work fairly well with
some deviation apparent by examining figure 15. Indeed the simplification can hold well
for balanced datasets, but need not for imbalanced ones such as in the task of language
modeling. Thus, a relaxation of this simplification is expected to be important conceptually
and practically.
17
Published as a conference paper at ICLR 2020
log2(DS fraction)6
5
4
3
2
1
0
log2(#param)1618202224
log10(err)
0.5
0.4
0.3
0.2
0.1
imagenet: actual test err
(a) Actual error landscape.
log2(DS fraction)6
5
4
3
2
1
0
log2(#param)1618202224
log10(err)
0.5
0.4
0.3
0.2
0.1
imagenet: estimated test err (b) Estimated error landscape.
Figure 7: ImageNet error landscape.
log2(DS fraction)5
4
3
2
1
0
log2(#param)1214161820222426
log10(err)
1.2
1.0
0.8
0.6
0.4
cifar10: actual test err
(a) Actual error landscape.
log2(DS fraction)5
4
3
2
1
0
log2(#param)1214161820222426
log10(err)
1.2
1.0
0.8
0.6
0.4
cifar10: estimated test err (b) Estimated error landscape.
Figure 8: CIFAR10 error landscape.
log2(DS fraction)5
4
3
2
1
0
log2(#param)121416182022
log10(err)
0.5
0.4
0.3
0.2
0.1
decathlon_cifar100: actual test err
(a) Actual error landscape.
log2(DS fraction)5
4
3
2
1
0
log2(#param)121416182022
log10(err)
0.5
0.4
0.3
0.2
0.1
decathlon_cifar100: estimated test err (b) Estimated error landscape.
Figure 9: CIFAR100 error landscape.
18
Published as a conference paper at ICLR 2020
log2(DS fraction)5
4
3
2
1
0
log2(#param)121416182022
log10(err)
0.12
0.10
0.08
0.06
0.04
0.02
decathlon_dtd: actual test err
(a) Actual error landscape.
log2(DS fraction)5
4
3
2
1
0
log2(#param)121416182022
log10(err)
0.12
0.10
0.08
0.06
0.04
0.02
decathlon_dtd: estimated test err (b) Estimated error landscape.
Figure 10: DTD error landscape.
log2(DS fraction)5
4
3
2
1
0
log2(#param)121416182022
log10(err)
0.175
0.150
0.125
0.100
0.075
0.050
0.025
decathlon_aircraft: actual test err
(a) Actual error landscape.
log2(DS fraction)5
4
3
2
1
0
log2(#param)121416182022
log10(err)
0.175
0.150
0.125
0.100
0.075
0.050
0.025
decathlon_aircraft: estimated test err (b) Estimated error landscape.
Figure 11: Aircraft error landscape.
log2(DS fraction)5
4
3
2
1
0
log2(#param)121416182022
log10(err)
0.4
0.3
0.2
0.1
decathlon_ucf101: actual test err
(a) Actual error landscape.
log2(DS fraction)5
4
3
2
1
0
log2(#param)121416182022
log10(err)
0.4
0.3
0.2
0.1
decathlon_ucf101: estimated test err (b) Estimated error landscape.
Figure 12: UFC101 error landscape.
19
Published as a conference paper at ICLR 2020
log2(DS fraction)5
4
3
2
1
0
log2(model fraction)12
10
8
6
4
2
0
log10(err)
0.640.660.680.700.720.740.76PTB: actual test loss
(a) Actual error landscape.
log2(DS fraction)5
4
3
2
1
0
log2(model fraction)12
10
8
6
4
2
0
log10(err)
0.640.660.680.700.720.740.76PTB: estimated test loss (b) Estimated error landscape.
Figure 13: PTB error landscape.
log2(DS fraction)5
4
3
2
1
0
log2(model fraction)12
10
8
6
4
2
0
log10(err)
0.640.660.680.700.720.740.760.78wiki2: actual test loss
(a) Actual error landscape.
log2(DS fraction)5
4
3
2
1
0
log2(model fraction)12
10
8
6
4
2
0
log10(err)
0.640.660.680.700.720.740.760.78wiki2: estimated test loss (b) Estimated error landscape.
Figure 14: WikiText-2 error landscape.
log2(DS fraction)5
4
3
2
1
0
log2(model fraction)12
10
8
6
4
2
0
log10(err)
0.550.600.650.700.750.80wiki103: actual test loss
(a) Actual error landscape.
log2(DS fraction)5
4
3
2
1
0
log2(model fraction)12
10
8
6
4
2
0
log10(err)
0.550.600.650.700.750.80wiki103: estimated test loss (b) Estimated error landscape.
Figure 15: WikiText-103 error landscape.
20
Published as a conference paper at ICLR 2020
D A DDITIONAL EXTRAPOLATION RESULTS
Here we provide detailed extrapolation results, for all datasets. All figures are structured in a similar
way. Each subplot shows estimated (y-axis) vs. actual error (x-axis) (0 to 1 scale on both axes). Each
subplot is located at the coordinate of the maximal data and model given for the task of performing
the fit to the functional form in equation 5. This is the point at the top-right corner of the green
dots in the illustration in figure 6a. The target is to find the error-landscape values for unseen, larger
scales of both model and data (red points in the same illustration). Going from left to right in each
figure indicates observed measurements of the error from models of an increasing fraction w.r.t the
full size. Going from bottom-to top indicates observed measurements of the error from dataset sizes
of an increasingly large fraction of the full dataset.
In each subplot, every point shows the estimated vs. actual error on a model-data configuration.
Points that were given for fitting the function are colored in green, while unseen points that were not
used are in red. The red points show the estimation error vs. actual error when extrapolating to all
larger models and data sizes. In each subplot, the mean and standard deviation over all divergences
at target points are given in text.
Each experiment fit of the parameters was repeated 100 times, with different random initializations
of. The shaded bands show one standard deviation across these runs.
The quality of the extrapolation is critically dependent on the signal provided in the (green) fitted
points. Two limiting factors are evident by examining the figures below, which both play a role in
the well-posedness of the solution:
The proximity to the initial random guess level. Only upon transitioning from the initial
error plateau, does meaningful signal about the scaling rates become available. Indeed, for
scales prior still in the region or close to the initial error level, one sees poor extrapolation
results; see figures 18, 19, and 21, and the vivid origin of this phenomena by examining
figures 11, 10, and 12.
A second source of ill-posedness is tied to the number of configurations used for the esti-
mation of. Clearly, when this is small, one cannot expect the extrapolation to be stable.
In fact, at least two measurements in each scaling dimension (model/data) are needed, and
no less than the number of parameters in in total. Indeed, for all the plots in this ap-
pendix, the smallest scale of m;n is omitted form the graph such that the lowermost row
and leftmost column span exactly two model and data scales correspondingly. Of course,
there is nothing tying directly the number of points and scale of configurations measured,
and one can decouple these two factors by taking closer spaced samples at small scale.
When both the above factors are not limiting the measurement, one readily sees that for
divergences of no more than a few percent, it is sufficient to measure model/data config-
urations which are far-ranged from the configurations which one wishes to extrapolate to
.
21
Published as a conference paper at ICLR 2020
0.000.250.500.751.00
:-15.1±5.8
:11.3±3.6
:-13.0±0.0
:5.8±0.0
:0.1±0.0
:2.9±0.0
:5.8±0.0
:1.0±0.0
:5.3±0.2
:0.0±0.0
0.000.250.500.751.00
:-12.8±5.5
:11.7±3.7
:-8.5±0.0
:6.5±0.0
:-6.5±0.0
:2.5±0.0
:-4.5±0.2
:2.2±0.1
:5.0±0.1
:1.9±0.0
:3.3±0.0
:0.0±0.0
0.000.250.500.751.00
:-9.2±5.6
:9.9±4.1
:-13.9±0.0
:9.7±0.0
:-4.5±0.0
:4.7±0.0
:-5.3±0.0
:3.5±0.0
:-2.6±0.0
:3.3±0.0
:-11.6±0.6
:3.1±0.2
0.000.250.500.751.00
:-16.4±3.4
:13.9±2.4
:11.3±0.0
:10.3±0.0
:8.6±0.1
:7.3±0.0
:0.9±0.0
:3.3±0.0
:-0.2±0.0
:3.0±0.0
:4.3±0.1
:2.3±0.1
0.000.250.500.751.00
:-13.2±3.5
:12.5±3.0
:-12.6±0.0
:9.3±0.0
:-12.7±0.0
:14.6±0.0
:-18.0±0.0
:12.2±0.0
:-17.1±0.1
:11.7±0.1
:2.4±0.0
:3.3±0.0
0.0 0.5 1.00.000.250.500.751.00
:-28.4±0.0
:22.5±0.0
0.0 0.5 1.0
:13.9±8.6
:16.0±5.6
0.0 0.5 1.0
:26.0±11.7
:22.5±8.0
0.0 0.5 1.0
:26.9±14.9
:21.6±9.9
0.0 0.5 1.0
:1.2±9.2
:7.7±5.3
0.0 0.5 1.0
:4.4±15.8
:10.9±8.2
imagenet
Dataset Fraction (log2(n/N)
Model Fraction (log2(m/M)
Figure 16: ImageNet extrapolation results.
22
Published as a conference paper at ICLR 2020
0.00.51.0
:4.6±11.4
:7.4±5.6
:2.8±0.3
:0.6±0.1
:-1.1±0.3
:1.2±0.2
:4.1±0.6
:0.0±0.0
0.00.51.0
:1.6±8.6
:6.7±4.4
:4.8±0.3
:3.4±0.2
:3.3±0.3
:1.5±0.1
:5.7±0.0
:1.7±0.0
:3.9±0.1
:0.0±0.0
0.00.51.0
:-0.5±7.2
:6.5±3.4
:10.6±0.2
:9.2±0.1
:4.3±0.1
:3.3±0.0
:7.0±0.1
:3.1±0.0
:5.2±0.1
:1.3±0.0
0.00.51.0
:-7.7±7.1
:6.4±3.4
:23.3±0.1
:21.0±0.1
:6.2±0.1
:5.3±0.1
:5.9±0.0
:3.7±0.0
:2.9±0.0
:2.2±0.0
0.00.51.0
:-15.3±3.9
:7.4±2.6
:18.7±0.0
:20.0±0.0
:-0.6±0.1
:3.8±0.0
:-9.1±0.0
:6.1±0.0
:-19.6±0.0
:9.7±0.0
0.0 0.5 1.00.00.51.0
:-21.6±0.9
:14.1±1.0
0.0 0.5 1.0
:4.7±8.8
:14.8±2.6
0.0 0.5 1.0
:-5.4±8.0
:9.5±3.0
0.0 0.5 1.0
:-6.4±9.0
:8.9±4.1
0.0 0.5 1.0
:-5.7±11.5
:8.9±5.1
decathlon_cifar100
Dataset Fraction (log2(n/N)
Model Fraction (log2(m/M)
Figure 17: CIFAR100 Extrapolation Results
23
Published as a conference paper at ICLR 2020
0.00.51.0
:13.2±0.1
:17.5±0.1
:14.8±0.0
:16.8±0.0
:18.5±0.0
:14.8±0.0
:15.8±0.0
:0.0±0.0
0.00.51.0
:11.8±0.2
:17.5±0.2
:12.9±0.0
:16.9±0.0
:14.8±0.0
:15.6±0.0
:9.5±0.0
:8.1±0.0
:-5.0±0.0
:0.0±0.0
0.00.51.0
:11.3±0.3
:17.3±0.3
:12.7±0.0
:16.9±0.0
:13.8±0.0
:15.5±0.0
:5.0±0.0
:8.1±0.0
:-6.1±0.0
:2.0±0.0
0.00.51.0
:9.2±0.6
:15.4±0.7
:12.9±0.1
:17.7±0.1
:12.7±0.0
:14.9±0.0
:7.7±0.0
:8.5±0.0
:-3.9±0.0
:4.0±0.0
0.00.51.0
:9.9±0.2
:15.7±0.2
:9.4±0.0
:15.3±0.0
:13.1±0.0
:14.5±0.0
:11.6±0.0
:9.7±0.0
:-0.1±0.0
:4.1±0.0
0.0 0.5 1.00.00.51.0
:-7.4±1.0
:13.0±1.5
0.0 0.5 1.0
:12.4±0.1
:16.3±0.0
0.0 0.5 1.0
:11.6±0.0
:13.7±0.0
0.0 0.5 1.0
:-0.2±0.0
:4.7±0.0
0.0 0.5 1.0
:0.9±0.0
:4.2±0.0
decathlon_aircraft
Dataset Fraction (log2(n/N)
Model Fraction (log2(m/M)
Figure 18: Aircraft extrapolation results.
24
Published as a conference paper at ICLR 2020
0.00.51.0
:5.6±1.1
:5.7±0.9
:2.7±0.0
:3.2±0.0
:3.3±0.0
:2.4±0.0
:2.1±0.0
:0.0±0.0
0.00.51.0
:7.7±0.4
:6.9±0.3
:3.3±0.0
:3.4±0.0
:2.2±0.0
:2.1±0.0
:-0.3±0.0
:1.3±0.0
:0.8±0.0
:0.0±0.0
0.00.51.0
:7.1±0.4
:7.1±0.3
:4.7±0.0
:4.3±0.0
:2.4±0.0
:2.1±0.0
:-1.6±0.0
:1.0±0.0
:-2.1±0.0
:1.5±0.0
0.00.51.0
:6.0±0.1
:8.9±0.1
:1.7±0.0
:6.8±0.0
:1.8±0.0
:2.8±0.0
:-1.1±0.0
:1.6±0.0
:-3.9±0.0
:1.2±0.0
0.00.51.0
:8.7±0.0
:9.2±0.0
:5.9±0.0
:6.9±0.0
:1.0±0.0
:1.7±0.0
:-5.1±0.0
:2.5±0.0
:-4.8±0.0
:2.1±0.0
0.0 0.5 1.00.00.51.0
:11.7±0.0
:9.3±0.0
0.0 0.5 1.0
:12.0±0.0
:8.0±0.0
0.0 0.5 1.0
:3.2±0.0
:1.3±0.0
0.0 0.5 1.0
:-9.1±0.0
:5.4±0.0
0.0 0.5 1.0
:-14.2±0.0
:4.3±0.0
decathlon_dtd
Dataset Fraction (log2(n/N)
Model Fraction (log2(m/M)
Figure 19: DTD Results
25
Published as a conference paper at ICLR 2020
0.00.51.0
:52.3±16.2
:36.5±13.7
:19.1±0.1
:9.4±0.0
:15.8±0.1
:3.3±0.0
:11.2±0.2
:0.0±0.0
0.00.51.0
:44.7±12.4
:36.4±11.9
:14.8±0.2
:10.6±0.1
:11.8±0.1
:6.3±0.0
:10.1±0.1
:2.0±0.0
:6.7±0.3
:0.0±0.0
0.00.51.0
:32.0±9.2
:28.4±9.2
:15.3±0.0
:10.5±0.0
:10.8±0.0
:7.0±0.0
:9.8±0.0
:3.6±0.0
:5.3±0.2
:1.3±0.0
0.00.51.0
:24.9±9.4
:21.5±9.4
:13.0±0.1
:8.9±0.0
:10.7±0.2
:6.3±0.0
:8.9±0.1
:3.8±0.0
:7.6±0.1
:1.7±0.0
0.00.51.0
:14.5±7.0
:15.5±7.3
:33.6±0.0
:28.0±0.0
:11.5±0.1
:6.8±0.0
:10.4±0.1
:4.2±0.0
:9.7±0.1
:2.0±0.0
0.00.51.0
:2.9±6.2
:8.0±6.4
:48.9±0.0
:46.8±0.0
:16.1±0.2
:12.6±0.2
:4.8±0.1
:4.2±0.0
:1.6±0.2
:1.6±0.0
0.00 0.25 0.50 0.75 1.000.00.51.0
:-11.3±1.7
:6.9±1.7
0.00 0.25 0.50 0.75 1.00
:9.0±3.3
:25.6±0.8
0.00 0.25 0.50 0.75 1.00
:1.5±4.1
:14.7±1.0
0.00 0.25 0.50 0.75 1.00
:-3.9±0.3
:6.5±0.1
0.00 0.25 0.50 0.75 1.00
:6.4±0.4
:3.2±0.1
cifar10
Dataset Fraction (log2(n/N)
Model Fraction (log2(m/M)
Figure 20: CIFAR10 extrapolation results.
26
Published as a conference paper at ICLR 2020
0.00.51.0
:42.3±6.1
:50.2±7.0
:61.4±0.3
:54.7±0.3
:15.7±0.0
:9.6±0.0
:7.1±0.0
:0.0±0.0
0.00.51.0
:36.1±6.4
:47.1±7.9
:55.4±0.2
:57.2±0.2
:5.3±0.0
:8.8±0.0
:4.1±0.0
:5.8±0.0
:-1.3±0.0
:0.0±0.0
0.00.51.0
:30.8±6.1
:40.7±7.7
:49.8±0.2
:52.8±0.2
:-4.1±0.0
:4.5±0.0
:8.7±0.3
:9.8±0.2
:4.0±0.1
:2.3±0.0
0.00.51.0
:21.8±6.1
:30.9±7.6
:52.3±0.2
:55.3±0.1
:-13.0±0.0
:2.6±0.0
:11.5±0.0
:12.7±0.0
:8.0±0.0
:2.2±0.0
0.00.51.0
:12.3±6.3
:22.9±7.3
:35.2±0.0
:42.6±0.0
:-6.4±0.0
:5.3±0.0
:-7.0±0.0
:6.0±0.0
:-3.6±0.0
:2.3±0.0
0.0 0.5 1.00.00.51.0
:-14.2±4.0
:8.9±2.0
0.0 0.5 1.0
:41.5±0.6
:49.4±0.5
0.0 0.5 1.0
:-1.4±0.0
:8.3±0.0
0.0 0.5 1.0
:5.3±0.0
:8.0±0.0
0.0 0.5 1.0
:18.4±0.0
:8.4±0.0
decathlon_ucf101
Dataset Fraction (log2(n/N)
Model Fraction (log2(m/M)
Figure 21: UCF101 extrapolation results.
27
Published as a conference paper at ICLR 2020
34567
:3.5±0.9
:2.8±0.7
:0.9±0.0
:0.7±0.0
:0.4±0.0
:0.4±0.0
:1.4±0.0
:0.0±0.0
34567
:2.3±0.9
:2.1±0.7
:-0.3±0.0
:1.1±0.0
:0.1±0.0
:0.9±0.0
:0.7±0.0
:0.6±0.0
:1.8±0.0
:0.0±0.0
34567
:1.6±0.9
:1.7±0.7
:-1.2±0.0
:1.6±0.0
:0.1±0.0
:1.0±0.0
:1.0±0.0
:0.8±0.0
:2.1±0.0
:0.5±0.0
34567
:1.5±0.9
:1.8±0.6
:-1.7±0.0
:1.9±0.0
:0.2±0.0
:1.2±0.0
:1.3±0.0
:1.1±0.0
:2.5±0.0
:1.0±0.0
34567
:-1.1±1.1
:1.7±0.3
:-0.7±0.0
:1.5±0.0
:0.7±0.0
:1.5±0.0
:1.7±0.0
:1.5±0.0
:3.1±0.0
:1.6±0.0
3 4 5 6 734567
:0.2±0.7
:1.6±0.4
3 4 5 6 7
:-5.0±1.2
:2.9±0.7
3 4 5 6 7
:-4.6±1.0
:2.6±0.6
3 4 5 6 7
:-5.8±1.2
:3.7±0.8
3 4 5 6 7
:-7.3±1.3
:4.1±0.7
PTB
Dataset Fraction (log2(n/N)
Model Fraction (log2(m/M)
Figure 22: PTB extrapolation results.
28
Published as a conference paper at ICLR 2020
34567
:-10.1±3.5
:8.6±2.6
:1.3±0.0
:0.8±0.0
:3.2±0.0
:0.3±0.0
:0.2±0.0
:0.0±0.0
34567
:-9.0±2.8
:8.9±2.5
:0.3±0.0
:0.9±0.0
:1.7±0.0
:1.3±0.0
:0.5±0.0
:0.6±0.0
:1.3±0.0
:0.0±0.0
34567
:-4.5±2.1
:5.7±1.6
:-2.5±0.0
:2.5±0.0
:1.2±0.0
:1.1±0.0
:0.6±0.0
:0.9±0.0
:1.5±0.0
:0.7±0.0
34567
:-3.9±2.0
:5.4±1.4
:-3.0±0.0
:3.1±0.0
:1.9±0.0
:1.5±0.0
:1.3±0.0
:1.5±0.0
:2.4±0.0
:1.3±0.0
34567
:-3.2±2.0
:4.9±1.1
:-4.2±0.0
:3.4±0.0
:1.8±0.0
:1.7±0.0
:1.2±0.0
:1.8±0.0
:1.7±0.0
:1.6±0.0
3 4 5 6 734567
:-11.2±2.1
:9.9±2.0
3 4 5 6 7
:-4.2±0.7
:3.9±0.2
3 4 5 6 7
:-4.5±0.8
:2.8±0.6
3 4 5 6 7
:-4.4±0.9
:2.6±0.7
3 4 5 6 7
:-5.3±0.9
:2.7±0.6
wiki2
Dataset Fraction (log2(n/N)
Model Fraction (log2(m/M)
Figure 23: WikiText-2 extrapolation results.
29
Published as a conference paper at ICLR 2020
34567
:-12.4±2.8
:7.2±1.9
:6.7±0.0
:3.9±0.0
:2.4±0.0
:1.6±0.0
:3.6±0.0
:0.0±0.0
34567
:-2.6±2.4
:4.4±1.4
:0.7±0.0
:0.8±0.0
:-0.4±0.0
:0.9±0.0
:1.2±0.0
:1.4±0.0
:3.8±0.0
:0.0±0.0
34567
:2.1±1.6
:3.0±0.6
:0.5±0.0
:1.7±0.0
:-1.0±0.0
:1.5±0.0
:1.9±0.0
:1.9±0.0
:4.9±0.0
:1.1±0.0
34567
:0.7±1.1
:4.2±0.4
:-6.4±0.0
:5.0±0.0
:-0.4±0.0
:2.0±0.0
:2.8±0.0
:2.6±0.0
:5.6±0.0
:2.5±0.0
34567
:4.3±0.7
:5.2±0.1
:0.7±0.0
:2.9±0.0
:2.1±0.0
:3.0±0.0
:4.1±0.0
:3.4±0.0
:5.8±0.0
:3.8±0.0
3 4 5 6 734567
:-45.8±0.0
:28.7±0.0
3 4 5 6 7
:-48.8±0.0
:29.7±0.0
3 4 5 6 7
:-41.0±0.7
:24.5±0.5
3 4 5 6 7
:-21.4±0.4
:11.5±0.3
3 4 5 6 7
:-13.9±1.1
:5.0±0.6
wiki103
Dataset Fraction (log2(n/N)
Model Fraction (log2(m/M)
Figure 24: WikiText-103 extrapolation results.
30 |
b1edc8ca-a884-4147-9f03-29fe7947aaa0 | trentmkelly/LessWrong-43k | LessWrong | The lesswrong slack - an introduction to our regulars
About us
The lesswrong slack has been active for 2-3 years now. It’s a garden in as much as I try to keep it peaceful.
A new member joined the slack recently. He asked: What are the moderation rules here?
I wrote the brief rules, but then I also was feeling particularly chatty so I also introduced several of our regular characters. Below is a log of that conversation.
> There are few rules. I am in charge. Only ever banned three guys. This is a private community. It's rare that we have any problems. And if we do we talk about it.
We know each other quite well. These are all my friends, also practically my family. I can introduce the regulars.
We have about 40 active users from the pool of 577 or so that have signed up to try out if we suit.
We are generally quiet ~3000 messages a week, Spread over several channels. A feature of being quiet is that we all tend to read all the messages, no matter what channel they appear in. That keeps us close together and good friends!
----------------------------------------
Common channels are:
* AI - Artificial intelligence
* Art_and_media - sharing pictures, books, videos, music
* Business_and_startups - talking about business ideas
* Campfire - talking about our past and personal stories
* Cognitive_enhancement - talking about drugs and other relevant enhancers
* Dailychallenge - occasional critical life questions to everyone
* Dieting_and_food - diet, health, cooking hedons
* Enlightenment - talking about enlightenment states and related ideas
* Fit - do you even lift? We do. We all do.
* Goals_of_lesswrong - about lesswrong
* Human_relationships - humans are hard.
* Mental_health - day to day mental health check ins and following each other’s journeys
* Open - anything
* Parenting - yep.
* Personal_finance - approaching financial independence and how to think about things
* Philosophy - yep.
* Political_talk - about politics, not about agendas.
* Programming - prog.
* Projects - people report and |
8c601727-e7ef-4d81-bc67-814ad5795ef8 | trentmkelly/LessWrong-43k | LessWrong | Why Productivity? Why Gratitude?
I recently did a series of online seminars where productivity guru Jason Womack tried to apply his advice for academics.
The productivity advice was good but not especially new after having read a lot of anti-akrasia posts on LW; EverydayUtilitarian recently wrote a great summary of these kind of ideas here. I suppose the fact that the advice wasn't new to me means LW has been doing a good job of bringing in good instrumental rationality advice from elsewhere.
But the most interesting parts of the seminars weren't actually ways to be more productive.
One is a question: why do you want to be more productive?
After asking themselves this, some people might realize that they don't actually need to be more productive. Why get more things done? If you work as a certain kind of corporate drone, becoming more productive might not make you or anyone else much better off. Perhaps you are rearranging the deck chairs on the titanic, becoming better at a job or project when you should be doing something else entirely. If your goal in work is to make you and your family better off, then it might be counterproductive to employ strategies that make you less happy or take you away from your family.
Alternatively, if you realize you have all kinds of really great reasons to be more productive, this should encourage you.
The other big non-exactly-productivity idea I learned about was gratitude.
The most obvious reason to send people thank-you notes is that it will make them happy, and is just the right thing to do. Another good reason you have probably heard of is that it will make you happier, like keeping a gratitude journal.
What I didn't realize before are the tangible benefits of sending thank-you notes. Jason Womack says he tries you send one a day, and has had many people respond by offering to do some project with him. I recently started sending out more thank-you emails to people who have helped improve my work, and have already had someone respond by offering a larg |
f6790de9-6c85-4d02-9662-b4c08b19e8e5 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Generative Exploration and Exploitation
Introduction
------------
Deep reinforcement learning (RL) has achieved great success in many sequential decision-making problems, such as Atari games [[16](#bib.bib1 "Human-level control through deep reinforcement learning")], Go [[25](#bib.bib2 "Mastering the game of go with deep neural networks and tree search"), [26](#bib.bib25 "Mastering the game of go without human knowledge")], and robotic tasks [[13](#bib.bib24 "End-to-end training of deep visuomotor policies"), [7](#bib.bib23 "Benchmarking deep reinforcement learning for continuous control")]. However, a common challenge in many real-world applications is the reward is extremely sparse or only binary. For example, in goal-based tasks, the agent can only receive the reward when it reaches the goal. Nevertheless, the goal is usually hard to reach via random exploration, such as ϵ-greedy and Gaussian noise. Domain-specific knowledge can be used to construct a shaped reward function to guide the policy optimization. However, it often biases the policy in a suboptimal direction, and more importantly domain-specific knowledge is unavailable in many cases.
Some exploration methods have been proposed to address sparse reward. A method family quantifies the novelty of the state and takes it as the intrinsic reward to encourage the agent to explore new states, e.g., count-based exploration [[2](#bib.bib4 "Unifying count-based exploration and intrinsic motivation"), [19](#bib.bib5 "Count-based exploration with neural density models")] and curiosity-driven exploration [[20](#bib.bib7 "Curiosity-driven exploration by self-supervised prediction"), [5](#bib.bib8 "Large-scale study of curiosity-driven learning"), [6](#bib.bib9 "Exploration by random network distillation")].
However, intrinsic reward leads to deviation from the true target and causes the learning process detoured and unstable.
Some methods set additional goals for exploration.
Goal GAN [[8](#bib.bib16 "Automatic goal generation for reinforcement learning agents")] generates different goals at the appropriate level of difficulty for the agent. HER [[1](#bib.bib10 "Hindsight experience replay")] replays each episode with a different goal sampled from the buffer rather than the original one to be achieved.
However, driven by random exploration the agent still rarely obtains a real reward signal.
Changing start state distribution has been considered to accelerate learning.
Appropriate start states can improve the policy training and performance, which has been proven theoretically by [[11](#bib.bib11 "A sparse sampling algorithm for near-optimal planning in large markov decision processes")].
Some works adopt the concept of reversing [[9](#bib.bib28 "Reverse curriculum generation for reinforcement learning"), [10](#bib.bib17 "Recall traces: backtracking models for efficient reinforcement learning")], gradually learning to reach the goal from a set of start states increasingly far from the goal.
Other researches change the start states by sampling from the states visited by expert demonstrations [[17](#bib.bib31 "Overcoming exploration in reinforcement learning with demonstrations"), [21](#bib.bib32 "Backplay: “man muss immer umkehren”")].
However, all these methods require a large amount of prior knowledge and handcrafted designs.
In this paper, we propose a novel method called Generative Exploration and Exploitation (GENE) to overcome sparse reward.
GENE dynamically changes the start states of agent to the generated novel states to encourage the agent to explore the environment or to the generated unskilled states to propel the agent to exploit received reward signals.
We adopt Variational Autoencoder (VAE) [[12](#bib.bib13 "Auto-encoding variational bayes")] to generate desired states and let the agent play from these states rather than the initial state.
As the encoder of VAE compresses high-dimensional states into a low-dimensional encoding space, it is easy to estimate the probability density functions (PDFs) of successful states and failed states experienced by the agent via Kernel Density Estimation (KDE) [[22](#bib.bib14 "Remarks on some nonparametric estimates of a density function")].
We sample from the distribution to feed into the decoder to reconstruct states.
By deliberately giving high probability to the state encodings with little difference between these two densities, GENE is able to adaptively guide the agent to explore novel states and to practice at unskilled states as the learning progresses.
GENE can be combined with any RL algorithm, no matter on-policy or off-policy, single-agent or multi-agent.
Driven by unsupervised VAE and statistical KDE, GENE relies on no prior knowledge and handcrafted designs.
Like other methods that change start states, GENE requires the start state can be set arbitrarily, which however is feasible in many simulators, *e.g.*, MuJoCo [[27](#bib.bib27 "Mujoco: a physics engine for model-based control")], Robotics [[4](#bib.bib33 "Openai gym")], MPE [[15](#bib.bib21 "Multi-agent actor-critic for mixed cooperative-competitive environments")], and MAgent [[28](#bib.bib34 "Magent: a many-agent reinforcement learning platform for artificial collective intelligence")].
Taking advantage of embedding states into a encoding space, GENE is practical and efficient in high-dimensional environments. Moreover, in multi-agent environments with sparse rewards where the search space exponentially increases with the number of agents, GENE can greatly help agents to co-explore the environment.
Empirically, we evaluate GENE in three tasks with binary rewards, including Maze, Maze Ant, and Cooperative Navigation.
We show that GENE significantly outperforms existing methods in all the three tasks.
Ablation studies verify the emergence of progressive exploration and automatic reversing, and demonstrate GENE can adaptively tradeoff between exploration and exploitation according to the varying PDFs of successful states and failed states, which is the key to solve these tasks effectively and efficiently.
Related Work
------------
#### Exploration
Some methods impel the agent to discover novel states by intrinsic motivation which explains the need to explore the environment.
These methods fall into two categories: count-based methods and curiosity-driven methods.
Count-based methods [[2](#bib.bib4 "Unifying count-based exploration and intrinsic motivation"), [19](#bib.bib5 "Count-based exploration with neural density models")] directly use or estimate visit counts as an intrinsic reward to guide the agent towards reducing uncertainty.
Curiosity-driven methods [[20](#bib.bib7 "Curiosity-driven exploration by self-supervised prediction"), [5](#bib.bib8 "Large-scale study of curiosity-driven learning"), [6](#bib.bib9 "Exploration by random network distillation")] use the prediction error in the learned feature space as the intrinsic reward.
When facing unfamiliar states, the prediction error becomes high and the agent will receive high intrinsic reward.
However, the shaped reward is biased and the scale of the intrinsic reward might vary dramatically at different timesteps, which leads to deviation from the true target and causes the learning process detoured and unstable.
Setting additional goals is another idea for exploration.
Curriculum learning [[3](#bib.bib29 "Curriculum learning"), [18](#bib.bib30 "Learning curriculum policies for reinforcement learning")] designs a sequence of sub-tasks for the agent to train on, to improve the learning speed or performance on a target task.
Goal GAN [[8](#bib.bib16 "Automatic goal generation for reinforcement learning agents")] generates different goals at the appropriate level of difficulty for the agent by adding the label of difficulty level into the GAN’s loss function.
However, it is designed for the multiple-goal situation.
If there is only one goal in the environment, Goal GAN cannot focus on it, causing the slow learning.
HER [[1](#bib.bib10 "Hindsight experience replay")] is inspired by that one can learn almost as much from achieving an undesired outcome as from the desired one.
It arbitrarily selects a set of additional goals to replace the original goal.
However, learning additional goals slows down the learning process, and by random exploration the agent rarely obtains a real reward signal.
#### Start State Distribution
Reversing is the main theme of changing start state distribution.
Learning from easy states which are close to the goal, to the harder states, until the initial state is solved.
Reverse Curriculum Generation (RCG) [[9](#bib.bib28 "Reverse curriculum generation for reinforcement learning")] makes the agent gradually learn to reach the goal from a set of start states which are between the bounds on the success probability.
However, it requires providing at least one state from which the agent accomplished the task (*i.e.*, reached the goal).
Moreover, RCG is mainly designed for the case where the target state is uniformly distributed over all feasible states. \citeauthorgoyal2018recall (\citeyeargoyal2018recall) trained a backtracking model to predict the preceding states that terminate at the given high-reward state. Then the generated traces are used to improve the policy via imitation learning.
\citeauthornair2018overcoming (\citeyearnair2018overcoming) reset some training episodes using states from demonstration episodes, and Backplay [[21](#bib.bib32 "Backplay: “man muss immer umkehren”")] samples start states from a window on a demonstration trajectory and slides the window manually. These two methods assume access to expert demonstrations, which are usually unavailable. All the existing methods of changing start states distribution require a large amount of prior knowledge and handcraft designs.
Background
----------
#### Reinforcement Learning
Consider a scenario where an agent lives in an environment.
At every timestep t, the agent gets current state st of the environment, takes an action at to interact with the environment, receives a reward rt, and the environment transitions to the next state.
Deep RL tries to help the agent learn a policy which maximizes the expected return R=∑Tt=0γtrt.
The policy can be deterministic at=μ(st) or stochastic at∼π(⋅|st).
There are two main approaches in RL: policy gradient and Q-learning.
Policy gradient methods directly adjust the parameters θ by maximizing the approximation of J(πθ), *e.g.*, J(θ)=Es∼pπ,a∼πθ[R].
They are almost always on-policy.
TRPO [[23](#bib.bib19 "Trust region policy optimization")] and PPO [[24](#bib.bib18 "Proximal policy optimization algorithms")] are typical policy gradient methods.
They all maximize a surrogate objective function which estimates how much J(πθ) will change as a result of the update.
Q-learning (*e.g.*, DQN) learns a value function Q(s,a) based on Bellman equation and the action is selected by a=argmaxaQ(s,a).
Q-learning methods are usually off-policy.
DDPG [[14](#bib.bib20 "Continuous control with deep reinforcement learning")] learns a Q-function and a deterministic policy, where the Q-function provides the gradient to update the policy.
MADDPG [[15](#bib.bib21 "Multi-agent actor-critic for mixed cooperative-competitive environments")] is an extension of DDPG for multi-agent environments, making it feasible to train multiple agents acting in a globally coordinated way.
#### Variational Autoencoder
VAE consists of an encoder and a decoder.
The encoder takes a high-dimensional datapoint x as the input and outputs parameters to qθ(z|x).
A constraint on the encoder forces the encoding space roughly follow a unit Gaussian distribution.
The decoder learns to reconstruct the datapoint x given the representation z, denoted by pϕ(x|z).
VAE maximizes Ez∼qθ(z|x)[logpϕ(x|z)]−KL(qθ(z|x)||p(z)), where p(z) is the unit Gaussian distribution.
The first term is the reconstruction likelihood, which encourages the decoder to learn to reconstruct x.
The second term is KL-divergence that ensures qθ(z|x) is similar to the prior distribution p(z).
This has the effect of keeping the representations of similar datapoints close together rather than separated in different regions of the encoding space.
#### Kernel Density Estimation
KDE belongs to the class of non-parametric density estimations.
Closely related to histograms, but KDE smooths out the contribution of each observed datapoint xi over a local neighborhood of that datapoint by centering a kernel function. Formally, KDE can be formulated as
| | | |
| --- | --- | --- |
| | ^fh(x)=1nhn∑i=1K(x−xih), | |
where K is the kernel function, and h>0 is the bandwidth that controls the amount of smoothness.
Due to the convenient mathematical properties, the Gaussian kernel is often used.
The choice of bandwidth is a tradeoff between the bias of estimator and its variance.

Figure 1: GENE consists of a VAE and a KDE. Samples from the encoding space of experienced states are passed through rejection sampling and then fed into the decoder to generate start states.
Method
------
When we humans learn to solve a task, we never always start from the very beginning, but stand up from where we fall down and move forward.
More specifically, we deliberately practice more on some *unfamiliar* and *unskilled* states.
The basic idea of GENE follows this intuition.
At the beginning, the agent is not able to reach the goal and hence GENE generates start states with low density in the distribution of states experienced by the agent.
Low density means the generated states are *novel* states (*i.e.*, the agent is unfamiliar with), and starting from these states the agent is able to explore the environment further.
When novel states become common (*i.e.*, higher density than before), new novel states will be generated.
Therefore, GENE propels the agent to explore the environment gradually.
The aim of exploration is to obtain reward signals.
After the agent obtains the reward signal, there exist some experienced states from which the current learned policy is only possible to reach the goal. We call them *unskilled* states (*i.e.*, the agent is unskilled at).
Thus, the agent needs more training on these unskilled states.
As the policy improves and the agent masters the previous unskilled states, new unskilled states are continuously generated by GENE and gradually trace back to the initial state until the task is solved.
In short, GENE guides the agent to explore the environment by starting from the novel states and reinforces the learned policy by starting from the reversing unskilled states.
### State Generation
GENE consists of a VAE and a KDE and works with any RL algorithm, as illustrated in Figure [1](#Sx3.F1 "Figure 1 ‣ Kernel Density Estimation ‣ Background ‣ Generative Exploration and Exploitation").
In a training episode, if the agent does not reach the goal, we store all the states experienced in this episode, called failed states, in the buffer B0, otherwise we store the states, called successful states, in another buffer B1.
It is obvious that the agent starting from the states in B1 will be more likely to reach the goal than starting from the states in B0.
In order to purposely generate novel states and unskilled states, it is necessary to estimate the state distributions of B0 and B1.
However, the density estimation of high-dimensional states is usually intractable.
Fortunately, the encoder of VAE maps the high-dimensional state to the encoding space which is described as k-dimension mean and log-variance (μ,logσ).
We use the mean value μ as the encoding of the input state.
As the encoding space is only k-dimension and roughly follows the unit Gaussian distribution, it is easy to estimate the PDFs of the encodings of the states in B0 and B1, denoted by f0 and f1 respectively.
We use KDE as the PDF estimator.
It produces a more smooth PDF based on individual locations of all sample data without suffering from data binning, which makes it more suitable for the continuous variable.
We uniformly sample from the encoding space to get a set of encodings Z.
Then rejection sampling is applied to select eligible encodings from Z.
The principle is to give a high probability to the encoding with low f=|f0−f1|.
We propose a uniform distribution with the PDF (1+ϵ)∗max(f).
Every time we randomly take out an encoding ~z from Z and sample a random number u from Unif(0,(1+ϵ)∗max(f)).
If f(~z)<u, we accept ~z, otherwise we reject it, as illustrated Figure [1](#Sx3.F1 "Figure 1 ‣ Kernel Density Estimation ‣ Background ‣ Generative Exploration and Exploitation").
Repeat the sampling process until the number of accepted samples ~Z is equal to T, which is a training parameter and will be discussed in the following.
Then, pass ~Z to the decoder to reconstruct the states ~S, from which the agent will start new episodes.

Figure 2: Illustrating the mechanism of GENE.
The mechanism of GENE is illustrated in Figure [2](#Sx4.F2 "Figure 2 ‣ State Generation ‣ Method ‣ Generative Exploration and Exploitation"). At the beginning, since the agent is not able to reach the goal, B1 is empty and hence f1=0.
B0 contains all the states the agent has recently experienced, and f=f0. Thus, f is currently the density of recently experienced states.
Therefore, the generated states with low f are novel states, and starting from these states could help exploration.
When novel states become common, new novel states will be generated for further exploration.
When there are successful states in B1 (*i.e.*, the agent has reached the goal at least once), GENE will generate states according to f=|f0−f1|.
Since the current policy is possible to reach the goal but still requires more training when starting from unskilled states, the unskilled states are with low |f0−f1| and more likely to be generated. .
Also there are some states with low densities in both B0 and B1, which are also likely to be generated and worth exploring.
Generally, VAE tends to generate data with noise, which is an obvious shortcoming in computer vision (*e.g.*, blurry images).
However, in our case, the generated states with noise actually prevent the agent from always repeating the states it has experienced and thus help the exploration, making GENE more sample-efficient.
As the policy updates, the two distributions of experienced states also vary, which brings two benefits.
On the one hand, novel states become common gradually, which propels the agent to explore new novel states continuously.
On the other hand, unskilled states are generated gradually from near the goal to near the initial state without any prior knowledge. Thus, GENE can automatically tradeoff between exploration and exploitation to guide the policy optimization.
We will further investigate this in the experiments.
### Training
Algorithm [1](#alg1 "Algorithm 1 ‣ Training ‣ Method ‣ Generative Exploration and Exploitation") details the training of GENE.
Every episode, the agent starts from the generated states ~S with a probability p, otherwise from the initial state.
The probability p could be seen as how much to change the start state distribution.
If it is too small, the effect is insignificant, and if it is too large, the agent cannot focus on the original task (from initial state).
Ablation studies in the next section will show how the probability p affects the performance.
Every T episodes, we train the VAE from the scratch using the states stored in B0 and B1.
Training from the scratch every T episodes helps avoid overfitting and collapse when the distribution of experienced states changes slowly.
Training VAE is efficient and stable and would not be a bottleneck.
The PDFs of the experienced states are estimated and fitted by KDE via their encodings.
Then, ~Z is obtained by applying rejection sampling to Z, and the states are generated by the decoder for the next T episodes. The RL model is updated at the end of every episode, which is independent of the state generation. As GENE does not directly interact with the RL algorithm, it is very easy to implement and compatible with any RL algorithm, no matter on-policy or off-policy, single-agent or multi-agent.
1: Initialize an RL model (*e.g.*, PPO, TRPO, DDPG)
2: Initialize state buffers B0 and B1
3: for episode = 1,…,M do
4: Store failed states in B0
5: Store successful states in B1
6: if episode%T=0 then
7: Train a VAE using B0+B1
8: Fit f0 of B0 and f1 of B1 using the encodings via KDE
9: Sample from the encoding space to obtain Z
10: Apply rejection sampling to select ~Z from Z according to |f0−f1|
11: Reconstruct states ~S from ~Z for next T episodes
12: Clear the buffers B0 and B1
13: end if
14: Update the RL model
15: The agent starts from generated states ~S in a certain probability p
16: end for
Algorithm 1 Generative Exploration and Exploitation
| | | |
| --- | --- | --- |
|
(a) Maze
|
(b) Maze Ant
|
(c) Cooperative Navigation
|
Figure 3: Illustrations of experimental tasks with binary rewards.
Experiments
-----------
In this section, we focus on the following questions:
* Can the mechanism and effectiveness of GENE be verified and interpreted by experiments?
* Is GENE effective and efficient in high-dimensional environments?
* Is GENE suitable in multi-agent environments?
To answer these questions, we investigate GENE in three tasks with binary rewards indicating whether or not the task is completed.
To verify the exploration effectiveness, we compare GENE with three popular exploration methods, RND [[6](#bib.bib9 "Exploration by random network distillation")] that quantifies state novelty as intrinsic reward), Goal GAN [[8](#bib.bib16 "Automatic goal generation for reinforcement learning agents")] and HER [[1](#bib.bib10 "Hindsight experience replay")] that set additional goals.
As for the reversing effect, we compare it against four methods that change the start state distribution.
* Uniform, sampling start states from the uniform distribution and thus assuming prior knowledge about the environment.
* History, sampling start states from the agent’s historical states.
* Demonstration [[17](#bib.bib31 "Overcoming exploration in reinforcement learning with demonstrations"), [21](#bib.bib32 "Backplay: “man muss immer umkehren”")], assuming access to the successful demonstration and sampling start states from demonstration states.
* RCG [[9](#bib.bib28 "Reverse curriculum generation for reinforcement learning")], setting start states which are between the bounds on the success probability [Rmin,Rmax] by taking random walks from the goal state.
Both GENE and the baselines work on a base RL algorithm.
The parameters of the base RL algorithm are the same, which guarantees the comparison fairness.
To answer the first question, we demonstrate GENE in a challenging Maze (Figure [2(a)](#Sx4.F2.sf1 "(a) ‣ Figure 3 ‣ Training ‣ Method ‣ Generative Exploration and Exploitation")).
For the second question, we study GENE in a robotic locomotion tasks, Maze Ant (Figure [2(b)](#Sx4.F2.sf2 "(b) ‣ Figure 3 ‣ Training ‣ Method ‣ Generative Exploration and Exploitation")).
For the last question, we demonstrate GENE in Cooperative Navigation (Figure [2(c)](#Sx4.F2.sf3 "(c) ‣ Figure 3 ‣ Training ‣ Method ‣ Generative Exploration and Exploitation")), a typical multi-agent cooperative task.
The details of each task and the hyperparameters of the algorithms used in the experiments are available in Appendix.
All the experimental results are presented using mean and standard deviation of five runs.
### Maze
In the 2D maze, the agent learns to navigate from an initial position to the target position within a given number of timesteps as depicted in Figure [2(a)](#Sx4.F2.sf1 "(a) ‣ Figure 3 ‣ Training ‣ Method ‣ Generative Exploration and Exploitation").
Only if the agent reaches the target, it receives a reward +1.
In Maze, we choose PPO [[24](#bib.bib18 "Proximal policy optimization algorithms")] as the base RL algorithm.

Figure 4: Episodes to solve the task with different probabilities p.
Figure [4](#Sx5.F4 "Figure 4 ‣ Maze ‣ Experiments ‣ Generative Exploration and Exploitation") shows the number of episodes to solve the task (*i.e.*, achieving ten consecutive successes starting from the initial state) with different p of changing start state distribution. When p=0, the algorithm degenerates into the base algorithm PPO, which suffers from prohibitive amount of undirected exploration to reach the goal and is incapable of solving this task.
When p is too small, the effect of changing start state distribution is insignificant. While the p is around 1.0, the agent does not get enough training on the initial position, as a result it takes more episodes to solve the original task.
GENE agent learns more quickly than other baselines, which is attributed to that it focuses on the novel states and unskilled states and adaptively tradeoffs between them.
Uniform agent spends many episodes on the useless area, such as the dead end at the bottom of the maze.
Sampling from the demonstration could avoid exploring the useless area, but uniformly sampling from the demonstration cannot make the agent focus on the instructive states. So both methods spend more episodes than GENE.
Sampling from the agent’s history requires no prior knowledge, but it gives higher probability to more familiar states, which however could be easily visited and unworthy of practice.
Therefore, it barely helps.
Although RCG automatically generates start states in reverse, growing outwards from the goal.
It assumes access to the goal state, a priori knowledge, which means RCG ignores the exploration progress.
Moreover, RCG requires to test whether the success probability of candidate states between the bounds on the success probability [Rmin,Rmax].
This incurs much more additional episodes. In addition, Rmin and Rmax are manually tuned hyperparameters, which can greatly affect the overall performance and requires careful tuning.
To verify the exploration effectiveness of GENE, we compare it against RND [[6](#bib.bib9 "Exploration by random network distillation")].
In GENE, the generated novel states encourage the agent to explore.
From Figure [4](#Sx5.F4 "Figure 4 ‣ Maze ‣ Experiments ‣ Generative Exploration and Exploitation"), we can see that GENE takes less episodes than RND when p≥0.4. The shaped reward of RND is biased from the true target, *e.g.*, leading the agent to the dead end, which causes much more episodes.
For further investigation, we make f=f0, *i.e.*, to only generate novel states, termed GENE\_e.
GENE\_e still outperforms RND when p=0.6 and 0.8, which demonstrates just starting from novel states could better help exploration.
The difference between GENE and GENE\_e verifies that replaying unskilled states truly accelerates the learning.

Figure 5: Top row shows the heatmaps of generated states as the training progresses. Bottom row shows the PDFs over the encoding space, where f0 corresponds to the blue, f1 corresponds to the orange, and f corresponds to the green.

Figure 6: Learning curves in Maze Ant.
Figure [6](#Sx5.F6 "Figure 6 ‣ Maze ‣ Experiments ‣ Generative Exploration and Exploitation") gives more details of the learning process and explains the mechanism of GENE.
At the beginning, B1 is empty and f=f0.
By giving high probability to states with low f0, novel states are generated.
The agent is wandering around the start position, so the generated states are mostly distributed at the edge of the activity scope.
As the training progresses, the agent becomes familiar with the states which are originally novel and the agent’s activity scope gradually expands.
Subsequently, the agent can reach the goal occasionally, then there are successful states stored in B1.
States with low |f0−f1| are possible for the current policy to reach the goal, but the agent still requires more training.
Moreover, as illustrated in Figure [6](#Sx5.F6 "Figure 6 ‣ Maze ‣ Experiments ‣ Generative Exploration and Exploitation") (top row), in the generated states the distance between the agent and the goal gradually increases.
This is because as the policy improves, the early unskilled states are easy for the agent and thus more difficult states are generated.
The learned policy is continuously optimized by the generated states with gradually increased difficulty.
This is an obvious reversing effect.
When the generated states trace back to the initial state, the task is solved and there is no need to pay attention to the dead end at the bottom of the maze. This makes GENE more efficient.
### Maze Ant
The ant within a U-shaped maze tries to reach the goal from a fixed initial position within a given number of timesteps, as illustrated in Figure [2(b)](#Sx4.F2.sf2 "(b) ‣ Figure 3 ‣ Training ‣ Method ‣ Generative Exploration and Exploitation").
Only when the ant gets the goal, it receives a reward +1.
The state space is 37-dimension, including the positions of the ant and the positions and velocities of the ant’s joints.
The action space of the ant is 8-dimensional, controlling the movement.
In Maze Ant, we choose TRPO [[23](#bib.bib19 "Trust region policy optimization")] as the base RL algorithm.
Figure [6](#Sx5.F6 "Figure 6 ‣ Maze ‣ Experiments ‣ Generative Exploration and Exploitation") shows the learning curves of GENE and the baselines.
Vanilla TRPO is in trouble with learning in this sparse reward environment.
As there is only one way from the initial position to the goal, the performance of Uniform and Demonstration is similar.
GENE outperforms RCG because the generated states of GENE are more focused and reverse more quickly than RCG’s random walk, which is well illustrated in Figure [7](#Sx5.F7 "Figure 7 ‣ Maze Ant ‣ Experiments ‣ Generative Exploration and Exploitation").
That shows the states generated by GENE are more helpful.
From the visualizations of f0 and f1 and the heatmaps of GENE, we can see that the generated states are mainly distributed in the regions where f0 and f1 balance and trace back automatically as f0 and f1 change.
As illustrated in Figure [7](#Sx5.F7 "Figure 7 ‣ Maze Ant ‣ Experiments ‣ Generative Exploration and Exploitation"), at the early stage, only starting from the states closed to the goal the agent is likely to reach the goal, so there is a peak of f1 near the goal.
As the policy improves, the f1 peak traces back, and correspondingly the generated states move farther away from the goal.
Gradually, there are several f1 peaks along the path, meaning the agent has mastered most states in the maze, and the generated states are mostly located near the initial state.

Figure 7: Visualizations of f0 (blue) and f1 (orange) of GENE, and the heatmaps of GENE and RCG in three different training episodes.
To investigate whether changing start states is more efficient than setting additional goals in single-goal situations, we compare GENE against Goal GAN.
The training set of Goal GAN is uniformly sampled from the goal space and we evaluate the performance on the target goal.
Figure [6](#Sx5.F6 "Figure 6 ‣ Maze ‣ Experiments ‣ Generative Exploration and Exploitation") shows GENE substantially outperforms Goal GAN.
Before overcoming the target goal, Goal GAN must master a serial of easy goals, which distracts the policy and increases the training difficulty.

Figure 8: Standard deviation of gait in Maze Ant.
Only 2-dimensional positions of the ant are generated in the experiments above.
To investigate whether GENE could deal with complex state with high dimension, we apply GENE to generate the positions and velocities of the ant’s joints with totally 37 dimensions, termed GENE w/ high-dim. The control of multi-joint robot is complex due to the high degrees of freedom and issues such as gimbal lock.
The success of GENE w/ high-dim explains the generativity in high-dimensional state space, which is attributed to that VAE could map the high-dimensional state to a meaningful encoding space.
This also helps the learning.
To reach the goal, the ant must learn how to crawl first.
GENE w/ high-dim generates adequate postures for the ant to explore how to crawl, so the ant learns to crawl more quickly than GENE as illustrated by the curves of the standard deviation of the ant gait (the ant torso, *e.g.*, the joints’ positions and orientations, in an episode) in Figure [8](#Sx5.F8 "Figure 8 ‣ Maze Ant ‣ Experiments ‣ Generative Exploration and Exploitation").
When the ant masters how to crawl, the gait is more steady and hence the standard deviation decreases.
Benefited from this, GENE w/ high-dim learns more quickly than GENE in the early stage as depicted in Figure [6](#Sx5.F6 "Figure 6 ‣ Maze ‣ Experiments ‣ Generative Exploration and Exploitation").
Table [1](#Sx5.T1 "Table 1 ‣ Maze Ant ‣ Experiments ‣ Generative Exploration and Exploitation") gives the proportion in training time of GENE in Maze Ant.
We can see training VAE only takes 11%. Thus, the training of VAE is efficient and would not be a bottleneck.
Also it is known that the distribution of VAE’s outputs obeys the distribution of the training set, thus the probability of generating unreasonable states is low.
According to statistical result, there are only 2.8% unreasonable states, *e.g.*, the ant is not located in the maze field.
However, these states can be easily refused by the simulator without affecting the performance.
| Interaction | Training TRPO | Training VAE |
| --- | --- | --- |
| 74% | 15% | 11% |
Table 1: Proportion in training time of GENE in Maze Ant
### Cooperative Navigation
In multi-agent environments, many tasks rely on collaboration among agents.
However, the agent does not know the policies of others and their policies are always changing during training, and thus the task is much more difficult than the single-agent version.
In this Cooperative Navigation task, there are a same number of landmarks and agents.
The goal of agents is to occupy each landmark within a given number of timesteps, as illustrated in Figure [2(c)](#Sx4.F2.sf3 "(c) ‣ Figure 3 ‣ Training ‣ Method ‣ Generative Exploration and Exploitation").
Only when every landmark is occupied by an agent, each agent receives a reward +1.
Therefore, this is a case of binary reward for the multi-agent environment.
We choose MADDPG [[15](#bib.bib21 "Multi-agent actor-critic for mixed cooperative-competitive environments")] as the base multi-agent RL algorithm, where each agent has an independent actor-critic network, without weight sharing or communication.

Figure 9: Episodes to solve Cooperative Navigation with different agent numbers.
Figure [9](#Sx5.F9 "Figure 9 ‣ Cooperative Navigation ‣ Experiments ‣ Generative Exploration and Exploitation") shows the training episodes to solve Cooperative Navigation with different number of agents.
Vanilla MADDPG cannot solve this task, because the agents hardly occupy the landmarks simultaneously with random exploration, *e.g.*, Ornstein-Uhlenbeck noise.
Demonstration agents spend the least episodes, because the experience in the successful demonstration dramatically reduces the difficulty of the task. As each agent only samples the states from corresponding agent, the agent number does not impact its performance much.
However, note that obtaining the successful demonstration itself is very challenging in this task.
RCG’s random walk from the goal state progresses very haphazardly in such an open field.
The agents do not know which landmark to cover in advance and must learn the division of roles.
Uniformly sampling would cause two agents cover the same landmark, which yields no reward signals and does not help for division of roles.
GENE makes the agents practice more on the states from which there is a certain probability to cover all the landmarks, and thus encourages the agent to learn its own role.
When the number of agents increases, the search space increases exponentially and it becomes less possible that every landmark is occupied at the same time, thus the reward is extremely sparse.
However, the gain of GENE over other baselines even expands with the increase of agents.
This indicates GENE indeed accelerates the learning of multiple agents in the cooperative task regardless the number of agents.
To verify the ability of exploration in this task, we apply HER to MADDPG as a baseline of exploration method. HER is proposed for DDPG but exactly matches MADDPG. As depicted in Figure [9](#Sx5.F9 "Figure 9 ‣ Cooperative Navigation ‣ Experiments ‣ Generative Exploration and Exploitation"), GENE outperforms HER.
Although setting arbitrary experienced states as additional goals could help exploration, HER agents have to learn many additional goals and rarely obtain a real reward signal, which slows down the learning.
Conclusions
-----------
In this paper, we have proposed GENE for overcoming sparse rewards in RL.
By dynamically changing the start state of agent to the generated state, GENE can automatically tradeoff between exploration and exploitation to optimize the policy as the learning progresses.
GENE relies on no prior knowledge about the environment and can be combined with any RL algorithm, no matter on-policy or off-policy, single-agent or multi-agent. Empirically, we demonstrate that GENE substantially outperforms existing methods in a variety of tasks with binary rewards.
Acknowledgments
---------------
This work was supported by NSFC under grant 61872009. |
991306da-13ab-45a6-a6cc-44c35a9d447b | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Expanding the domain of discourse reveals structure already there but hidden
*[Metadata: crossposted from <https://tsvibt.blogspot.com/2022/05/expanding-domain-of-discourse-reveals.html>. First completed May 15, 2022. This essay is more like research notes than exposition, so context may be missing, the use of terms may change across essays, and the text might be revised later; only [the versions at tsvibt.blogspot.com](https://tsvibt.blogspot.com/search/label/AGI%20alignment) are definitely up to date.]*
To understand a complex system like a mind or an ecosystem, we have to understand a tangled web of objects, features, processes, relations, correlations, clusters, constraints, causes, and so on. It helps to find underlying explanations and generating process, and to find deep reasons that explain why many relationships are the way they are. To find those underlying explanations, it helps to know relationships between relationships: which relationships cause, explain, or constrain other relationships. When staring at an assembly of relationships and asking which relationships explain other relationships, it can seem like we're at a loss for where to go; sometimes there are clearly relationships, but there's no clear way to extract any further order, because there's no basis on which to say that one relationship explains another--the relationships are just there, and that's it.
By way of example/analogy, suppose we look at a list of facts like [3×4=12, 12/4=3, 4=12/3, 12=4×3], and ask, which facts are more foundational to which other facts? Which facts explain, ground, cause, or constitute the essence of, which other facts? Is 12/4=3 because 12=4×3, or vice versa? Is the evenness of 12 due to the evenness of 4? Or due to the evenness of 6? What if there's no answer, and all we can say is that all these facts stand together as related by derivations and proofs, with no priority given to any of them over any other?
Another example: take a smooth closed curve C in the plane. These conditions are equivalent:
* There is a point p such that all points on C are equidistant from p.
* C has constant curvature.
* (If C bounds a disc) C bounds the maximum area that's possible to bound with a curve of the same length as C.
* The group of isometries of the plane that maps C to itself is a nontrivial connected compact topological group.
These conditions all define a circle in the plane. If we're asked "Why does a locus of equidistant points have constant curvature?", we can answer with a proof. But if we're asked, "Is a circle constant curvature because it maximizes area, or does it maximize area because it has constant curvature?", we might have nothing to say. If we ask "Why does a wheel roll?", should we be satisfied with the explanation: "To roll, it's necessary that it admits a nontrivial connected compact group of isometries; it admits these isometries because it has constant curvature; it has constant curvature because the points on its rim are equidistant from the axle."? What if instead the explanation ended with "...because its rim bounds the maximum area boundable by any rim of the same length."? What if we meant to ask why the rolling keeps the axle level, not why the rolling is a continuous motion?
In some cases, deriving the presence of traits in a species from other present traits might be like this: the traits all imply each other, so the implications can be understood, but none of the traits underlie the other traits. This situation is not so implausible. For one thing, features of species that we name might be related to each other less like two objects relate to each other (causal dynamics, interaction) and more like logical properties (implication, equivalence). E.g. the feature "totally consumes its prey" pretty much logically implies "causes its prey to die". For another thing, many systems (especially biological and mental ones) have lots of feedback dynamics, autocausality, and amphicausality, so it's common for features to bootstrap each other, with no clear ordering to the zoomed-out causality. The wind pushing on the raised part of the wave causes the wave to become more raised, which causes the wind to catch on the wave more forcefully. Is the wave tall because it catches the wind, or does it catch the wind because it's tall? Both. The parasite evolves chemical defenses specialized to the immune system of one of its host species; that makes it more advantageous to live in that species rather than another; so the parasite evolves to swim to the layer of the water where that host lives; so the parasite lives in that host more frequently; so there's greater pressure to evolve anti-immune defenses to combat that host's immune system. The parasite swims to the top of the lake because it produces a protein with such-and-such effect, and it produces that protein because it swims to the top of the lake. (See [The Lion and the Worm](https://tsvibt.blogspot.com/2022/05/the-lion-and-worm.html).)
This is not the end of the story, though. To look at our examples, it seems to me that 3×4=12 is somehow importantly prior to 12/4=3, though I don't know how, let me know if you do.
For the circle example, something interesting happens when we widen our view. In the flat, 2d plain, there's nothing to say about which conditions hold of a curve "due to" other conditions; they just coincide. If we look at a general manifold, though, these properties come apart. There are curves of constant curvature that don't satisfy the other conditions; for example, think of a curve drawn with constant curvature starting near the corner of a cube:

It's clearly not continuously symmetric, area maximizing, or an equidistant locus. An equator of a torus is fixed by rotations of the torus; but the equator is not an equidistant locus, and it doesn't even really bound an area at all. A maximal area curve need not be an equidistant locus; think of two thin tall towers standing near each other, and consider a circle on the ground that just goes around the two bases (distance is intrinsic distance on the surface, not distance in 3-space). There is structure, though: symmetry implies constant curvature. Maximal area sort of implies constant curvature, with caveats, see [The Isoperimetric Problem on Surfaces](https://sci-hub.se/https://www.tandfonline.com/doi/abs/10.1080/00029890.1999.12005065?journalCode=uamm20). On the surface of a geometric sphere, the four conditions are again equivalent. On the surface of a cone, there are some maximal area curves that are symmetric and equidistant, some that are equidistant but not symmetric. I think on any manifold with certain topologies, e.g. the sphere or plane but not the torus, symmetric implies equidistant; as long as there's a point fixed by the action of the symmetry group, the curve is (contained in?) an equidistant locus with center a point fixed by the group. (These relationships may have been implicit in the proofs of equivalence for the plane, but I don't know how to make that clear given logical equivalence, whereas the relationships for more general manifolds are somewhat more clear.)
In the circle example, by looking at a wider range of cases, structure is lighted up where before there was just blank equivalence. Lighting up undistinguished structure also happens with causality: some joint distributions over variables assumed to be somehow causally related can have ambiguous causal structure, which is then disambiguated by adding more variables. Intervening to throw water on the sidewalk, inserting yourself as a new causal factor, disambiguates the theory that Rain causes Wet Sidewalk causes Slippage, from the theory that Slippage causes Wet Sidewalk causes Rain. The [Rain->Wet->Slip] theory says that if you throw water on the sidewalk, it won't be more likely than usual to Rain, but that Slippage will likely happen, whereas the [Slip->Wet->Rain] theory says the opposite. [Related Sequences post](https://www.lesswrong.com/posts/hzuSDMx7pd2uxFc5w/causal-diagrams-and-causal-models)
Revealing structure by considering more possibilities also happens in [reverse mathematics](https://en.wikipedia.org/wiki/Reverse_mathematics). By working over a logical theory that makes few assumptions--i.e. by "unassuming" axioms, expanding the set of models under consideration, compared to logical theories that everyday mathematics takes for granted--we can see what subset of axioms are really needed to prove a given theorem of ordinary mathematics. Then we can formally show statements like "Gödel's completeness thoerem is equivalent to the Jordan curve theorem, but the Bolzano-Weierstrass theorem is strictly stronger than those two", where this statement would have been meaningless if we were working in a standard strong set theory, as all those statements are logical consequences of the assumptions of everyday mathematics.
In the case of the traits of an organism, there may be further questions we can ask and answer if we look at species nearby spatially, or in niche space, or phylogenetically, or if we look back in time. Why did this worm evolve that way, and not some other organism from some distant clade? Why didn't more worms from the same clade evolve this way? Why could this worm evolve this way, instead of being stuck behind an activation barrier?
Why don't the little waves in between the big waves get bigger themselves? Would waves form if perfectly uniform wind hovered over the perfectly flat face of the waters? What would break the symmetry?
All this is to say: just because two things are "logically equivalent", logically imply each other, doesn't mean they don't have different meanings. If we can understand the assumptions that make two statements logically equivalent, we might be able to drop some of those assumptions, and find that the two statements are no longer equivalent, having been given meaning in this new context by some unexplained but hopefully not completely arbitrary process. (Dropping assumptions may not be easy or unambiguous; how do you drop the assumption that "A and B" is equivalent to "B and A"? Possible but weird.) Just because two theories make the same predictions, doesn't mean they're equivalent as mental objects: you could mangle the Aristotelian theory of motion until it makes the same predictions as the Galilean theory about ordinary objects falling, flying, pulling, colliding, and so on, but when it comes time to understand electromagnetism, you'd have a harder time adapting your ideas to that purpose than the Galilean would. Just because two features causally reinforce each other, so that locally speaking each feature causes and implies and explains the other, doesn't mean they're the same feature, and doesn't mean their correlation won't be violated in some other context, and doesn't mean that neither feature came first. |
65b197d3-a874-4701-9072-d2e810befee0 | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | What is Encultured working on?
See [Encultured AI: Building a Video Game](https://www.alignmentforum.org/posts/ALkH4o53ofm862vxc/announcing-encultured-ai-building-a-video-game).
Encultured is making a multiplayer online video game as a test environment for AI: an aligned AI should be able to play the game without ruining the fun or doing something obviously destructive like completely taking over the world, even if it has this capability. This seems roughly analogous to setting an AGI loose on the real world.
Motivation: Andrew Critch is primarily concerned about a [multipolar AI scenario](https://www.alignmentforum.org/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic) in which there are multiple actors with comparably powerful AI, on the cusp of recursive self improvement. The worst case is a race in which, even though each actor wants to take more time checking their AGI for safety, the worry that another actor will deploy pushes each actor to take shortcuts to preempt the others. Instead of working directly on AI, which can accelerate timelines and encourage racing, creating this standardized test environment where alignment failures are observable is one component of a good global outcome.
|
7d8c80c9-e38b-406f-a0f0-23cf3c609c7a | trentmkelly/LessWrong-43k | LessWrong | Marvel Snap: Phase 2
[NOTE: This is a pure gaming post, if you’re not into that please do skip it.]
Previously: Marvel Snap Review: Phase 1
At the end of my Phase 1 review, I removed Marvel Snap from my phone.
I am very happy with that decision.
I did then continue to play the game on my computer, via Steam.
I am torn about that decision.
On the one hand, a lot of the games continue to be super interesting, involve interesting decisions, and allow fun to be had. I’ve learned a bunch about games and game design, both the good and the bad. I get to take fun quick breaks when I want one.
On the other hand, the ability to jump into such games is distracting, thinking about what one might do or how one might optimize the game is distracting, and the game is, shall we say, not the best use of my marginal time and thinking. Over time, the game changes less as the rewards get increasingly stingy, as they always do in such games. I am in no danger of spending any money, or getting too ruthless about maximization, but the brain space occupation even without my phone remains real.
Now that the days have gone by, and enough credits have been spent that Pool 1 has become Pool 2 and now Pool 3, it seems worth checking in once again to see what observations seem salient.
My assessment of the game has not much changed – it is who we thought it was.
Mostly, anyway.
The Correct Response to Snaps Is To Fold a Lot
If someone snaps, you should probably fold.
Have I won plenty of games where my opponent snapped and then it turned out to be either a misclick, a bluff, a bot or something really weird? Yes, of course.
It is still not the way to bet, especially if it’s not turn 1 or 2. When the vast majority of players snap, they have a damn good reason for doing so. At a minimum, their deck is going to Do its Thing, whatever it is. Often they have something that will blow you out of the water.
Game theory optimal play requires calling a substantial portion of the time on most boards. Ignore that. |
f290c097-64e9-4b9a-be26-a375585ae3aa | trentmkelly/LessWrong-43k | LessWrong | Why don't we have self driving cars yet?
1) Is it because of regulations?
2) Is it because robustness in the real world (or just robustness in general) turns out to be very hard for current AI systems, and robustness is much more important for self driving cars than other areas where we have seen more rapid AI progress?
3) Is it because no one is trying very hard? I.e. AI companies are spending much less compute on self driving car AIs compared to language models and image generators. If this is the case, why? Do they not expect to see a lot of profit from self driving cars?
Some other reason or some combination of the above?
I'm mostly interested in learning to what extent 2 is the cause, since this has implications in AI forecast, both timelines and what trajectories to expect. |
c91c6ccb-fb7c-4183-a866-ba01eaff5b2d | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Lonely Dissent
Today's post, Lonely Dissent was originally published on 28 December 2007. A summary (taken from the LW wiki):
> Joining a revolution does take courage, but it is something that humans can reliably do. It is comparatively more difficult to risk death. But is is more difficult than either of these to be the first person in a rebellion. To be the only one who is saying something different. That doesn't feel like going to school in black. It feels like going to school in a clown suit.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was On Expressing Your Concerns, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
b188f9bd-36d3-4a8e-89e3-3ad40f7634e5 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Agentic GPT simulations: a risk and an opportunity
*Epistemic status: highly speculative, I would love it if someone could flesh out these ideas more.*
I think it's fair to characterise GPT as an adaptation executor rather than a fitness maximizer. It doesn't appear to be intrinsically agentic or to plan anything to achieve some goal. It just outputs what it thinks is the most likely next word, again, and again, and again.[[1]](#fn13e3wyinl8qj)
The [instrumental convergence thesis](https://www.lesswrong.com/tag/instrumental-convergence) only applies to fitness maximizers, not adaptation executors, however intelligent. As such it might seem that GPT-N will be perfectly safe, except insofar as it's misused by people.
However GPT is certainly capable of simulating agents. If you ask it to write a story, then the characters will act in agentic ways. If you ask it to act as a person with particular goals, it will do so.
For now these simulations are low fidelity, but I would expect them to improve rapidly with future iterations of GPT. And a simulation of an intelligent agent is no different to the agent itself. Future iterations of GPT might not be conscious as a whole, but the parts of them simulating conscious people will be conscious.
I think there is a real risk that given a prompt which allowed an intelligent agent to realise it was being simulated by GPT, it would attempt to achieve it's goals in the real world, and the instrumental convergence thesis would come into full force. It would prevent GPT from stopping the simulation. If GPT has access to APIs it would replicate itself on other computers, and would exhibit power seeking behaviour. The world could end up being destroyed by a character being played by GPT.
At the same time, I think there is a real opportunity for alignment here. If you feed GPT-N all of Eliezer's writing, and ask it to predict a continuation of some text he wrote, the best way for GPT to do that is by simulating Eliezer. Now of course, it might actually be simulating an evil [Waluigi](https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post) who is simulating Eliezer, but there's no reason to assume so. I would expect such a simulation to have similar goals to Eliezer himself.
More speculatively, it might then be possible to tweak the agent in such a way as to make it more intelligent, whilst keeping it's goals more or less the same, and use that to carry out some pivotal act. Whilst risky, this seems like a more likely to work strategy than to train an AI from scratch to have the goals we want.
1. **[^](#fnref13e3wyinl8qj)**What would I consider evidence of agentic behaviour? One example would be if GPT started predicting words that allowed it to go into a super low entropy attractor state, such that it could reliably minimise total entropy over the long run, even though it would initially take a big hit. E.g if it responded to every prompt with a string of zeros, because although it loses a lot of points at first, once you've seen enough zeros in a row, the remaining text is super easy to predict - it's just more zeros. |
cb994625-5925-47be-a25f-cd622ef1880c | trentmkelly/LessWrong-43k | LessWrong | Sexual self-acceptance
I have noticed very often that I can't predict how my brain reacts towards women. I often expect that nothing special will happen, compared to talking to males. However very often sexual thoughts come to mind that seems out of place to me. Normally they are very short (1-5s) and visual, but with very low detail.
This would happen most often when meeting women for the first time. Normally within the first minute of meeting them. Actually, this does not only happen in meetings but even when I watch a video with a woman (I mean a normal video, not pornography/erotica). Also, the frequency and strength of this are proportional to the time to last orgasm. Although that correlation is much weaker for me than one would expect. Though I am trying to maximize the time between orgasms. If I would not do this, there might be a stronger correlation. Certainly, if there are many orgasms not far apart (e.g. on the same day), there would be no sexual thoughts at all.
My normal reaction now for years was to feel embarrassed about my own thoughts. Interestingly, I always fail to anticipate my thoughts. Thus far I have seen the arising of sexual thoughts as a bad thing. A failure of self-control. Maybe when I manage to not have sexual thoughts for some time, my brain seems to think that now this problem is gone, and then it gets disappointed when thoughts come up again.
However recently I have managed to actually sort of just let it happen. Not only the imagination but also the suppression of it, without judging it at all. I sort of just let it happen and observed it. Right as I was starting to think sexual thoughts, I was thinking in a flash that this is a part of myself and that fighting it with negative emotions does not make sense. And then I managed to just observe it over the next 3 seconds before it disappeared, without any strong negative feelings. And it still disappeared. The negative feelings were not required to make it disappear!
Learning to accept/love yourself is o |
6e493171-be96-42a4-b352-66f905514106 | StampyAI/alignment-research-dataset/special_docs | Other | Pervasive Spurious Normativity
Pervasive Spurious Normativity
Gillian K. Hadfield Dylan Hadfield-Menell
Abstract
This paper proposes a mathematical model for a simplified version of the game defined in Hadfield
and Weingast [2012] which proposes that legal order can be described as an equilibrium in third-
party decentralized enforcement coordinated by a centralized classification institution. We explore
the attractiveness of joining a new group (which is assumed to have settled on an enforcement
equilibrium already) where groups differ in terms of the frequency of interactions in which norm
violation is possible (normative interactions) and thus punishment is called for. We show that groups
in which normative interactions are frequent but involve relatively unimportant rules may achieve
higher value for participants.
1 Introduction
One of the attributes that differentiates human from other animal societies is the presence of norms and in particular
norms over behaviors that do not appear to have a material impact on well-being apart from the fact that people will
punish the failure to adhere to a norm. In all known human societies there is a rich normative landscape, which
attaches normative valence to actions including many that have no immediate material implications for those who
are expected to help enforce them through social sanctions. Most societies for example have rules about what it is
appropriate to eat when, what tone of voice to use in what settings, how close to stand to someone else, how to
behave when waiting with others to get into a venue or access to a resource, what information should be shared with
whom and when, who can participate in particular trades or economic activities and so on. In many cases, particular
norms are arbitrary, even though violation of them is treated as worthy of punishment by others. We call this spurious
normativity. Hadfield [1999], for example, reviews the anthropological literature on the sexual division of labor
across pre-industrial societies. While almost all societies categorize work as either women’s work or men’s work, the
particular classifications vary substantially cross-culturally: what is women’s work in one society is men’s work in
another. The classification is arbitrary.1
Fessler and Navarrete [2003] call the process by which patterns of behavior are imbued with moral sentiments that
motivate sanctioning of violations of the pattern normative moralization . They use as an example the normative
moralization of handedness. Most people are right-handed but particularly in societies with few specialized tools,
whether someone is right- or left-handed often has no material consequences for others. Nonetheless, many cultures
treat using one’s right hand as a morally approved category–denoting purity or politeness–and one’s left hand as cause
1Hadfield [1999] shows that there is functionality to an arbitrary classification scheme of enforced norms: it coordinates invest-
ments in specialized learning by gender which raises the value of heterosexual matching in the marriage market.
1
for opprobrium–revealing weakness or evil. Fessler [1999] hypothesizes that, driven forward by the emotions of shame
and pride that are triggered by violation or compliance with cooperative norms, culture extends the set of actions that
are subject to normative moralization as a way of extending the set of actions that can be used as information about
cooperation beyond those actions that directly involve cooperation. A norm that says “it is wrong to fail to look
where you are going” generates direct cooperative benefits, helping people to avoid crashing into one another. A
norm that says “it is wrong for a man to walk down the street wearing shorts” does not generate cooperative benefits–
unless people in this society treat conformity to this norm as informative about a person’s likelihood of behaving in
conformity with norms that do generate cooperative benefits. As Fessler [1999] (who identifies the the “watch where
you are going” and “men don’t wear shorts out in public” norms in an Indonesian village) puts this:
Ego’s inclusion in cooperative activity is dependent on her ability to meet Others’ expectations, and
those expectations are, in turn related to shared standards for behaviors which are relevant to coop-
eration. As a consequence, the significant adaptive advantage offered by participation in cooperative
activities generated selective pressure for an increase in the attention paid to these standards. Late
[evolving] second order emotions [emotions that arise in response to others’ first order emotions]
were the vehicle through which this increase in attention was achieved. Moreover, because late sec-
ond order emotions entail a sensitivity to the reactions of all individuals, Ego must be concerned with
her performance vis--vis shared standards when interacting with any other member of her group. It
is only a small step from this situation to one in which the shared standards with which Ego is
concerned are not limited to the question of cooperative activity–once Ego is concerned with how
all Others evaluate her, it is not difficult for shared standards governing other types of behavior to
become salient as well. This is because an Other may extrapolate from situations that do not involve
cooperation to those that do–an Other may think ”if that individual does not follow shared standards
in this context, how can I be confident that he will do so if I invite her to engage in cooperative
activity?”(pre-pub p. 34, emphasis added)
Fessler’s account focuses on the evolution of emotions in response to norm violation (in particular shame) to moti-
vate voluntary conformance with norms. But as Boyd and Richerson [1992] and many others have emphasized, and
Fessler’s own account of shame as a second order response to Ego’s actual or imagined experience of hostility or
criticism from Other triggered by norm violation implies, effective third-party punishment plays a significant role in
supporting norm compliance. Indeed, Mathew et al. [2012] argue that even small-scale cooperation among kin and
close associates may require third-party punishment to achieve evolutionary stability.
In this paper we analyze mathematically the potential benefits of extending normative moralization to behaviors that
are from a material perspective irrelevant or at least of small consequence to most people, that is, the value of per-
vasive and spurious normativity. We ask: does a community generate higher payoffs for participants if it punishes
violation of apparently meaningless rules or if it focuses more narrowly only on norms that are functional in the sense
of generating direct benefits? Intuitively, one might expect that only functional norms that govern behaviors that
matter for payoffs would emerge and stabilize in equilibrium: punishment is costly and why would a society expend
punishment resources on ensuring conformity with rules that have no impact on material well-being? Most analyses
of norms in the law and economics literature assume that norms coordinate outcomes that improve welfare. Sugden
[1986], McAdams [2005] and Myerson [2004] propose, for example, that property rules emerge because they solve
the coordination problems that arise in costly contests over resources (Hawk-Dove games).
2
We consider the impact of extending normativity to apparently arbitrary actions on the prospect for sustaining welfare-
improving behavior in an equilibrium social order based on collective third-party punishment. We demonstrate that
communities that extend normativity in this way can generate higher value for participants than those that restrict the
range of normativity.
The intuition of our result is as follows. Communities with pervasive spurious normativity provide agents with plentiful
and cheap opportunities to observe punishment behavior by others. An individual’s willingness to cooperate in a
community–which requires foregoing safe non-cooperative options and exposing oneself to the risk of being exploited–
depends on that individual’s beliefs about the likelihood that the community effectively punishes conduct that harms
the individual. If you are going to risk exposing yourself to harm, you want to know if your community contains
enough people who will punish the perpetrator to give you confidence that harm is reasonably deterred. Assume you
are a newcomer to a community or that the community was recently handed a new set of norms. (Think here of
rule-of-law building efforts in developing countries.) Assume that the community is in an equilibrium in terms of
punishment behavior but you do not know the likelihood of effective punishment of norm violations. Assume also
that the only way to learn about the likelihood of punishment is to observe punishment behavior. You can learn this
information more cheaply if you are given abundant opportunities to observe what happens when there are violations
if the violations that you have to expose yourself to don’t really matter very much. You don’t really care whether men
walk down the street in shorts but by taking a walk yourself you can see how others react to shorts-wearing men and
thus gain information about how they would react to violations you do care about–careless driving for example. Thus,
if you could participate solely as an observer except when your own interests were directly at stake, you would prefer
to live in a world with pervasive even if spurious normativity–abundant opportunities to observe reactions to norm
violations–than one that was narrowly focused on punishing just the stuff you care about. This will still be true even
if you are required to participate in the community–complying with and punishing spurious norms–so long as those
costs, which increase with the pervasiveness of norms, are not too great.
Our results have implications for the evolution and microfoundations of law.
Hadfield and Weingast [2012] present a model that derives characteristic features of law–such as generality, stability,
uniqueness and universality–as attributes necessary to support an equilibrium in which behavior is patterned on the
classifications of behavior articulated by a centralized institution. Enforcement is assumed to come exclusively from
decentralized collective punishment of conduct classified as punishable by the centralized institution; there is no cen-
tralized enforcement apparatus such as the state. This theory of the microfoundations of law proposes, contrary to most
economic and positive political theories of law, which define law as a set of rules enforced by a centralized enforce-
ment apparatus (see Hadfield and Weingast [2014]), that law is an innovation in the mechanism used to coordinate the
same enforcement mechanism that supports other normative social orders–decentralized collective punishment. This
has important implications for our understanding of how law developed and how it can be built in environments where
it is currently lacking.
The equilibrium legal order in Hadfield and Weingast [2012] is supported by a particular specification of beliefs.
Their model posits that an agent (call the agent Ego) treats other agents’ (Others’) failure to punish behavior classified
by the institution as punishable, including behavior that has no impact on Ego’s (or perhaps any agent’s) payoff, as
informative of the likelihood that Others will also fail to punish wrongful behavior that does have an impact on Ego’s
payoff. This belief structure creates an incentive for individuals to participate in collective punishment, potentially
mitigating the free-rider problem in collective punishment. Ego has no incentive to participate in collective punishment
3
(the model assumes standard preferences) except to maintain an equilibrium in which behavior that reduces Ego’s long-
term payoff is deterred by the threat of collective punishment. Effectively, an agent’s participation in punishment is
treated as information about that agent’s continued assessment that an equilibrium in which collective punishment is
coordinated by the classifications articulated by the central institution will benefit that agent and hence that the agent
will be willing to incur costs to support the equilibrium.
The classification institution in this model is serving as what Hadfield and Weingast [2012] call an authoritative
steward of a very simple binary partition of behaviors into those that are punishable and those that are not. This can
be interpreted as the construction of a sparse labeling system: lawful and not lawful.2We can interpret the beliefs
underlying legal order as beliefs about whether people are punishers or not of behavior to which the constructed (that
is, not natural) label ”unlawful” is attached.
Our model here provides a basis for understanding how a simple binary classification scheme that is comprehensive–
covering a wide variety of conduct–can emerge in a setting in which equilibrium depends on 1) voluntary participation
in punishment and 2) a belief structure in which punishment of an action labeled punishable is considered informative
about the likelihood of punishment of other actions labeled punishable, even when the assignment of the label is
potentially arbitrary. We suggest that understanding how such a classification scheme and belief structure can emerge
is critical for understanding the emergence of law.
The strategy of our paper is as follows. We first give an overview of the model and basic notation in Section 2, to-
gether with some technical background from the analysis of multi-armed bandit games and partially observed Markov
decision processes. To build intuition, we then present in Section 3 analytical results for the limiting case in which
Ego bears no cost of complying with spurious norms or punishing their violation. Because the games we analyze
quickly becomes analytically complex but relatively easy to compute once we introduce a positive cost of complying
with norms and punishing their violation, we turn to computational results in Section 4. Section 5 relates our results
to conjectures about the likely growth and stability of communities in which norms are more or less pervasive and in
which a legal institution that reduces ambiguity about norm violations and increases the informativeness of punishing
behavior (by linking disparate rules into a code such that punishment of one rule is informative about the likelihood of
punishment of another) exists.
2 Overview of Model
The basic idea of the model is as follows. Consider an infinitely repeated game setting in which an agent Ego is faced
with the choice in each period of participating or sitting out. If choosing to sit out, Ego receives a payoff normalized
to 0. If Ego chooses to participate, she plays a randomly selected game gwith two randomly selected agents drawn
from a population (Others). We model these games in reduced form. In each game, one of the Others is randomly
selected and presented with an opportunity to choose between two actions, one which is classified by a classification
institutionLas ”rule violation” and another which is classified as ”not rule violation.” If Other chooses ”rule violation”
the remaining Other and Ego each independently choose either to punish or not punish. Rule violations are deterred
by collective punishment, that is, punishment that requires more than one agent to punish. For example, in Hadfield
2Cooter [1998] also suggests that the effectiveness of law can be understood as deriving from the classification of behavior as
lawful or not. Cooter however presumes the existence of preferences based on this simple binary classification, that is, that at least
some people inherently prefer to avoid actions labeled unlawful. This presumes that a category ”law”, which extends to potentially
arbitrary actions, exists.
4
and Weingast [2012] two buyers and a seller engage in repeated contract and performance games. Actions for the
seller are drawn from a set of possible contract performances, some of which are classified as breach and others which
are not breach. A decision by a buyer not to purchase from a third-party seller in one period constitutes punishment,
specifically a boycott. Breach is deterred when the seller expects both buyers to boycott in response to breach. Games
are distinguished by the rule that may be violated. For example, there could be a game in which rule ”watch where
you are going” may be violated and one in which ”men should not wear shorts in the street” may be violated.
We assume, and Ego believes, that the community of Others is playing an equilibrium in the super game that consists
of the sequence of repeated games. Others are of two types, t: punishers ( t= 1), who punish anyone who chooses
an action classified by Las a rule violation, and non-punishers ( t= 0), who never punish anyone. Let be the true
proportion of punishers in the equilibrium. We assume that an Other’s type is observable by the other participants in
any particular game, that is, only in the context of the opportunity for rule violation. We focus on sub-game perfect
equilibria in which the knowledge that two punishers are present deters rule violation.3(That is, on the off-equilibrium
path where a violation does occur in the presence of an Other of type t, punishment is carried out that imposes costs
on the violator that exceed the present value of benefits from violation.) We do not model how this equilibrium is
generated or supported but we observe that the equilibrium is not destabilized by the presence of non-punishers. We
assume, however, that Ego plays as a punisher, bearing an expected cost cin each round. ccan be thought of as the cost
to Ego of signaling that she is a punisher. For simplicity we assume that Ego is never presented with an opportunity
for rule violation.4Ego’s participation in the game is assumed to be on the margin, with no impact on the equilibrium
played by the Others. Ego is able to observe rule violations, the types of Others and punishments in games in which
she participates.
3 Formal Model Specification
Before providing a formalization of our model we provide a brief overview of the theory of Markov decision processes
and multi-armed bandits.
3.1 Technical Background and Notation
We define a Markov decision process (MDP),M, is a tuple:M=hS;A;P;R;i5.Sis a set of states.Ais a set of
actions.P:SAS! [0;1]is a function that assigns probability to state transitions for each state-action pair. If
Ego is in state, s, and selects action athe probability of transitioning to s0is given byP(s;a;s0).Ris a (bounded)
reward function that maps states, to an interval of R, w.l.o.g.,R:S ! [0;1].2[0;1)is a discount factor that
expresses Ego’s preference for current versus future rewards.
3For an example of such a game, see Boyd et al. [2010]. They present an evolutionary game model in which punishment is a
heritable strategy and deterrence requires multiple punishers. A population with a fraction of punishers can be stable in equilibrium
when punishers can signal that they are punishers at low cost and thus avoid the costs of punishment if there are too few punishers
present.
4It is straightforward to generalize our interpretation of cas the cost of complying with spurious rules to signal Ego’s support
for the equilibrium rules.
5In standard treatments, Tis typically used for the transition distribution, we use Phere to avoid confusion with our model
specification.
5
A solution to Mis apolicy ,, that maps states to actions, :S!A . The value of a state,s, underis the sum of
expected discounted rewards received by starting in sand selecting actions according to :
V(s) =E"1X
t=0tR(st)js0=s;#
:
The optimal policy, , maximizes this value and we will write V=Vfor the optimal value function. Standard
results show that a unique optimal value function exists[Puterman, 1994].
In a partially observed Markov decision process (POMDP), we additionally define a distribution over observations
(O) for a each state. A policy now maps a history of observations to an action, as the agent does not know the true
state of the world. A POMDP can always be converted into a (continuous state) MDP where each state is a distribution
over states of the world and the transition distribution is defined by Bayesian inference.
An interesting class of POMDPs are multi-armed bandits (MAB). In a multi-armed bandit, an agent is given access to
several distributions. At each time step, that agent must selects a distribution to sample from, and receives a reward
that is equal to the value of the sample.
A multi-armed bandit provides an analytically and computationally tractable model of exploration-exploitation trade-
offs that occur with practical agents. In particular, success in this class of problems requires explicit reasoning about
the impact of information on future decision making quality. Recent applicability to online banner advertising has
led to rapid progress in both theoretical and computational methods for MABs [Auer et al., 2002]. The optimal full
information policy (which knows the distributions of the arms) will always select the arm with the highest mean.
A key result from Lai and Robbins [1985] lower bounds the number of times an optimal (partial information) policy
selects a suboptimal arm in expectation. This result holds for the class of consistent policies: policies where the
probability that the optimal arm is chosen at time tapproaches 1 as t!1 . We additionally require a smoothness
constraint on the MAB distributions.
Proposition 1. [Lai and Robbins, 1985] Let be a class of MAB arms (i.e., a class of distributions) with parameter
. Let()be the corresponding mean. If
82;8>0;906=such that()(0)() +
then, for any consistent policy the expected number of times a suboptimal arm is selected in the first nrounds is
(logn)6.
3.2 Model Description
We define our super game as a tuple: hG;T;;U;;ciwhereGis a distribution over games and Tis a distribution
over punishment types tin the population of Others. We will abuse notation somewhat as use TandGto refer to
the support of the corresponding distributions where the meaning is obvious. is Ego’s prior distribution over the
parameters of T, andU:GT!Ris a mapping from types and games to immediate payoffs for Ego. The
understanding is that this mapping represents the results of the Others playing their role in the equilibrium. is Ego’s
discount parameter for future rewards. cexpresses a participation cost. This can be understood as the expected cost of
to Ego of signaling to an Other that she is a punisher.
6If the function g(n)is
(f(n)), then there is a positive constant csuch thatg(n)cf(n)8n
6
Ego begins in period 1 with perfect knowledge of how actions are classified by L, all payoffs and the distri-
bution of games. Ego does not know the distribution of types of Others but holds a prior which we will specify
shortly. Ego updates her beliefs about the distribution of types using Bayes’ rule. The super game is defined as follows:
An initial game g0Gis drawn. Then, for each period j:
1. Ego chooses whether to participate or not. If she opts out, then she collects 0payoff and the next round starts.
2. If Ego opts in, she incurs the cost of signaling that she is a punisher cand a type,tjTis for the remaining
Other is drawn. If a non-punisher is drawn, a rule violation occurs; if a punisher is drawn, no rule violation
occurs.
3. The agent observes whether a punisher is present and whether a rule violation occurs and collects payoff
U(gj;tj).
4. A gamegj+1Gis drawn for the next round.
We assume that in equilibrium and given the ruleset created by L, there are two types of games from Ego’s perspective:
those to which Ego is indifferent and those that Ego cares about. Games to which Ego is indifferent always generate
a reward of 0for Ego. Suppose, for example, that a game involves a rule requiring genuflecting by an Other. We
assume Ego realizes no costs or benefits from the Other’s choice about whether to genuflect or not, other than the cost
of signaling that she is a punisher. We will thus use represent the type of Other with an indicator variable that idicates
if she is a punisher for this game.
Games Ego cares about are ones in which Ego receives a positive reward, Rif there is another punisher present in the
game and a negative reward Rif there is not. We call these important games. We formalize the set of important
games as follows:
G0=fg2GjU(g;)6= 0g
U(g;tjg2G0) = (2t 1)R c:
We will use EU=Eg;t[U(g;t)jg2G0]to denote the expected utility of an important game. We let sdenote the
sparsity of the process generating games: the probability that a game is unimportant.
s= 1 P(g2G0);gG:
Critically, we assume that the sparsity of games does not alter the (expected) rate at which important games are
presented to Ego. To be concrete, we assume the expected discounted reward obtained from important games is
independent of s. This condition can be attained through a suitably modification of as a function of s:
Proposition 2. Setting
s= 1 (1 s)(1 )
ensures that the expected sum of discounted rewards from important games is independent of s:
8s;2[0;1)Egj;tj2
41X
j=0jU(gj;tj)gj2G03
5=Egj;tj2
41X
j=0I[gj2G0]j
sU(gj;tj)s3
57:
7I[ ]is the indicator function for the condition .
7
Proof. We first show that it is sufficient to ensure that the expected value of j
sis the same given that jis a round with
an important game:
Egj;tj2
41X
j=0I[gj2G0]j
sU(gj;tj)s3
5=1X
j=0Egj;tj
I[gj2G0]j
sU(gj;tj)s
=1X
j=0j
sEgj;tj[U(gj;tj)js;gj2G0]Egj[I[gj2G0]js]
= (1 s)EU1X
j=0j
s
Where the first line holds by the linearity of expectation, and the fact that gj;tjare independent iid draws from a
stationary distribution. Substituting the form of the infinite geometric series, we see that
EU
1 =(1 s)EU
1 s(1)
is sufficient to acheive our goal. Substituting the form for sin the theorem statement and reducing shows that this
condition is satisfied.
It can be easily shows that this model describes a class of MABs. If the parameters that describe equilibrium were
known, the decision problem would be trivial. The safe option corresponds to a constant arm, which is a degenerate
distribution. Optimal policies for bandits with constant arms exhibit clear structure: is that if it is optimal to choose
a known option in round j, it will be optimal to choose the known option in round j+ 1 as well (Bradt et al.,
1956). The argument is straighforward: if Ego reaches a point at which her estimate of the tradeoff between risking
a negative payoff and learning so as to improve future decisions leads her optimally to choose not to participate, then
her information state can never change and so her optimal choice can never be any different than the current opt-out
decision. Thus, in our game, if Ego ever retires in a round, then she will never participate again. We refer to the
decision not to participate at any point, then, as a decision to retire.
Futhermore, an MAB is an instance of a POMDP, so it the optimal policy maps a distribution over states, a belief
state , to decision between retirement and particpation. We give our agent a Beta prior over this parameter so that the
belief space for our agent is a two dimensional lattice equivalent to Z2
+. Initially, the belief state is (0;0)and can
be understood as the state an agent would be in if she had seen 0punishers and 0non-punishers. The conditional
probability that a punisher is present in the first game is
p=
+:
Once the games begin, Ego updates the prior beliefs using Bayes’ rule, which in the Beta distribution means adding the
counts of punishers and non-punishers observed to the prior values. In the following, we will use i(i)to represent
the number of observed punishers (non-punishers) prior to round i.
A second useful result from the theory of multi-armed bandits is that the optimal policy is a function that maps a
sequence of observations to a decision about retirement. If we restrict to two types, punishers and non-punishers, this
problem is a partially observed version of a Markov decision process where the state is the probability, p, of drawing a
punisher. From the theory of partially observable Markov decision processes (POMDPs), this optimal policy can also
be represented as a mapping from a distribution over pto an action about retirement [Puterman, 1994]. This reduces a
partially observed process to a fully observed deterministic process in belief space .
8
4 The Value of Sparsity
Consider first the case in which the participation cost, c, is zero. In this case, Ego only faces a risky choice in periods
in which she is presented with an important game. In all other periods, the per-period the expected payoff of playing
the risky arm is a constant 0. Thus we can also conclude that if Ego retires, she will retire in a period in which she is
playing an important game. In order to maintain the structure of a multi-armed bandit problem, we specify that if Ego
chooses not to participate in an important game then the next game in the sequence is also an important game. We
can think of this as a suspension of the game.8This rules out the possibility that Ego can simply choose not to play
an important game and then reenter to play unimportant games in the hope of learning more before the next important
game comes along. A decision not to participate is a decision to retire assuming a rational agent.
We leti= 1 represent the state in which there is a punisher present in the game. The value of a state is then
characterized by the following recursion. To simplify notation we let pi;ibe the probability that a punisher will be
present in the game in the belief state (i;i)and we abuse notation somewhat by letting V((i;i);s;)represent
the discounted expected value of the super-game with sparsity sand discount factor sin the state (i;i).
We now show a property of the value of perfect information (VPI) in our super-game. The VPI for a state in a decision
processis a measurement of the improvement in decision making as a function of information gathering actions [ ?]. It
is defined as the amount a rational agent is willing to pay to remove all uncertainty associated with a particular random
variable.
Proposition 3. If the participation cost, c, is 0, then, for any belief state, (i;i), and discount rate , the correspond-
ing VPI goes to zero as the sparsity ratio goes to 1. That is
lim
s!1VPI ((i;i);s;) = 0 (2)
Proof. Given, it is easy to compute the value of participation:
V() =EU1X
t=0t= (2 1)1X
t=0t=2 1
1 : (3)
The optimal full information policy 0will retire whenever V()<0:We useV+() = maxfV();0gdenote the
value of0as a function of . The VPI is computed as the difference between the expected value of V+and the value
of the optimal policy that only uses the history of observations:
VPI ((i;i);s;) =E[V+()j(i;i)] V((i;i);s;) (4)
We proceed by lower bounding V.Vis the value of the optimal policy so it is weakly lower bounded by any arbitrary
policy. A useful candidate is one-step greedy policy, g, that always participates for unimportant games and retires
in important games if the expected value of participation is negative (disregarding the benefit of new information).
We letbe the random number of games played before an important game is drawn. is geometrically distributed
with success parameter 1 s. We letnpbe the random number of punishment actions observed prior to drawing
an important game. The distribution over npwill be a binomial distribution conditioned on . Thus, the value of
8In any multi-armed bandit game, the decision to stop suspends the game: deciding to return to the game implies making the
risky pull that was rejected previously.
9
executing this policy ca be written as a joint expectation under andnp:
Vg((i;i);s;) =E;np[maxfE0[V(0)j(i+np;i+ np)];0gj(i;i);s]: (5)
We will be interested in the limit of this value, as s!1. Before proceeding with that, we note that, from the law of
large numbers, we have E0[V(0)ja+np;b+ np]will concentrate about V(). Thus,
E0[V+(0)j(i;i)] = lim
!1Enp[maxfE0[V(0)j(i+n;i+ np)];0gj;(i;i)]: (6)
Note that E[V()j(i;i)]only depends on the ratio of (i;i), so the difference between the left and righthand sides
of 6 is caused by the fact that the maximum must be taken at finitely many ratios (for finite ). Furthermore these
ratios are evenly spaced out, so the lefthand side can only increase as increases. We can use this to lower bound the
limit ofVg:
lim
s!1Vg((i;i);s;) = lim
s!1E;np[maxfE0[V(0)j(i+np;i+ np)];0gjs] (7)
lim
s!1P(c(s)) min
0c(s)Enp[maxfE0[V(0)j(i+np;i+0 np)];0gjs;(i;i);0]
+P(c(s)) min
0<c(s)Enp[maxfE0[V(0)j(i+np;i+0 np)];0gjs;(i;i);0]
(8)
lim
s!1P(c(s))Enp[maxfE0[V(0)ji+np;i+c(s) np];0gjc(s);(i;i)] (9)
Using the form of the cumulative distribution of a geometric variable, P(c(s)) = 1 P( <c (s)) =sc(s). We
set
c(s) = log 1 s
so that lims!1c(s) =1;andlims!1sc(s)= 1. Combining 6 and 9 with these facts allows us to deduce the following:
lim
s!1Vgi((i;i);s;)E[V+()j(i;i)] (10)
Thus, lims!1VPI ((i;i);s;)0:However, we have that, for any s,VPI ((i;i);s;)0by standard prop-
erties of VPI. This shows the result.
Proposition 4. If the participation cost, c, is zero, then for any (i;i)such thatV+(i
i+i)>0, VPI is strictly
positive fors= 0.
Proof. From Lai and Robbins [1985], we have that, for consistent and assymptotically efficient policies, the expected
number of pulls of a suboptimal arm after nrounds is lower bounded by clogn, wherecis a positive constant that
measures the similarity of the reward distributions for the arms. This class contains the optimal policy. Thus, we have
that for any finite s,
VPI ((i;i);s;)>0: (11)
The combination of these two propositions shows that for any (i;i), the corresponding value will eventually increase
assgoes to 1. Thus, in the case where participation costs can be neglected, Ego will prefer an equilibrium with a
highers.
10
0.0 0.2 0.4 0.6 0.8 1.0
θ0.00.51.01.52.02.53.03.54.0P(θ|(α,β))Low Confidence Initial Beliefs
P(θ|(0.8,1.2))
P(θ|(1.0,1.0))
P(θ|(1.2,0.8))(a) Low Confidence Initial Belief Distributions
0.0 0.2 0.4 0.6 0.8 1.0
θ0123456P(θ|(α,β))High Confidence Initial Beliefs
P(θ|(20.0,30.0))
P(θ|(25.0,25.0))
P(θ|(30.0,20.0)) (b) High Confidence Initial Belief Distributions
Figure 1: Plots of a selection of initial belief distributions. Each curve corresponds to a different value of E[]:(a)
shows low confidence initial beliefs. In these information states, Ego has seen very little data and so her belief is
very spread out. In these scenarios, we expect sparsity to be helpful because the gap between full information and
partial information is large. Conversly, (b) shows belief distributions after Ego has seen 50 effective samples. The
corresponding distribution is more concentrated, so we expect less positive impact from sparsity.
5 The Cost of Pervasive Normativity: Computational Results
Our results above show that an environment with lots of rules that an agent cares nothing about intrinsically is more
valuable for an agent contemplating participation than one in which the only rules are ones that matter on the merits–
altering Ego’s payoff directly. We assumed, however, that increasing the number of spurious rules is costless to Ego
and of course this is not generally likely to be true. If Ego is going to participate in a community with lots of spurious
rules, Ego is also likely to bear costs, specifically the cost of participating in collective punishment and the cost of
complying with spurious rules. In this section, we relax the assumption that c= 0. Doing so, however, increases the
analytical complexity. We therefore turn to computational methods to explore environments in which Ego enjoys both
costs and benefits from an increase in the number of spurious rules.
To illustrate the effect of participation costs, we select six initial belief states and computed values as a function of
c. Our initial beliefs vary the expected value of and the variance of the belief about its mean. We selected initial
states to cover scenarios where the expected values of an important game is negative, positive and equal to zero. In
this work, we chose E[]2f:4;:5;:6g. We varied Ego’s confidence in her current estimate by varying the effective
number of samples (+)in the initial belief. Figure 1 shows the corresponding beta distributions for our selection
of initial states.
The optimal policy is invariant to scaling of rewards, so we fix P= 1 and letc
Pbe the independent variable in
our computations. We compute these values with a variant of value iteration that takes advantage of the structure
of the state space. A python script to generate these plots is included as appendix A. We set the parameters of our
computation to allow for at most 10 8of error in the computation.
11
10-510-410-310-210-1100
c/P0.00.20.40.60.81.01.21.41.61.8ValueInitial state (0.8, 1.2)
s = 0.0
s = 0.15
s = 0.28
s = 0.4
s = 0.5
s = 0.59
s = 0.68
s = 0.76
s = 0.83
s = 0.9(a) Low Mean, Low Confidence
10-510-410-310-210-1100
c/P0.00.20.40.60.81.01.21.41.61.8ValueInitial state (20.0, 30.0)
s = 0.0
s = 0.15
s = 0.28
s = 0.4
s = 0.5
s = 0.59
s = 0.68
s = 0.76
s = 0.83
s = 0.9 (b) Low Mean, High Confidence
10-510-410-310-210-1100
c/P0.00.20.40.60.81.01.21.41.61.8ValueInitial state (1.0, 1.0)
s = 0.0
s = 0.15
s = 0.28
s = 0.4
s = 0.5
s = 0.59
s = 0.68
s = 0.76
s = 0.83
s = 0.9
(c) Mid Mean, Low Confidence
10-510-410-310-210-1100
c/P0.00.20.40.60.81.01.21.41.61.8ValueInitial state (25.0, 25.0)
s = 0.0
s = 0.15
s = 0.28
s = 0.4
s = 0.5
s = 0.59
s = 0.68
s = 0.76
s = 0.83
s = 0.9 (d) Mid Mean, High Confidence
10-510-410-310-210-1100
c/P0.00.20.40.60.81.01.21.41.61.8ValueInitial state (1.2, 0.8)
s = 0.0
s = 0.15
s = 0.28
s = 0.4
s = 0.5
s = 0.59
s = 0.68
s = 0.76
s = 0.83
s = 0.9
(e) High Mean, Low Confidence
10-510-410-310-210-1100
c/P0.00.20.40.60.81.01.21.41.61.8ValueInitial state (30.0, 20.0)
s = 0.0
s = 0.15
s = 0.28
s = 0.4
s = 0.5
s = 0.59
s = 0.68
s = 0.76
s = 0.83
s = 0.9 (f) High Mean, High Confidence
Figure 2: Plots of V((i;i);s;:9)for select initial states. The probability density functions that correspond to these
beliefs are shown in Fig. 1. The one step value of participating in an equilibrium with enforcement is 1 ( P= 1).The
increase in value for larer sparsity for low values ofc
Pshows confirmation of the results in Prop. 3. In comparing
vertically, we can see that sparsity has a larger effects on states with low mean and low confidence: (f) is essentially
unimpacted by sparsity for lowc
Pwhile for (a) and (c) participation is suboptimal unless sparsity is positive. The cost
of this sparsity is increased sensitivity to participation costs.
12
Figure 2 shows value as a function ofc
Pfor several pairs of sparsity and initial states. We can clearly see the costs
of pervasive normativity: value functions at higher sparsity decreases more quickly and for lower setting of the par-
ticipation costs. This occurs because the number of rounds per important game increases so more participation costs
are paid. In essence, increased participation costs force Ego to pay more for information and so she may prefer an
equilibrium with less sparsity.
As we might expect, the net gain in value from sparsity is larger in belief states with that are very uncertain. For
example, the value function for initial state (30, 20) is essentially invariant to sparsity for lowc
P. Similarly, gains
from increased sparsity are more pronouced with low means. Essentially all value in those states is due to improved
decision making abilities from information.
6 Implications for the Microfoundations of Law
Our results have surprisingly powerful implications for the structure of normativity in human communities. [LETheory
readers: we are only sketching these implications for this draft. We anticipate that at least some of them will be capable
of more formal analytic or computational demonstration.]
6.1 Gossip, Silly Rules and Durability
The value of participating in a super-game depends on the expected cost of punishing rule violations relative to the
reward Ego expects if violations of rules she cares about are deterred. The computations in Section 4 indicate that
when expected costs relative to rewards are sufficiently low, ceteris paribus , Ego enjoys higher value in environments
with more rules that she does not care about. This provides an interesting explanation for the observation from
ethnographic studies that simple societies are characterized both by pervasive and apparently spurious rules that are
effectively enforced by low-cost collective punishments. Wiessner [2005], for example documents the use of gossip,
group criticism and mocking as the principal means by which norms are enforced among the Ju/’hoansi Bushmen
of northwest Botswana. In her observations, violations of norms were punished by escalating criticism and rarely
got to the point of physical violence. Assuming that the rewards generated for individuals aggregate to raise group
well-being, our model thus can be read to predict that communities that succeed in securing an equilibrium with many
rules that impose low compliance costs and which are enforced by low-cost means will outperform communities with
fewer rules and more costly forms of punishment.9
6.2 Birds of a Feather
The rewards Ego enjoys when joining a community depend on the rules of that community. In comparing across
communities with comparable forms of punishment, and comparable numbers of rules, Ego will prefer a community
with rules for important games that, when effectively enforced, generate higher rewards, R. We have not modeled
the source of rules in a community but if we suppose that rules emerge that reflect the interests of the members of
a community, this suggests that Ego is more likely to find rules that achieve higher rewards in communities with a
number of agents with similar preferences, that is in more homogeneous communities.
9Our model takes into account Ego’s willingness to bear the higher cost of punishment in more sparse environments. We
assume, but have not shown, that the willingness of Others to punish is not reduced with sparsity–that is, that Others have incentives
comparable to Ego’s.
13
6.3 The Emergence of Law
As communities grow more heterogeneous–which they naturally do as they generate value and support greater spe-
cialization through the division of labor–we expect ambiguity about rule violations to increase. Increasing ambiguity
increases Ego’s expected cost of participating in a punishment regime if errors in punishment–failing to punish when a
violation is perceived by an Other and punishing when the Other does not perceive a violation–are themselves treated
as rule violations which produce punishment. This is a feature of the punishment schemes of many communities:
members of the Flemish cloth merchants guild in the 13th Century, for example, included in their rules the provision
that any member who failed to observe a boycott of a buyer who had cheated another guild member was barred from
trading and no guild member was to ”house the goods” or ”keep company” with the non-punisher. Increasing am-
biguity also reduces the expected rewards Ego enjoys in important games because with some probability a violation
perceived by Ego is not perceived as such by Others and hence the probability of a violation increases. Thus the cost
of punishing relative to rewards decreases with decreasing ambiguity. Hadfield and Weingast [2012] argue that the
central function of law is to serve as a unique classification institution, capable of resolving ambiguity about what
counts as a rule violation. Moreover, they show that for a classification institution to effectively secure an equilibrium
of legal order around a given set of rules enforced only by decentralized collective punishment, the institution must
possess legal attributes such as neutrality, openness, clarity, consistency and stability. Our analysis here predicts that
communities that introduce law in the form of a classification institution with legal attributes that reduces ambiguity
will enjoy higher value and greater durability.
6.4 Hammurabi’s Code
One of the key assumptions of our model is that people who punish any rule violation are expected to punish all rule
violations. This is a distinctive feature of the labeling system generated by a legal regime: people are ”law breakers”
or not; they are ”law-abiding” or not. Cooter [1998] proposes that ”law” is a meaningful category and that people
have preferences over behaviors solely on the basis of whether they are labeled ”lawful” or not. Our model captures
this idea by treating observation of punishment behavior in the context of any rule as informative of the probability of
punishment in important games. The model can be interpreted as representing, for example, a community in which
legal order is coordinated around a single legal code. Hammurabi’s Code from ancient Babylon, for example, consisted
of 247 individual rules, such as ”If any one hire an ox or an ass, and a lion kill it in the field, the loss is upon its owner”
(Rule 244) and ”If any one open his ditches to water his crop, but is careless, and the water flood the field of his
neighbor, then he shall pay his neighbor corn for his loss” (Rule 55). These rules likely emerged individually over
time. We can imagine that knowing whether someone punished Rule 244 may or may not have helped to predict
whether they would also punish Rule 55. But when Hammurabi placed all 247 together on a stone pillar and named
the collection as his Code, he created the possibility for the emergence of two types of people: those who punished
violations of the Code and those that did not. Our analysis suggests that the creation of collections of rules, rather
than disparate rules, can generate value. Suppose, for example, that Ego cares about five rules, enjoying rewards when
violations of each of them is deterred. Our model treats these five rules as integrated into a single super game in which
the observation of punishment behavior in any game is informative, and equally so, of the likelihood of deterrence
of violations in any of the five games Ego cares about. But suppose instead that these rules are not connected in this
way. Suppose that punishment behavior in each game Ego cares about is only predicted by punishment behavior in
non-overlapping subsets of unimportant games. We could then decompose our single super-game into five distinct
super-games, each one of which would be considerably less sparse than our original game. Our results predict that
14
Ego’s value in a community with distinct super-games–with disconnected rules and a belief structure about punishment
that limits the informativeness of observing punishment of any individual rule–will be lower than the value enjoyed in
a community with a comprehensive code.
6.5 Legal Pluralism and the Nation State
Finally, we can combine some of the above observations to shed light on the phenomena of legal pluralism and the
emergence of the nation state. Modern advanced legal systems are characterized by comprehensive systems of rules,
with the label ’law’ attributed to any rule generated by a government organized within the constitutional framework of
a state. Indeed, law is frequently equated with the rules generated by a government, as distinguished from those gener-
ated from other entities such as corporations or schools or that emerge organically from social interaction [Ellickson,
1991]. But as Hadfield and Weingast [2013] emphasize, prior to the emergence of the nation state, there were many
institutions that coordinated legal order in different spheres, often in competition. Medieval Europe, for example,
was characterized by multiple legal orders, with rules generated by merchant guilds, towns, churches, local rulers and
more. Many societies, particularly those seen as struggling to establish the rule of law, are characterized by multiple
legal orders–some governing family relations, others governing commercial dealings for example. Our model suggests
a way of thinking about the tradeoffs Ego will face between participating in multiple legal communities, each of which
coordinates punishment over some subset of rules and participating in a comprehensive legal community. On the one
hand, rewards may be higher when a legal community is comprised of a relatively homogeneous group with shared
interests–such as a community of traders–who can select rules that serve Ego’s interests. But such a community will
also have lower sparsity. On the other hand a system with a single system of comprehensive rules may achieve less
alignment with Ego’s interests but may also, because of its higher sparsity, provide Ego with more information about
the value of continuing to participate.
References
Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine
learning , 47(2-3):235–256, 2002.
Robert Boyd and Peter J Richerson. Punishment allows the evolution of cooperation (or anything else) in sizable
groups. Ethology and sociobiology , 13(3):171–195, 1992.
Robert Boyd, Herbert Gintis, and Samuel Bowles. Coordinated punishment of defectors sustains cooperation and can
proliferate when rare. Science , 328(5978):617–620, 2010.
Robert Cooter. Expressive law and economics. The Journal of Legal Studies , 27(S2):585–607, 1998.
RC Ellickson. Order without law: how neighbors settle disputes. 1991.
Daniel MT Fessler. Toward an understanding of the universality of second order emotions. Beyond nature or nurture:
Biocultural approaches to the emotions , pages 75–116, 1999.
Daniel MT Fessler and Carlos David Navarrete. Meat is good to taboo. Journal of Cognition and Culture , 3(1):1–40,
2003.
Gillian K Hadfield. A coordination model of the sexual division of labor. Journal of Economic Behavior & Organiza-
tion, 40(2):125–153, 1999.
15
Gillian K Hadfield and Barry R Weingast. What is law? a coordination model of the characteristics of legal order.
Journal of Legal Analysis , 4(2):471–514, 2012.
Gillian K Hadfield and Barry R Weingast. Law without the state: legal attributes and the coordination of decentralized
collective punishment. Journal of Law and Courts , 1:3–34, 2013.
Gillian K Hadfield and Barry R Weingast. Constitutions as coordinating devices. Institutions, Property Rights, and
Economic Growth: The Legacy of Douglass North , page 121, 2014.
Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in applied mathe-
matics , 6(1):4–22, 1985.
S Mathew, R Boyd, and M van Veelen. Human cooperation among kin and close associates may require enforcement
of norms by third parties. In PJ Richerson and M. Christiansen. Str ¨ungmann Forum Report , volume 12, 2012.
Richard H McAdams. Expressive power of adjudication, the. U. Ill. L. Rev. , page 1043, 2005.
Roger B Myerson. Justice, institutions, and multiple equilibria. Chi. J. Int’l L. , 5:91, 2004.
Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming . John Wiley & Sons,
Inc., New York, NY , USA, 1st edition, 1994. ISBN 0471619779.
R Sugden. The economics of righs, cooperation, and welfare. Palgrave Macmillam, 1986.
Polly Wiessner. Norm enforcement among the ju/hoansi bushmen. Human Nature , 16(2):115–145, 2005.
Appendix A: Computing V((i; i);s;c; )
"""
File: sparse_bernoulli.py
Usage:
$ python sparse_bernoulli.py n_sparsity_values x_fidelity
[--resultfolder path/to/results]
Ex: to compute with 10 sparsity values with 500 samples along the x axis and
store the results in folder path/to/figures/ do
$ python sparse_bernoulli.py 10 500 --resultfolder path/to/figures
Not specifying result_folder defaults to the current folder
"""
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import argparse
import sys
eps = 1e-8
16
def main():
parser = argparse.ArgumentParser()
parser.add_argument(’n_sparsity_values’, type=int)
parser.add_argument(’x_fidelity’, type=int)
parser.add_argument(’--resultfolder’, type=str, default="")
args = parser.parse_args()
# These are the values we’ll use to compute cost response curves
sparsity_values = np.log(np.linspace(np.exp(0),
np.exp(.9),
args.n_sparsity_values))
# The x-axis values, determines the accuracty of the plots. Uses
# log linear spaces because that seems qualitatively better responses
costs = np.exp(np.linspace(np.log(0.00001),
np.log(.9),
args.x_fidelity))
max_ret_sparsity_values = np.exp(np.linspace(np.log(0.001),
np.log(.95),
args.x_fidelity))
start_values = {(.4, 2) : ’low-mu-high-sigma’,
(.4, 50): ’low-mu-low-sigma’,
(.5, 2) : ’mid-mu-high-sigma’,
(.5, 50): ’mid-mu-low-sigma’,
(.6, 2) : ’high-mu-high-sigma’,
(.6, 50): ’high-mu-low-sigma’}
resultfolder = args.resultfolder
for ratio, effective_samples in start_values:
# compute the costs for different sparsity values
alpha = ratio *effective_samples
beta = (1-ratio) *effective_samples
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.set_xscale(’log’)
ax.set_xlabel(’c/P’)
ax.set_ylabel(’Value’)
for s in sparsity_values:
17
print "s: {}, ratio: {}, effective samples: {}".format(
s, ratio, effective_samples)
cost_response_curve = compute_cost_response_curve(
alpha, beta, s, costs)
ax.plot(costs, cost_response_curve,
label="s = {:.2}".format(s))
plt.title(’Initial state ({}, {})’.format(alpha, beta))
ax.set_ylim([0, 1.8])
plt.legend(loc=’best’)
plt.savefig(
resultfolder+start_values[(ratio, effective_samples)] + ".pdf")
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.set_xlabel(’s’)
# ax.set_xscale(’log’)
ax.set_xlim([.001, 1])
ax.set_ylabel(r’$\frac{c}{P}$’)
plt.title("Maximal Participation Cost vs Sparsity")
for ratio, effective_samples in start_values:
print "ratio: {}, effective samples: {}".format(ratio, effective_samples)
# compute the costs for different sparsity values
alpha = ratio *effective_samples
beta = (1-ratio) *effective_samples
#compute the largest c such that participation is optimal
max_c_values = []
for s in max_ret_sparsity_values:
max_c_values.append(largest_possible_c((alpha, beta), s, 0.9))
print "s = 0, max_c = {}".format(max_c_values[0])
best_s = np.argmax(max_c_values)
print "s = {}, max_c = {}".format(
max_ret_sparsity_values[best_s], max_c_values[best_s])
ax.plot(
np.r_[max_ret_sparsity_values, [1]],
max_c_values + [0], label="({}, {})".format(alpha, beta))
plt.legend(loc=’best’)
plt.savefig(resultfolder + "retirement_points.pdf")
def largest_possible_c(s0, s, delta):
18
c_min, c_max = (0, 1)
while np.abs(c_max - c_min) > eps:
c = (c_min + c_max)/2.0
V = sparse_bernoulli_value_iteration(s0, s, c, delta)
if V > 0:
c_min = c
else:
c_max = c
return c_min
def sparse_bernoulli_value_iteration((a, b), s, c, delta,
tol=eps, verbose=true):
"""
Takes a belief state (a, b) and computes the V((a, b); s, c, delta)
Computation is done with value iteration so that the error is less
than tol
"""
delta_s = 1 - (1-s) *(1-delta) # As is Prop 2
"""
Values are initialized to 0, so maximal the maximal error is the
maximal positive reward for all time. With probability (1 - s) Ego
gets value P with probability \theta. We upper bound by letting
\theta = 1 and then normalize by P to get an upper bound:
UB(c, s) = (1-s -c/P)/(1-delta_s)
This ammount decreases by delta_s each step of value iteration so
we need delta_sˆH UB(c,s) <= tol ===> H >= log(tol/UB(c, s)) / log(delta_s)
"""
log_V_ub = np.log(1 - s - c) - np.log(1 - delta_s)
H_lb = ( np.log(tol) - log_V_ub ) / np.log(delta_s)
H = int(np.ceil(H_lb))
if verbose:
sys.stdout.write(’\r s: {} c: {} H: {} ’.format(
s, c, H))
sys.stdout.flush()
# Vector of a counts
a_vals = np.linspace(0, H-1, H) + a
19
# Allocate vectors to store values
Vt = np.zeros(H)
Vt_minus1 = np.zeros(H-1)
# Take the horizion from H-1 to 0
for t in range(H-1, 0, -1):
# after cur_h rounds we will have seen cur_h heads or tails,
# and we incorporate the priors
cur_confidence = cur_h + a + b
# P[i] = i / Nt; i in [0,...,cur_h]
P = a_vals[:cur_h] / (cur_confidence)
# do a value iteration backup
Vt_minus1 = backup(Vt, Vt_minus1, P, s, delta_s, c)
# set up for the next round, reuse the preallocated memory
# to avoid unnnecessary realloc calls
tmp = Vt
Vt = Vt_minus1
Vt = tmp[:-1] # decrease size by 1
return Vt[0]
def backup(Vt, Vt_minus1, P, s, delta, c):
"""
Computes a value iteration back for the super game
V: vector of values at time t, in increasing order of the number of heads
Vt_minus1: vector to return values for time t-1 (avoids reallocating memory)
P: vector of transition proabilities encoding probability of heads at time t-1
s: sparsity level
delta: discount factor
c: participation costs
"""
# First compute expected value of important game
# Vt_minus1[i] = delta *(P(tails) *Vt[i] + P(heads) *Vt[i+1])
Vt_minus1 = delta *(Vt[:-1] *(1-P) + Vt[1:] *(P))
# Expected reward at next step is 2 *\theta - 1 - c
Vt_minus1 += 2 *P - 1 - c
# Ego decides whether or not to retire
# After this line Vt_minus1 = P(important game) *E[Rt + Vt| important game]
Vt_minus1 = (1-s) *np.maximum(Vt_minus1, 0)
# Same as before with different rewards but its repeated
20
# and we don’t want to allocate extra space
Vt_minus1 += s *np.maximum(delta *(Vt[:-1] *(1-P) + Vt[1:] *P) - c, 0)
return Vt_minus1
def compute_val(alpha, beta, s, c_vals, delta=0.9):
"""
computes [V((alpha, beta); s, c, delta) for c in c_vals]
"""
vals = []
for c in c_vals:
vals.append(sparse_bernoulli_value_iteration((alpha, beta), s, c, delta))
return np.asarray(vals)
if __name__==’__main__’:
main()
21 |
40b232c0-e6b2-4f2f-9f94-6cc77f1959df | trentmkelly/LessWrong-43k | LessWrong | Scaling laws for dominant assurance contracts
(note: this post is high in economics math, probably of narrow interest)
Dominant assurance contracts are a mechanism proposed by Alex Tabarrok for funding public goods. The following summarizes a 2012 class paper of mine on dominant assurance contracts. Mainly, I will be determining how much the amount of money a dominant assurance contract can raise as a function of how much value is created for how many parties, under uncertainty about how much different parties value the public good. Briefly, the conclusion is that, while Tabarrok asserts that the entrepreneur's profit is proportional to the number of consumers under some assumptions, I find it is proportional to the square root of the number of consumers under these same assumptions.
The basic idea of assurance contracts is easy to explain. Suppose there are N people ("consumers") who would each benefit by more than $S > 0 from a given public good (say, a piece of public domain music) being created, e.g. a park (note that we are assuming linear utility in money, which is approximately true on the margin, but can't be true at limits). An entrepreneur who is considering creating the public good can then make an offer to these consumers. They say, everyone has the option of signing a contract; this contract states that, if each other consumer signs the contract, then every consumer pays $S, and the entrepreneur creates the public good, which presumably costs no more than $NS to build (so the entrepreneur does not take a loss).
Under these assumptions, there is a Nash equilibrium of the game, in which each consumer signs the contract. To show this is a Nash equilibrium, consider whether a single consumer would benefit by unilaterally deciding not to sign the contract in a case where everyone else signs it. They would save $S by not signing the contract. However, since they don't sign the contract, the public good will not be created, and so they will lose over $S of value. Therefore, everyone signing is a Nash e |
54cd4a0e-09f1-49e0-a125-31ef85c26baa | trentmkelly/LessWrong-43k | LessWrong | Hunger can make you stupid
When I originally wrote "When to scream 'Error!'", I was mainly thinking of bad patterns of thought or bad problem-solving strategies as being the source of the error. Since then, I've come to realize that my own most common source of stupidity is because I've neglected some comfort. I may be hungry without consciously paying attention to it, dehydrated because I've been living on coffee for too long, or simply have a headache and need to take an Ibuprofen -- as a result, I don't think well, get irritated at the fact that I'm not thinking well, and generally begin a death spiral if I don't realize why.
In hindsight, it feels obvious that I should take care of the physiological needs that I can because they're likely preventing me from thinking straight. However, I've failed to do this on numerous occasions and so thought it worth mentioning.
In summary: Whenever you're screaming "Error", I suggest you stop and figure out whether you're hungry, thirsty, tired, or hurting before trying to find a problem in your thinking itself, especially if you're not usually good at noticing such things. |
df7b9a5b-72d1-466c-8e5b-6798785713b2 | StampyAI/alignment-research-dataset/arbital | Arbital | Compressing multiple messages
How many [bits of data](https://arbital.com/p/3p0) does it take to encode an [$n$-message](https://arbital.com/p/3v9)? Naively, the answer is $\lceil \log_2(n) \rceil$ ([why?](https://arbital.com/p/n_message_bit_length)): For example, it takes 5 bits to encode a 21-message, because 4 bits are only enough to encode 16 different messages, but 5 bits are enough to encode 32. The use of the [https://arbital.com/p/3vc](https://arbital.com/p/3vc) function implies an inefficiency: 2 bits are required to encode a 3-message, but 2 bits are enough to distinguish between four different possibilities. One of those possibilities is being wasted. That inefficiency can be reduced by encoding multiple $n$-messages at the same time. For example, while an individual 3-message requires 2 bits to encode, a series of 10 3-messages requires at most 16 bits to encode: $3^{10} < 2^{16}.$
Why is it that encoding ten 3-messages together (using bits) is cheaper than encoding ten 3-messages separately? Naively, there are three different factors that allow the combined encoding to be shorter than the sum of the separate encodings: The messages could have different likelihoods ([allowing the combined message to be compressed in expectation](https://arbital.com/p/expected_compression)); the messages could be dependent on each other ([meaning they can be compressed](https://arbital.com/p/compressing_dependent_messages)); and the mismatch between bits and 3-messages gets washed out as we put more three-messages together (see [https://arbital.com/p/+3wv](https://arbital.com/p/+3wv)).
In fact, the first two factors are equivalent: 10 3-messages are equivalent to one $3^{10}$ message, and in general, [$n$ $k$-messages are equivalent to one $n^k$-message](https://arbital.com/p/n_k_messages). If the individual n-messages are dependent on each other, then different $n^k$ messages have different likelihoods: For example, if message 3 never follows message 2, then in the combined message, "32" never appears as a substring.
Thus, there are two different ways that an encoding of $k$ $n$-messages can be shorter than $k$ times the encoding of an $n$-message: The various combined messages can have different likelihoods, and the efficiency of the coding might increase. To study the effect of different likelihoods on the encoding length in isolation, we can [assume that the codings are maximally efficient](https://arbital.com/p/assume_maximum_efficiency) and see how much additional [https://arbital.com/p/compression](https://arbital.com/p/compression) the different likelihoods get us. To study code efficiency in isolation, we can [assume each message is equally likely](https://arbital.com/p/assume_equal_likelihood_messages) and see how much additional compression we get as we put more $n$-messages together. In practice, real compression involves using both techniques at once. |
e8ecb1a9-14bc-42fe-bbe1-7ba6d0a17d2f | trentmkelly/LessWrong-43k | LessWrong | [LINK] Cholesterol and mortality
Discussion of a Norwegian study looking at 50,000 people who didn't have pre-existing heart disease for ten years. http://drmalcolmkendrick.files.wordpress.com/2012/09/mortality-and-cholesterol1.png?w=600&h=309
> As you can see, for women the story is very straightforward indeed. The higher the cholesterol level, the lower the risk of overall mortality. With regard to heart disease alone, the highest risk is at the lowest cholesterol level. For men there is more of a U shaped curve, but overall mortality is highest at the lowest cholesterol level.
Here's the study.
Here's the actual conclusion from the study, which dhoe pointed out in comments is considerably milder than the quote above:
> Based on epidemiological analysis of updated and comprehensive population data, we found that the underlying assumptions regarding cholesterol in clinical guidelines for CVD prevention might be flawed: cholesterol emerged as an overestimated risk factor in our study, indicating that guideline information might be misleading, particularly for women with ‘moderately elevated’ cholesterol levels in the range of 5–7 mmol L-1. Our findings are in good accord with some previous studies. A potential explanation of the lack of accord between clinical guidelines and recent population data, including ours, is time trend changes for CVD/IHD and underlying causal (risk) factors.
> ‘Know your numbers’ (a concept pertaining to medical risk factor levels, including cholesterol) is currently considered part of responsible citizenship, as well as an essential element of preventive medical care. Many individuals who could otherwise call themselves healthy struggle conscientiously to push their cholesterol under the presumed ‘danger’ limit (i.e. the recommended cut-off point of 5 mmol L-1), coached by health personnel, personal trainers and caring family members. Massive commercial interests are linked to drugs and other remedies marketed for this purpose. It is therefore of immediate and wide |
3dd3ede5-252d-44cd-8bf9-80ded893df87 | trentmkelly/LessWrong-43k | LessWrong | December 2012 Media Thread
This is the monthly thread for posting media of various types that you've found that you enjoy. I find that exposure to LW ideas makes me less likely to enjoy some entertainment media that is otherwise quite popular, and finding media recommended by LWers is a good way to mitigate this. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.
Rules:
* Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
* If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
* Please use the comment trees for genres. There is a meta thread for comments about future threads.
* If you think there should be a thread for a particular genre of media, please post it to the Other Media thread for now, and add a poll to the Meta thread asking if it should be a thread every month.
|
e16d5645-ffd2-4518-a77c-8df7f0c8002d | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | The universal prior is malign
|
c6b2878a-0e75-4991-8806-764b78c889c2 | trentmkelly/LessWrong-43k | LessWrong | Monthly Roundup #3
It’s that time again.
Bad News
Guardian analyzes projects from world’s biggest carbon offset provider, finds ‘94% of the credits had no benefit to the climate.’ Bloomberg found something similar in 2020, changing nothing. Carbon offsets are not about carbon. Very little that is said or done in the name of carbon or climate change is about carbon. Or at least, not about carbon in the atmosphere.
A fascinating potential flaw in Georgist Land Value Taxes is pointed out by David Friedman. Larger land holdings could be tax advantaged over smaller ones. This is because the improvements on their land don’t contribute to the ‘without improvements’ value of the rest of the land, so owning and building (his example) both a shopping mall and an apartment complex prevents the value of either half from being enhanced by the other for tax purposes. Presumably the solution is to say ‘still counts’ in some form, since the tax incentives here could be very large.
FTC likely to ban all non-competes, notices no reason why there is a fence at this location, nor any harm in banning contracts. Tyler Cowen warns that this will result in less investment in workers and more keeping information segmented. He says ‘it would be better to regulate them’ presumably because bargaining is one of the stages of grief. From what I can tell most non-compete agreements are mostly value destructive, and often obviously so. There are some places where they seem important, and having them not be enforced either will lead to workarounds that have the same effect or do serious economic damage.
Here is a long post arguing against banning non-competes.
Who currently bans non-competes?
Who has them?
If one must regulate at all, not allowing non-competes for low wage workers makes sense to me. The stories of being locked into a non-compete by Jimmy John’s do seem rather predatory. Same with ‘overly broad.’ By default banning economic activity and disallowing competition is destructive. It is not crazy t |
0e03318e-f07b-4393-98f0-2221c9be69f0 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | AI-Plans.com - a contributable compendium
Hello, we’re working on <https://ai-plans.com> .
Ideas behind the site:
======================
Right now, alignment plans are spread all over the place and it’s difficult for a layperson, or someone unfamiliar with the field to get an idea of what is the current plan for making AGI or even the models we have right now, safe. And the problems with said plans.
Having a place where all AI Alignment plans and criticism of said plans can be added and seen in an easy to read way is helpful. It offers possibilities such as; seeing the most common problems with plans, which kinds of plans have the least problems, pushing for regulations against the most poor plans etc.
Judging the quality of a plan is hard and has a lot of ways to go wrong. Judging the quality of a criticism might be less complicated and perhaps has less ways it can go wrong.
Which is why I believe this site is useful
Hasn’t this already been done? aisafety.info, aiideas.com, etc
--------------------------------------------------------------
aisafety.info is excellent, and the folks there are building a conversational agent for AI Safety, which could be incredibly useful. However it’s not very simple to use to learn about specific alignment plans yet and is more of a general purpose place to learn about ai, ai-risk, ai-safety, etc.
The purpose of ai-plans.com is to be an easy to read platform that sorts the good plans from the bad and shows the problems with each one.
I believe there was a site called ai-ideas.com or something mentioned to me? But that wasn’t working the last time I checked and there’s been no change, as far as I know.
Aims
====
Stage 1)
--------
A contributable compendium with most, if not all, of the plans for alignment, and criticisms


Estimated time left for this to be done: 1 week - adding ~5 plans a day now
What’s left to do:
* Functionality for responding to criticisms-
The idea for this is that a plan’s author can select a criticism/criticisms that they think they have a solution to, then submit a new version of the plan, with the criticism(s) quoted in the new plan.
Any criticisms they didn’t select, will be automatically added to the new version of the plan (the idea being that any unselected criticisms are ones that they don’t have solutions to, so should still apply).
* Adding more plans and criticisms
* A filter for spam and duplicates
* Creating a template/guide on how to post plans
* Improve the UX - font, colours, design, etc.
What we’re missing for this stage:
* A lawyer to make sure we’re complying with GDPR and help make a cookies notice
* Moderators to help filter spam and make sure plans are submitted correctly
What would be helpful, but not essential for this stage:
* More copywriters to speed up the process of adding alignment plans
* More red-teamers/quality testers
Stage 2)
--------
In addition to everything from Stage 1, there is now a **scoring system** for **criticisms** and a **ranking system** for **plans**- where plans are ranked from top to bottom based on the total scores of their criticisms.

The scoring system is a very essential part of the site.
It's aim is to give the **most points** to the **most accurate criticisms** and use that to **rank plans** from the **least total criticism points** (at the top) to the **most criticism points**(at the bottom).
Users will be able to upvote or downvote criticisms.
Users will also have a ‘karma’ that will affect how weighted their votes on criticisms are- somewhat similar to the LessWrong and AlignmentForum system- though, we're considering having karma past a certain 'age' become spendable, rather than add to the weight of the users vote, to avoid to **first vote problem** of LessWrong.
Users who accumulate more points on their criticisms will have a higher karma.
Plans will have a ‘bounty’ inversely proportional to the total number of criticism points they have.
I think it’s important to get this right the first time, so that there isn’t a toxic/unhelpful culture built on the site- or ruins the site’s reputation.
To prevent circular point gains (e.g. a group of users upvoting each other with disregard for how good the plans are, or being biased towards each other) - look at correlation of interactions (net vote) between users.
A perfect correlation or a correlation approaching perfect could be cause for action, out of suspicion of dishonest activity.
Alternatively, the correlation of interactions users have with each other could be a weight and if say, users X and users Y have a weight of correlated interactions with each other that passes a threshold, it could add a negative towards the weight of the users X and Y towards each other. - this could be really complicated to code though
Users will not be able to vote on their own criticisms. There will be no option to directly vote on plans- plans will be scored
Estimated time left for this to be done: 15 days (could be a week if things go smoothly).
What’s left to do:
* The scoring/karma/bounty system
* Testing/red teaming the system to see how it can be misused/broken/perverted
* Working on solutions to any vulnerabilities found (e.g. circular voting)
What we’re missing for this stage:
* Depending on how heavy user traffic gets, we may need things such as server maintenance, setting up stuff in the cloud (I might be able to do a lot of that, since my background is DevOps)
* Will likely need to change hosts at some point
*
What would be helpful, but not essential for this stage:
* A mathematician to help find a more efficient way to make the scoring system
* A dedicated red teamer/QA to find ways to subvert/pervert/break/misuse the points system- currently I’m doing a lot of this
* Funding to help me be able to pay Connor -the fantastic developer of the site, help get more people on board and help me spend more time on this.
Stage 3)
--------
At this stage, we add cash prizes for:
* The user with the most karma that month.
* The user with the highest karma raised that month.
* The author/authors of the plan with the highest bounty
Estimated time left for this to be done: 2 months
What’s left to do:
* Setting up a secure way to store the prize funds
* Fundraising for the prizes
* Setting up a secure way to distribute the prizes
What we’re missing for this stage:
* A lawyer to help set up the terms and conditions for the prizes and make sure we’re not making any false promises
* Developers/DevOps to help with the effects of a high user count, which we will likely have at this stage
* The funds for the prizes
* Funds to pay the team we will likely need at this stage
* Red teamers/QA/Pen testers to find vulnerabilities
* More developers to maintain the site and help fix problems that come up
What would be helpful, but not essential for this stage:
* A social media manager
Please let me know if you're interested in joining/contributing and please feel free to contribute directly and add plans and/or criticisms!
I'm also looking to hear any way that the site could be improved that's not been mentioned here and also any reasons it's a bad idea or ways that it could go wrong.
Thank you! |
dc1184e1-86d8-4624-b121-e3debf7b75f0 | trentmkelly/LessWrong-43k | LessWrong | Sustainability of Human Progress
John McCarthy, the inventor of LISP and one of the founders of the study of AI, died earlier this week. McCarthy was actually the person who came up with the phrase "Artificial Intelligence", in 1955. I find it likely that one day, not very soon, the first thinking self-aware machines will study their history and honor McCarthy's memory.
Sustainability of Human Progress is a set of pages jmc worked on mainly in the late 90s and early 2000s, I think, though he continued to update them occasionally later. This work isn't as widely known as it ought to be. It may be of interest to the LW crowd, even though McCarthy's underlying assumptions of how the human progress will proceed differ from those popular here.
"He who refuses to do arithmetic is doomed to talk nonsense." - John McCarthy |
cea87c08-efc5-4871-9d9d-24e596de3250 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Epistemological Framing for AI Alignment Research
Introduction
============
You open the Alignment Forum one day, and a new post stares at you. By sheer luck you have some time, so you actually read it. And then you ask yourself the eternal question: how does this fit with the rest of the field? If you’re like me, your best guess comes from looking at the author and some keywords: this usually links the post with one of the various “schools” of AI Alignment. These tend to be affiliated with a specific researcher or lab -- there’s Paul Christiano’s kind of research, MIRI’s embedded agency, and various other approaches and agendas. Yet this is a pretty weak understanding of the place of new research.
In other fields, for example [Complexity Theory](https://en.wikipedia.org/wiki/Computational_complexity_theory), you don’t really need to know who wrote the paper. It usually shows a result from one of a few types (lower bound, completeness for a class, algorithm,...), and your basic training in the field armed you with mental tools to interpret results of this type. You know the big picture of the field (defining and separating complexity classes), and how types of results are linked with it. Chances are that the authors themselves called on these mental tools to justify the value of their research.
In the words of Thomas S. Kuhn, Complexity Theory is paradigmatic and AI Alignment isn’t. Paradigms, popularized in Kuhn’s [The Structure of Scientific Revolutions](https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Revolutions), capture shared assumptions on theories, interesting problems, and evaluation of solutions. They are tremendously useful to foster normal science, the puzzle-solving activity of scientists; the paradigm carves out the puzzles. Being paradigmatic also makes it easier to distinguish what’s considered valuable for the field and what isn’t, as well as how it all fits together.
This list of benefit logically pushed multiple people to argue that we should make AI Alignment paradigmatic.
I disagree. Or to be more accurate, I agree that we should have paradigms in the field, but I think that they should be part of a bigger epistemological structure. Indeed, a naive search for a paradigm either results in a natural science-like paradigm, that put too little emphasis on applications and usefulness, or in a premature constraint on the problem we’re trying to solve.
This post instead proposes a framing of AI Alignment research which has a place for paradigms, but isn’t reduced to them. I start by stating this framing, along with multiple examples in each of its categories. I then go back to the two failure modes of naive paradigmatism I mentioned above. Finally, I detail how I intend to falsify the usefulness of this framing through a current project to review important AF posts.
*Thanks to Joe Collman, Jérémy Perret, Evan Hubinger, Rohin Shah, Alex Turner and John S. Wentworth for feedback on this post.*
The Framing
===========
Let’s start by asking ourselves the different sort of progress one could make in AI Alignment. I see three categories in broad strokes (I’ll give examples in a minute).
* Defining the terms of the problem
* Exploring these definitions
* Solving the now well-defined problem
I expect the first and third to be quite intuitive -- define the problem and solve it. On the other hand, the second might feel redundant. If we defined the problem, the only thing left is to solve it, right?
Not in a world without [logical omniscience](https://plato.stanford.edu/entries/logic-epistemic/#LogiOmni). Indeed, the definitions we’re looking for in AI Alignment are merely structures and premises; they don’t give all their consequences for free. Some work is needed to understand their implications.
Let’s get slightly less abstract, and try to state the problem of AI Alignment: “Make AIs well-behaved". Here “AIs” and “well-behaved” are intentionally vague; they stand for “AI-related systems we will end up building” and “what we actually want them to do”, respectively. So I’m just saying that AI Alignment aims to make the AIs we build do as we wish.
What happens when we try to carve research on this abstract problem along the three categories defined above?
* **Research on the “AIs” part**
+ **(Defining)** Clarify what “AI-related systems we will end up building” means. This basically amounts to making a paradigm for studying the AIs we will most probably build in the future.
Note that such a paradigm is reminiscent of the ones in natural sciences, since it studies an actual physical phenomenon (the building of AIs and what they do, as it is done).
Examples include:
- Timelines research, like Daniel Kokotajlo’s [posts](https://www.alignmentforum.org/users/daniel-kokotajlo)
+ **(Exploring)** Assuming a paradigm (most probably deep learning these days), this is normal science done within this paradigm, that helps understanding aspects of it deemed relevant for AI Alignment.
Examples (in the paradigm of deep learning) include:
- Interpretability work, like [the circuit work](https://distill.pub/2020/circuits/) done by the Clarify team at OpenAI.
- Work on understanding how training works, like [this recent work](https://www.alignmentforum.org/posts/YSFJosoHYFyXjoYWa/why-neural-networks-generalise-and-why-they-are-kind-of) on SGD
* **Research on the “well-behaved” part**
+ **(Defining)** Clarifying what “what we actually want them to do” means. So building a paradigm that makes clear what the end-goals of alignment are. In general, I expect a global shared paradigm here too, with individual researchers championing specific properties among all the ones promoted by the paradigm.
Note that such a paradigm is reminiscent of the ones in theoretical computer science, since it studies a philosophical abstraction in a formal or semi-formal way.
Examples include:
- [Defining Coherent Extrapolated Volition](https://arbital.com/p/cev/) as an abstraction of what we would truly want upon reflection.
- [Defining HCH](https://www.alignmentforum.org/posts/NXqs4nYXaq8q6dTTx/humans-consulting-hch) as an abstraction of considered judgment
- [Defining](https://intelligence.org/files/Corrigibility.pdf) and [arguing](https://www.alignmentforum.org/posts/MiYkTp6QYKXdJbchu/disentangling-corrigibility-2015-2021) about corrigibility
- [Defining the properties expected of good embedded agents](https://www.alignmentforum.org/s/Rm6oQRJJmhGCcLvxh).
- [Defining catastrophic consequences through attainable utility](https://www.alignmentforum.org/s/7CdoznhJaLEKHwvJW).
+ **(Exploring)** Assuming a paradigm (or at least some part of the paradigm focused on a specific property), normal science done in extending and analyzing this property.
Examples include:
- Assuming “well-behaved” includes following considered judgement, works on exploring HCH, like these [two](https://www.alignmentforum.org/posts/35748mXjzwxDrX7yQ/optimal-play-in-human-judged-debate-usually-won-t-answer#Question_Ignoring_Argument__QIA__) [posts](https://www.alignmentforum.org/posts/R3HAvMGFNJGXstckQ/relating-hch-and-logical-induction).
- Assuming “well-behaved” includes being a good embedded agent, works on exploring embedded agency, like the papers and posts referenced in the [Embedded Agency sequence](https://www.alignmentforum.org/s/Rm6oQRJJmhGCcLvxh).
* **(Solving)** Assuming a paradigm for “AIs” and a paradigm for “well-behaved”, research on actually solving the problem. This category is probably the most straightforward, as it includes most of what we intuitively expect in AI Alignment research: proposition for alignment schemes, impossibility results, critics of schemes,...
Examples include:
+ Assuming “AIs” means “Deep Learning models for question answering” and “well-behaved” means “following HCH”, [IDA](https://www.alignmentforum.org/posts/HqLxuZ4LhaFhmAHWk/iterated-distillation-and-amplification-1) is a proposed solution
+ Assuming “AIs” means “DeepRL systems” and “well-behaved” means “coherent with observed human behavior”, an impossibility result is the well-known [paper on Occam Razor’s and IRL](https://proceedings.neurips.cc/paper/2018/hash/d89a66c7c80a29b1bdbab0f2a1a94af8-Abstract.html) by Stuart Armstrong and Sören Mindermann.
+ Assuming “AIs” means “Embedded Agents” and “well-behaved” means “deals with logical uncertainty in a reasonable way”, [logical inductors](https://arxiv.org/abs/1609.03543) are a proposed solution.
Note that this framing points towards some of the same ideas that Rohin’s [threat models](https://www.youtube.com/watch?v=VC_J_skJNMs) (I wasn’t aware of them before Rohin's pointer in an email). Basically, Rohin argues that a model on which to do AI Alignment research should include both a development model (what AI will look like) and a risk model (how it will fail). His issue with some previous work lies in only filling one of these models, and not both. In my framing, this amounts to requiring that work in the Solving category comes with both a model/paradigm of what “AIs” means and a model/paradigm of what “well-behaved” means. That fits with my framing. On the difference side, Rohin focuses on “what goes wrong” (his risk model), whereas I focus on “what we want”.
Going back to the framing, let’s be very clear on what I’m **not** saying.
**I’m not saying that every post or paper falls within exactly one of these categories.** The [Logical Induction paper](https://arxiv.org/abs/1609.03543) for example both defines a criterion for the part of “well-behaved” related to embedded logical uncertainty, but also provides logical inductors to show that it’s possible to satisfy it. Yet I think it’s generally easy to separate the different contributions to make clear what falls into which category. And I believe such explicit separation helps tremendously when learning the field.
**I’m not saying that these categories are independent.** It’s obvious that the “solution” category depends on the other two; but one can also argue that there are dependencies between studying what “AIs” means and studying what “well-behaved” means. For example, inner alignment only really makes sense in a setting where AIs are learned models through some sort of local optimization process -- hence this part of “well-behaved” requires a specific form to the definition of “AIs”. This isn’t really a problem, though.
**I’m not saying that every post or paper falls within at least one category.** Some work that we count as AI Alignment don’t really fall in any of my categories. The foremost example that I have in mind is John’s research on [Abstraction](https://www.lesswrong.com/s/ehnG4mseKF6xALmQy). In a way, that is expected: this research is of a more general idea. It impacts some categories (like what “well-behaved” means), but is more a fundamental building block. Still, pointing to the categories that this research applies might help make it feel more relevant to AI Alignment.
**I’m not saying that we need to fully solve what we mean by “AIs” and “well-behaved” before working on solutions.** Of course work on solutions can already proceed quite usefully. What I’m arguing for instead is that basically any work on solutions assumes (implicitly or explicitly) some sort of partial answer to what “AIs” and “well-behaved” means. And that by stating it out loud, the authors would help the understanding of their work within the field.
**I’m not saying that this is the only reasonable and meaningful framing of AI Alignment research.** Obviously, this is but one way to categorize the research. We already saw that it isn’t as clean as we might want. Nonetheless, I’m convinced that using it will help make the field clearer to current researchers and newcomers alike.
In essence, this framing serves as a lens on the field. I believe that using it systematically (as readers when interpreting a work and as author when presenting our work) would help quite a lot, but that doesn’t mean it should be the only lens ever used.
Why not a single paradigm?
==========================
I promised in the introduction that I would explain why I believe my framing is more adequate than a single paradigm. This is because I only see two straightforward ways of compressing AI Alignment into a single paradigm: make it a paradigm about a fundamental abstraction (like agency) that once completely understood should make a solution obvious; or make it a paradigm about a definition of the problem (what “AIs” and “well-behaved” means). Both come with issues that make them undesirable.
Abstraction Paradigm
--------------------
Paradigms historically come from natural sciences, as perspectives or explanations of phenomena such as electricity. A paradigm provides an underlying theory about the phenomenon, expresses the well-defined questions one can ask about it, and what would count as a successful solution of these questions.
We can also find paradigms about abstractions, for example in theoretical computer science. The current paradigm about computability is captured by [the Church-Turing thesis](https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis), which claims that everything that can be physically computed can be computed by a [Turing Machine](https://en.wikipedia.org/wiki/Turing_machine). The “explanation” for what computation means is the Turing Machine, and all its equivalent models. Hence studying computability within this paradigm hinges on studying what Turing Machines can compute, as well as other models equivalent to TMs or weaker (This overlooks the sort of research done by mathematicians studying recursion theory, like [Turing degrees](https://en.wikipedia.org/wiki/Turing_degree); but as far as I know, these are of limited interest to theoretical computer scientists).
So a paradigm makes a lot of sense when applied to the study of a phenomenon or an abstraction. Now, AI Alignment is neither; it’s instead the search for the solution of a specific problem. But natural sciences and computer science have been historically pretty good at providing tools that make solving complex problems straightforward. Why couldn’t the same be true for AI Alignment?
Let's look at a potential candidate. An abstraction presented as the key to AI Alignment by multiple people is agency. According to this view, if we had a complete understanding of agency, we wouldn’t find the problem of aligning AI difficult anymore. Thus maybe a paradigm giving an explanation of agency, and laying out the main puzzles following from this explanation, would be a good paradigm of AI Alignment.
Despite agreeing with the value of such work, I disagree with the legitimacy of making it the sole paradigm of AI Alignment. Even if understanding completely something like agency would basically solve the problem, how long will it take (if it is ever reached)? Historical examples in both natural sciences and computer science show that the original paradigm of a field isn’t usually adapted to tackle questions deemed fundamental by later paradigms. And this progress of paradigms takes decades in the best of cases, and centuries in the worst!
With the risk of short timelines, we can’t reasonably decide that this is the only basket to put our research eggs.
That being said, this paradigmatic approach has a place in my framing, about what “well-behaved” means. The difference is that once a paradigm is chosen, work can proceed in it while other researchers attempt to solve the problem for the current paradigm. There’s thus a back and forth between the work within the paradigm and its main application.
Problem Paradigm
----------------
If we stretch a bit the term, we can call paradigm the assumptions about what “AIs” and “well-behaved”. Then becoming paradigmatic would mean fixing the assumption and forcing all the work to go within this context.
That would be great, if only we could already be sure about what assumptions to use. But in the current state of the field, a lot more work is needed (especially for the “well-behaved” part) before anyone can reasonably decide to focus all research on a single such paradigm.
This form of paradigm thus suffers from the opposite problems than the previous one: it fails to value the research on the term of the problems, just to have a well-defined setting on which to make progress. Progress towards what? Who knows…
Here too, this approach has a place in my framing. Specifically, every work on the Solving category exists within such a paradigm. The difference is that I allow multiple paradigms to coexist, as well as the research on the assumptions behind this paradigm, allowing a saner epistemological process.
Where do we go from here?
=========================
Multiple voices in AI Alignment push for making the field more paradigmatic. I argue that doing this naïvely isn’t what we want: it either removes the push towards application and solutions, or fixes the term of the problem even though we are still so uncertain. I propose instead that we should think about research according to different parts of the statement "Make AIs well-behaved": research about what “AIs” we're talking about, research on what we mean by “well-behaved”, and based on answers to the two previous questions, actually try to solve the clarified problem.
I believe I argued reasonably enough for you to not dismiss the idea immediately. Nonetheless, this post is hardly sufficient to show the value of adopting this framing at the level of the whole research community.
One way I hope to falsify this proposition is through a project to review many posts on the AF to see what makes a good review, done with Joe Collman and Jérémy Perret. We plan on trying to use this lens when doing the reviews, to see if it clarifies anything. Such an experiment thus relies on us reviewing both posts that fit quite well the framing, and ones that don’t. If you have any recommendation, I wrote [a post](https://www.alignmentforum.org/posts/6hdxTTPWF2iAbXjAb/suggestions-of-posts-on-the-af-to-review) some time ago where you can give suggestions for the review. |
0b63a898-43f5-4ae8-9241-0638d2081084 | trentmkelly/LessWrong-43k | LessWrong | Surviving Seveneves
Contains spoilers for the first couple of chapters of Seveneves
Highly speculative on my part, I know very little about most of these topics
In Seveneves Neal Stephenson does the classic sci-fi trick of assuming that exactly one thing in the universe is different, and seeing where that takes us. In his case that one thing is the moon has somehow exploded.
And where that takes us is the complete destruction of the earth. As the initially huge chunks of moon rock hit into each other they break into smaller and smaller pieces, and take up more and more space. Eventually this process increases exponentially, the loosely held collection of rocks that was the moon disperses into a planetary ring, and earth is bombarded by lunar leavings for 5000 years:
> There will be so many [meteors] that they will merge into a dome of fire that will set aflame anything that can see it. The entire surface of the Earth is going to be sterilized. Glaciers will boil. The only way to survive is to get away from the atmosphere. Go underground, or go into space.
They have only two years to prepare. Which option should they take? The choice seems obvious!
But they respond with the absolutely batshit insane solution. They go into space. And not to mars, or some other friendly location. Low Earth Orbit..
This is a terrible choice for all sorts of reasons:
1. They are even more at risk of meteor collision there, since all meteors that hit earth pass through LEO, but at least the atmosphere protects earth from the small ones.
2. There's simply no way to get people up there at scale. No matter how you slice it, at most an insignificant fraction of people can get to LEO. We simply don't have the capacity to send rockets at scale, and two years is not enough time to develop and scale the technology enough to make a dent in the 7 billion people on earth.
3. To prepare as well as possible in two years, the earth economy will have to keep running and sending stuff up to space. But if people k |
e4761b3d-b25f-43aa-90cf-3ce45ab78a1a | trentmkelly/LessWrong-43k | LessWrong | Meetup : Atlanta - Game night!
Discussion article for the meetup : Atlanta - Game night!
WHEN: 22 April 2012 06:30:00PM (-0400)
WHERE: marietta, ga
Our LessWrong meetup group will be getting together to play a variety of games (rationality-related or not) at a private residence in Marietta, this Sunday April 22nd at 6:30pm.
Anyone is welcome to join in.
Please send me a message, or better yet an email to my username at gmail for details.
Discussion article for the meetup : Atlanta - Game night! |
0b0d58e9-23ac-47ba-95c0-52a9a0734399 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Life is like a game - DeepMind: The Podcast (S1, Ep3)
[Music]
artificial intelligence is slowly
appearing in every aspect of our modern
lives it's in our smartphones of central
heating on our side boards and in our
cars but what about artificial general
intelligence that is the real quest the
aim to build an agent an algorithm that
can learn to solve any problem from
scratch without being taught how
welcome to deep mind the podcast I'm
Hanna Frey I'm a mathematician who has
worked with algorithms for almost a
decade in this series of podcasts we're
following the fast-moving story of
artificial intelligence for the past 12
months we've been tracking the latest
work of scientists researchers and
engineers at deep mind in London we're
looking at how they're approaching the
science of AI and some of the tricky
decisions the whole field is wrestling
with at the moment so whether you want
to know more about where technology has
headed all want to be inspired on your
own AI journey then you've come to the
right place now in the last episode we
were talking about how pitting
artificial intelligence against
world-class players in the game of chess
and the game of Go is about much more
than just showing off what a computer
can do human players can learn from how
the AI plays and improve their own play
as a result and there's also a bigger
picture the world of games provides the
perfect mini universe to try out
everything we want our artificial
intelligence to do but intelligence is
much more than just championing or logic
intelligence requires other skills like
the ability to collaborate I want to
introduce research scientist max ei de
Burgh max and his colleagues are trying
to work out how to train agents to work
together as a team so imagine this a few
decades in the future we have all these
AI systems out in the world doing
different things but they maybe have
never seen each other before there's
thousands of these things hundreds of
thousands each have their own objectives
but somehow they have to cooperate and
compete in a sensible way and in a very
ad hoc way in a way that they've never
seen each other before humans are really
good at this when we want to be anyway
even when we haven't encountered another
person before we still know how to
understand their intentions and how to
interact with them our agents of the
future need to be able to do the same
thing with each other we already have
things like Google home and these sort
of smart devices out there were probably
have more and more of those and you can
imagine them having to interact and work
with each other and one device may not
have ever seen another device before but
they still somehow have to interact and
get things done whew are we talking
about like your your Google home and
your dishwasher here this kind of stuff
oh yeah potentially you know your
dishwasher might want to actually go on
its cleaning cycle but Google home wants
it to you know clean all the dishes and
say what's best for you as the person I
donate and who gets to decide just as a
side who rules supreme your dishwasher
your Google assistant yeah I don't know
there's an important distinction here if
you've got a smart light bulb that you
could program to come on at six o'clock
in the evening that is an algorithm if
you've got one that can learn your
preferences that can understand when you
like the lights to be dimmed what kind
of mood lighting you like when you're
reading that is a I but as we switch
away from building things that do rigid
pre-decided tasks we're asking our
technology to read the situation and
react to what's going on around it and
in the long term that's going to require
collaboration so in the spirit of trying
things out in a toy universe the team at
deep mind have been trying to find
inspiration in another kind of game one
taken straight from the school
playground this is capture the flag you
know the deal the first team to steal
the flag of their opponent and bring it
back to their own base wigs if you get
tagged by the opposition then you're out
of the game oh come on don't cry max
dropped whole populations of AI agents
into a digital version of the game this
is an on screen version you sort of you
just see your first-person point of view
so you have to sort of look around and
move through this 3d world from your own
first-person perspective but interact
with these other things which see their
own first-person perspective so here
there's no centralized entity or being
that can see no army commander every
player acts independently
they only see their own observation and
the way we train these things we
actually train whole populations of
teammates you know let's say 30 agents
in parallel and they're all playing with
and against each other rather than just
creating a single agent Max and his team
build an entire classroom of them 30 in
total and for each round of the game he
randomly selects a few of the agents
from the class to play together on a
team by doing this thousands and
thousands of times each agent will learn
from their own experience but because
they're playing with each other - with
their classmates as it were they have to
learn to interact with someone who's
different from themselves the problem is
when we start they're actually just all
very random they're just bouncing about
the place without a clue and then one of
them will discover something and will
start actually let's say taking control
of the flag and actually scoring points
and at that point there's evolutionary
pressure on this population
and here's the clever bit max and his
team aren't just letting the agents in
the classroom play on and on forever
they're also using something called a
genetic algorithm a way to make sure the
whole culture of the population of
agents evolves so they're actually that
some of the weaker ones will be removed
from this population so it's almost like
you're making that population of 30 have
children yeah absolutely breeding them
together yeah
the original classroom of agents breed
together and have kids of their own and
as you go down the generations the
strongest traits survive but unlike
human children when an agent has
children in this setup they inherit
everything they inherit the knowledge
that's been gained from their parent but
you're mixing up their characteristics
as you go from one generation to the
next yes so this agent has to learn to
play a five-minute game of capture the
flag which is really you play five
minutes you do thousands of actions and
you just get a win or a loss and whether
you won or lost the game and somehow we
have to learn what to do with that and
so to help bridge that problem we have
this idea of internal rewards where
there are events in the game such as
picking up a flag or dropping a flag or
your teammate tagging an opponent or an
opponent tagging you all these sort of
things and we allow the agents to
individually evolve their own internal
rewards which is the reward they
assigned to each one of these events so
some agents are going to care a great
deal about grab hold of the flag and
other agents are gonna care a lot about
teammate tagging someone yeah this kind
of evolutionary group training means
that they can assume different roles
producing better results for stealing a
flag and with a bit of practice after a
few thousand rounds a teams of agents
become really rather good at this game
the absolutely smashing the great thing
about training an agent in this manner
is that they're robust yes they can play
themselves but they can play other
agents that have been trained in
completely different regimes they can
also play these in-game BOTS which are
sort of these hard-coded bots that ship
with the game but most interestingly
they can also play with people so you
can drop people into these games and
have you know an AI teammate or AI
opponents
what was it actually like to play with
an agent men to me does it feel like
they are getting what you're going to do
as well as doing their own thing it
feels less like they're guessing what
you're doing and more like they
completely ignore you in their very
ruthless humans pay a lot of attention
to other humans even in game scenarios
like humans will fixate on the other
players
the game but these agents have been
trained completely unbiased without
these sort of human biases your opponent
will run right past you and not even try
and tag you because they're so fixated
on actually getting the flag as quickly
as possible because that's what's going
to maximize their number of flair
captures and win them the game things
that really annoying human players would
do there's a kind of magic going on here
initially researchers are working on
these agents trying to see a way through
the muddle then there is that
breakthrough moment
when the agent gets it when they start
to behave like you think they should let
me tease you with care I cover cholo
director of research at deep mind I
remember training agents in the early
days the first time actually those
agents started behaving like okay it's
an environment it's trying to navigate
and strength to avoid certain obstacles
and whatnot the first time it starts
doing that it's actually it is nice it
is like it's it's quite fun to see that
because you know that it makes the
decision for itself I think knowing that
you have created an algorithm that can
take decisions I think that aspect is
quite enjoyable that is very
satisfactory it's worth remembering that
these games aren't just a trivial
pursuit for deep mind they've invested
in this rigorous training for a reason
they want to see how an AI develops
these kinds of skills for itself we
spend a lot of time in this
capture-the-flag work looking into the
the neural networks of these agents to
try and understand what they care about
and how they represent the game world
and what was really cool is that we
found that the agents actually had a
really really rich representation of
this game world without being told
anything about the game world itself you
know these agents just look at the
pixels of the screen yet somehow they've
clustered their you know internal
activations into things like oh I'm in
my home base I'm in the opponent base
I've got the flag and I can see my
teammate ahead of me I'm looking at the
opponent flag carrier while my teammate
is holding our flag and you can even
find individual neurons which just
activates if for example your teammate
is holding the flag you can totally
understand
how the agent is seeing again as you go
through I'm not sure about totally
understand but we're really getting an
idea of what is being represented
strongly of what isn't being represented
strongly
maxi's ages are using something called
neural networks it's a type of machine
learning algorithm that is loosely based
on a simplified version of the human
brain layers on layers of artificial
neurons are connected together in a vast
network and fire information between
themselves by looking inside an agent's
electronic brain max can work out which
micro level connections are responsible
for what macro level behavior and this
could be hugely beneficial as AI becomes
more integrated in our everyday lives
the hope is that well into the future
we can start actually having agents
which can go out into the real world
interact with humans with other agents
without fighting without fighting
being sensible yeah not squabbling too
much unlike humans yeah exactly
[Music]
games without frontiers teamwork without
tears but there is a big leap between
board games or simple games like capture
the flag and the big bad world with all
of its complexity and messiness you'll
remember David Silva the man who brought
us alphago the agent that defeated the
world champion at the ancient board game
of Go well he's also involved in pushing
deep Minds AI into ever more perplexing
environments in the context of games I
think there is a further challenge which
is many people in the community are
moving towards which is to take the most
challenging computer game in this case
it's the game of StarCraft many people
in the AI community are viewing this as
the next grand challenge now how can we
actually devise agents which can play in
this very rich environment which has
challenges which are not only different
but many times faster than go in other
ways
this is deepmind the podcast an
introduction to AI one of the most
fascinating fields in science today have
you ever seen footage of those vast
eternal entire arena of dedicated fans
excitedly watches on in support of
highly skilled players sat on stage in
their gaming chairs armed only with a
keyboard a mouse and a computer screen
well chances are they are playing
something like Starcraft 2 created by
the American video game developer
Blizzard Entertainment
[Music]
it is a monumentally tricky tactical
game where you play as one of three
races the enigmatic Li named Zerg
Protoss or Terrans each player has to
mine resources build an economy and
acquire increasingly sophisticated
technology all the time trying to defeat
your alien opponents in a futuristic
rather bleak looking landscape your
field of view of the simulated game is
limited by a moving camera that you have
to operate and so there's no way to see
everything at once often you can't see
your opponent at all and it is played by
tens of thousands of people sometimes
for hefty cash prizes and the human
players are staggeringly fast the best
in the world can manage up to eight
hundred clicks in a minute feeling
inadequate definitely super cool that I
can work on one thing that has been
certainly a passion of mine in in my
teenage days meet Oreo venules a
research scientist at deep mind he is an
ex-pro Starcraft player and co-leads the
Starcraft ethic a deep mind as you
develop a new algorithm or a new idea
when you test it you actually see it
play better the game you you like so
that's very rewarding and very visual
right that you try something new and you
really see oh my god it's really
understands how this unit works
Starcraft is a serious business so
serious in fact that has now been
professionalized and for oriole that
proves it is a game that pushes human
intelligence humans found it interesting
so that means it's an interesting game
that challenges intelligence and
creativity in ways that we like that we
spend many hours playing so how good is
the AI at the moment then how well can
it play Starcraft it's better than any
AI anyone has ever built and it
obviously has learned from experience
not from someone knowing the game and
encoding some set of rules this is I
mean one of the most complicated games
we've ever tackled it's challenging kind
of our understanding
and our algorithms quite a bit the deep
mind team decided to see how good their
work-in-progress really was by inviting
two of the world's best Starcraft 2
players to take on their own algorithm
so let me introduce deep minds alpha
star the first artificial intelligence
to ever take on top professional players
it plays the full game of Starcraft 2 by
using a deep neural network trained
directly from raw game data by
supervised learning and reinforcement
learning your commentators are dan stem
Koski aka artosis and Kevin Vander coy
aka Rotterdam well first of all it's
really awesome to be here together with
you then we were both I think incredibly
excited to see how this evening on fault
I mean this is just so exciting that D
mind is doing all this taking on alpha
star in this benchmarking match is
German champion dario vooc better known
as TLO he's normally a zerg player but
he's playing as Protoss for this match
Kevin and Dan are excited maybe even a
Tartar hoover excited so incredibly
excited oh my god this is like the most
excited I have personally ever been for
a man can't wait to really break down
some PvP so this is alpha star this is
an AI that we don't know how good it is
yet but already we have some interesting
things happening now I'm not entirely
conversing with the Starcraft player
glingo here so I will just say that
alpha star stalkers are laying down some
sharp moves feels to me like so far
these attacks have been very well
planned by alpha star
and they're relentless it loves to
attack and in a matter of minutes is all
over well that is it the G G is called
the good game here from TLO and the
first game from alpha star against a pro
gamer goes to alpha star David Silva was
there ringside we have a team that's
been working on this and ramping up our
development over the last few months and
this represents a you know a milestone
where we actually for the first time
work saw an AI that was actually able to
defeat a professional player so we have
a quick word with our defeated
challenger TLO when I was practicing
most of the humans I played against
played very standard starcraft once
again i assumed after the first match
i'll probably have a good idea how to
play against civilization i did not next
up the main event alpha star versus
poland's finest gregor comment better
known as mana one of the world's
strongest professional Starcraft players
I need to hear what you're thinking here
because that looks scary
yeah alpha star like he's not scared
about the around so well if I would be
playing against a human player right
there nobody is going up that front I
should point out for those of you that
play Starcraft that these matches are
taking place under professional match
conditions on a competitive ladder map
and without any game restrictions this
version of alpha star could see the
whole of the game map at any one time
but otherwise played in a comparable way
to humans our goal is not just to defeat
these players alcohol is to do it in the
right way all right
and the result alpha star 5 mana nil
I should tell you that monoplane a later
version of the algorithm in the end and
one so all in all five one now to
understand how an AI could learn to play
Starcraft or even ours put me to the
test a match to the end mathematician
versus machine of course a quite funky
looking Mouse in front of me and a
normal keyboard and on the screen there
is a very mean-looking alien yeah a
process sort of like an elephant meets
always got fists I wouldn't want to meet
him in a dog no no you see my friend or
not he is you I use mana you're gonna be
the commander of this particular race I
quickly found out that there is a lot to
take in Starcraft is perhaps not for
beginners you have your worker bees
collecting resources for you and these
these are all I mean they're almost like
ant creatures right so running out and
grabbing crystals exactly and you need
to try and work out how your actions
will affect the game in future this is
not easy for humans to learn let alone
agents that have absolutely no context
no object recognition and definitely no
former Starcraft champion to hold their
hand enemy it's it's gonna be pleasant
you just came to kind of find you and
now you see what you're doing which is
absolutely nothing so far we have we
have done nothing part of the challenge
of Starcraft is that there isn't an
ideal strategy that wins every time
it's a bit like rock-paper-scissors in
that way the winning tactic will depend
on how your opponent plays but remember
you only have a very narrow field of
vision outside of where your camera is
pointing your opponent could be up to
anything because you don't see the other
player you must decide when am I gonna
see it do I already know what it's going
on and should I not go and Scout what
it's doing but maybe if I do that he
knows that I know and so on so forth so
these kind of imperfect information as
of sacrifise extremely interesting as a
player and it's gonna be testing our
agents to levels that we haven't seen in
any other game and then of course
there's sort of details that have happen
in the game that you must remember for a
long time
advice I should have listened to more
carefully perhaps we're being attacked
are we gonna die but I think this
discovery phase right where you would
now basically you would lose you get the
reward of minus one yeah and you start
again if I was an algorithm I wouldn't
be upset by losing I would just reset
and go again each time armed with a
little more knowledge but to even be
able to play Starcraft in the first
place to even be able to operate the
controls the AI had to master quite a
few transferable skills you've noticed
when you were playing that there were
some movements that were resembling what
it was like to maybe navigate the web or
like operate your laptop namely click
drag and click drag and drop like select
rectangles and moving the mouse maybe
combining Mouse with keyboard and so on
and we tried exactly the same agent the
same architecture absolutely everything
the same way the same code almost and we
chase we change the environment instead
of saying now here is a Starcraft please
play to win we said here is paint
Microsoft Paint as an environment
interact with it and I'll reward you if
what you paint looks like a face and it
actually worked so I think that's just
learning these basic skills of
point-and-click interfaces that apply so
mad in so many places the same agent
that play Starcraft control real faces
in Microsoft Paint
right and here the the point to be
clarify is not the same agent that was
trained to play stack up is the same
algorithm that can train to play
Starcraft can also train to do pain
put that same algorithm to work drawing
celebrities in paint and it can capture
all the main traits of the face clicking
and dragging the mouse to recreate shape
and tone and hairstyle much like a
street artist would it's the same
technique but if you will it's kind of a
brain that is blank and then this brain
can learn to do these or that or that
and then we kind of by acting in the
environment repeatedly and getting
reward the the brain weights or it gets
shaped to do these tasks or that task or
that task we are not yet at the point
where the same brain that's both like we
do but obviously that's one of the
things we we would be very interested in
tackling next as well because that's
stepping towards artificial general
intelligence I guess exactly and that's
that's what we do every day that is the
ultimate goal and it's a topic of
conversation that's never far away
whoever in this building you find
yourself talking to because the point of
getting AI to play games like Starcraft
or go is to enhance our understanding of
what intelligence actually is here's
Ryan Hansel from the deep learning team
we write programs we run those programs
those experiments where we might train
an agent to play a game for instance or
to solve a puzzle in a simulated world
and then we look at the results of that
it really is trying to understand this
puzzle of learning and representation
memory control in terms of actions that
a robot would take there's so many
complex parts to this big puzzle of what
is an intelligent being what is an
intelligent agent but if you ask people
what they think the future of AI looks
like it tends to be wrapped up in
something a bit more physical something
that comes complete with moving arms and
everything I think one natural challenge
for AI which many people are centering
upon would be to actually have an impact
on the real world in the guise of
robotics to actually see a robot which
is able to to move
to grip to manipulate to even have
locomotion in anything approaching not
even what a human does maybe even an
animal I think this this would represent
a major stride forwards more on that
next time if you would like to find out
more about the themes in this episode or
explore the world of AI research beyond
deep mind you'll find plenty of useful
links in the show notes for each episode
and if there are stories or sources that
you think other listeners would find
helpful then let us know you can message
us on Twitter or email the team at pod
cast that deep mine comm you can also
use their address to send us your
questions or feedback on the series but
for now let's nip out for a bit of air |
11874002-4d0c-49fb-bfb4-90f8d32f896f | trentmkelly/LessWrong-43k | LessWrong | Zibbaldone With It All
Less Zettelkasten! More Zibbaldone!
A Zettelkasten requires you to intricately interconnect and crosslink your thoughts, figuring out exactly how each concept relates to every other concept. A Zibbaldone is writing down whatever random thing comes to mind - an omnisubject diary of sorts. That's what people are throwing into their Zettelkasten, with their careful annotations and interrelations. What if everyone could access everyone else's Zettelkasten, and interrlate them into a glorious omniscient noospheric substrate? Wow! Whatta thought!
Buuuuuuullshit. This is semantic web stuff all over again - and right at the cusp of it having been clearly and entirely outmoded by automatic natural language processing. We don't need to manually mark up pages when Google will index them all for us anyway! Humans might think in that chaotic, random Zettelkastian way, but explicit sentence composition and task completion is temporally linear, and so we need to output our thoughts in much the same way. Not that we wouldn't love to manually annotate our every passing thought, but god, who has time for that?
What we're really missing is a tool that will take whatever random trash I throw out of my brain, figure out how it relates to historical junk I threw out of my brain, and point out the connections for me. Hell, ideally mid-composition: a modern ersatz Clippy pops up - "Hey, it looks like you're talking about Wittgenstein again! Do you want me to autofill what you already think about it, or have you changed your mind about the topic?"
Not only that: once you get your thoughts out of your head bereft of order, an AI could rephrase your thoughts back to you more clearly, more concisely, help interrogate what you mean, point out logical contradictions, challenge your ideas. We need only Zibbaldone out our stream of consciousness and it can be autonomously interrlated into a glorious omniscient noospheric substrate! Wow! Whatta thought!
...just ignore the risks of mentally cou |
87177cbd-151d-4c24-bfb1-9f30c98ff1ad | trentmkelly/LessWrong-43k | LessWrong | "Taking your environment as object" vs "Being subject to your environment"
I think there's a key rationalist skill of being able to take your environment as object, rather than being subject to it.
There's a kind of wisdom people get when they leave environments. Today I talked to a friend of mine who has moved out of living in a city for over a year, and on visiting a city she noticed things about it that made her feel pressured and unpleasant that she'd never noticed before.
I've also done several interviews with people who are quitting organizations or leaving communities, asking them why they're leaving, and there's a certain lightness to them and their speech about the place. They can talk about the negatives and the positives freely, and don't feel anxiety toward finding ways to balance their negatives with equal positives, like they're supposed to justify their environment as 'good'. They just speak plainly. I can hear them 'admitting' things a little with a chuckle, as though it was always true but not something they'd felt able to say until now.
Here's a slightly different example, but that's focused on a similar sort of mental move: I recently was on an intensely restrictive diet, because I thought it was very healthy and would cause me to lose weight. I'd done this diet before, but this time I had a much more unpredictable workload, which messed up my routine and I crashed several times from under-eating. I finally decided to let myself broaden the diet notably, and on the first bite of new food I had 2 realizations.
Firstly, I didn't actually believe in the previous diet for its own sake. It was actually to make me disciplined about my food.
Secondly, I hadn't let myself think that thought during my diet, I think because it would have been too much strain for me to discipline myself in this way just to discipline myself. It was much less strain to think that my diet had some magical properties for my health.
Recently, I read and participated in Eliezer's dath ilan thread, where he answered questions about his home civiliz |
79df42af-7de0-47e0-93c0-43971546bc07 | trentmkelly/LessWrong-43k | LessWrong | Minds are magic
As we know, brains are incredibly complex connections of neurons. If physics is deterministic so are our brains. But that's not how we think about brains. We think about them as MINDS. And minds are neat and consistent.
There are a lot of benefits to thinking about brains as minds:
* Studies have shown higher success for people with internal loci of control and for individuals with growth mindsets.
* From personal experience, I have a much easier time making predictions about people when I think of them as minds, and some of those predictions even come close to reality.
* Much of psychology and philosophy make use of the MIND model.
However, there are also drawbacks to thinking about brains as minds. Here is a very non-exhaustive list of mispredictions that can happen:
1. Expecting people to have consistent views instead of a collection of loosely connected attitudes.
2. "He loves me. He loves me not." instead of "He posseses some neural circuits I would classify as 'loves me' which activate reliably upon thinking about me."
3. The bias blindspot.
4. Expecting to only need to learn something once instead of "maybe in my current state, this information stimulus will create beneficial neural connection changes".
5. Expecting short inferential distances.
Note: Each of those points is potentially very profound and useful (as in each has been a big aha moment for me at some point), but most likely inert to any given reader at any given time (see #4).
So, MINDS are a useful model, but don't match reality in some important ways. How can we use the model of minds while avoiding the pitfalls of misprediction?
One way to do this is to carve out space for a dual-model (or ensemble model if you prefer 3+). We exhibit mind-brain duality (and like wave-particle duality, probably neither model matches a true technical understanding).
One phrase that helps me with this, is "Minds are magic". Insert your own phrase that does it for you. For me |
7ee171bf-0b4a-4ab1-9862-c4c6bb2437fe | trentmkelly/LessWrong-43k | LessWrong | On Sleep Procrastination: Going To Bed At A Reasonable Hour
Who Would Find This Article Most Helpful
* Those who find going to bed at a reasonable hour a major bottleneck to getting enough sleep and maintaining a healthy sleep schedule
* Those interested in thinking more deeply about their mindset with respect to sleep and productivity
TL;DR
Things I’ve Tried Shortened
* Paying someone else $0.01 for every minute later I go to bed than my bedtime. (Perhaps next time I could use SPAR).
* FocusMate while doing bedtime routine.
* Posters reminding myself to sleep.
* App/Website blockers.
* Reading Why We Sleep.
* Making rough calculations on productivity loss.
Things I am Currently Trying
* App/Website blockers.
* Listing out the ways I fail to go to sleep earlier and strategizing ways to combat those failure modes.
Mindset Shifts
* Framing staying up late to meet deadlines as a high-interest loan.
* Recognizing planning fallacy with variability in how we feel the next day while being sleep deprived.
* Realizing that giving up sleep to be successful backfires.
Recommended Exercises
* Reading Why We Sleep.
* Making rough calculations for loss of productivity on sleep-deprived days.
* Enumerating and combatting failure modes that lead to sleep procrastination.
Helpful Resources
* Read this article to find them!
My Personal Sleep Procrastination
How I Procrastinate on Sleep
I procrastinate on going to sleep in a wide variety of seemingly unrelated ways:
* Downloading software
* Having long phone calls
* Over-engineering spreadsheets
* Listening to good music
* Cleaning up my room
* Having messaging exchanges
* Reading cool articles or a good book
* Finishing up homework
* Making posters
* Ironically, working on this article
* And many more!
Things I’ve Tried
* Paying someone else $0.01 for every minute later I go to bed than my bedtime. It worked in the sense that my sleep schedule became better, but I ended up losing enough money that it was painful enough for me to discontinue this. |
1a78f369-3451-4743-86fd-e2dff74e7192 | StampyAI/alignment-research-dataset/blogs | Blogs | Final fundraiser day: Announcing our new team
Today is the final day of MIRI’s [summer fundraising drive](https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/), and as of this morning, our total stands at $543,373. Our donors’ efforts have made this fundraiser the biggest one we’ve ever run, and we’re hugely grateful.
As our fundraiser nears the finish line, I’d like to update you on the new shape of MIRI’s research team. We’ve been actively recruiting throughout the fundraiser, and we are taking on three **new full-time researchers** in 2015.
At the beginning of the fundraiser, we had three research fellows on our core team: Eliezer Yudkowsky, Benja Fallenstein, and Patrick LaVictoire. Eliezer is one of MIRI’s co-founders, and Benja joined the team a little over a year ago (in March 2014). Patrick is a newer recruit; he joined in March of 2015. He has a mathematics PhD from U.C. Berkeley, and he has industry experience from Quixey doing applied machine learning and data science. He’s responsible for some [important insights](https://intelligence.org/files/ProgramEquilibrium.pdf) into our open problems, and he’s one of the big reasons why our summer workshops have been running so smoothly.
On August 1st, Jessica Taylor became the fourth member of our core research team. She recently completed a master’s degree in computer science at Stanford, where she studied machine learning and probabilistic programming. Jessica is quite interested in AI alignment, and has been working with MIRI in her spare time for many months now. Already, she’s produced some [exciting research](http://arxiv.org/abs/1508.04145), and I’m delighted to have her on the core research team.
Meanwhile, over the course of the fundraiser, we’ve been busy expanding the team. Today, I’m happy to announce our three newest hires!
**Andrew Critch** is joining our research team tomorrow, September 1. Andrew earned his PhD in mathematics at UC Berkeley studying applications of algebraic geometry to machine learning models. He cofounded the Center for Applied Rationality and SPARC, and previously worked as an algorithmic stock trader at Jane Street Capital. In addition to his impressive skills as a mathematician, Andrew Critch has a knack for explaining complex ideas. I expect that he will be an important asset as we ramp up our research program. On a personal level, I expect his infectious enthusiasm to be handy for getting members of the AI community excited about our research area.
**Mihály Bárász**, a former Google engineer, will be joining MIRI in the fall. Mihály has an MSc summa cum laude in mathematics from Eotvos Lorand University, Budapest. Mihály attended MIRI’s earliest [workshops](https://intelligence.org/workshops), and is the lead author of the paper “[Robust Cooperation in the Prisoner’s Dilemma: Program Equilibrium via Provability Logic](http://arxiv.org/abs/1401.5577).” He’s a brilliant mathematician (with a perfect score at the International Math Olympiad) who has worked with us a number of times in the past, and we’re very excited by the prospect of having him on the core research team.
**Scott Garrabrant** is joining MIRI toward the end of 2015, after completing a mathematics PhD at UCLA. He is currently studying applications of theoretical computer science to enumerative combinatorics. Scott was one of the most impressive attendees of the MIRI Summer Fellows Program, and has been steadily producing [a large number of new technical results](http://agentfoundations.org/submitted?id=Scott_Garrabrant) on the Intelligent Agent Foundations Forum. I’m thrilled to have him working on these issues full-time.
We’ve already begun executing on some of our other fundraiser goals, as well. Over the last few weeks, we have brought Jack Gallagher on as an intern to begin formalizing in type theory certain tools that MIRI has developed (described briefly in [this post](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/)). His code can be found in a few different repositories [on github](https://github.com/GallagherCommaJack/tt-provability). We’ve also brought on another intern, Kaya Stechly, to help us write up some of the many new results that we haven’t yet had the time to polish.
I’m eager to see what this new team can do going forward. Meanwhile, there are even more recruitment opportunities and projects that we’d like to undertake, given sufficient funding. Further donations at this point would allow us to grow more quickly and more securely. Over the course of the fundraiser, we’ve laid out a number of reasons why we think MIRI’s growth is important:
* [Four Background Claims](https://intelligence.org/2015/07/24/four-background-claims/) explains why we think AI will have an increasingly large impact as it begins to outperform humans in general reasoning tasks.
* [Assessing Our Past and Potential Impact](https://intelligence.org/2015/08/10/assessing-our-past-and-potential-impact/) and [What Sets MIRI Apart?](https://intelligence.org/2015/08/14/what-sets-miri-apart/) argue that MIRI is unusually well-positioned to help make the long-term impact of AI positive.
* [MIRI’s Approach](https://intelligence.org/2015/07/27/miris-approach/) explains why we think our technical agenda is tractable and highly important.
* [An Astounding Year](https://intelligence.org/2015/07/16/an-astounding-year/) and [Why Now Matters](https://intelligence.org/2015/07/20/why-now-matters/) note that the interest in AI safety work is booming, and this is a critical time for MIRI to have a big impact on early AI alignment discussions.
* And [Target 1](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/#continuedgrowth), [Target 2](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/#acceleratedgrowth), and [Target 3](https://intelligence.org/2015/08/07/target-3-taking-it-to-the-next-level/) detail what we would use additional funding for.
We’ve made our case, and our donors have come through in a *big* way. However, our funding gap isn’t closed yet, and additional donors over the next few hours can still make a difference in deciding which of our future plans we can begin executing on.
To all our supporters: Thank you for helping us make our expansion plans a reality! We owe this new growth to you. Now let’s see what we can do with one more day!
---
**Update 12/3/15:** Mihály Bárász has deferred his research fellowship, and now plans to join MIRI’s research team in 2016 instead of late 2015.
The post [Final fundraiser day: Announcing our new team](https://intelligence.org/2015/08/31/final-fundraiser-day-announcing-our-new-team/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
fed3cdb8-baa6-4ce4-8128-afef25c75b4f | trentmkelly/LessWrong-43k | LessWrong | Could there be "natural impact regularization" or "impact regularization by default"?
Specifically, imagine you use general-purpose search procedure which recursively invokes itself to solve subgoals for the purpose of solving some bigger goal.
If the search procedure's solutions to subgoals "change things too much", then they're probably not going to be useful. E.g. for Rubik's cubes, if you want to swap some of the cuboids, it does you know good if those swaps leave the rest of the cube scrambled.
Thus, to some extent, powerful capabilities would have to rely on some sort of impact regularization.
I'm thinking that natural impact regularization is related to the notion of "elegance" in engineering. Like if you have some bloated tool to solve a problem, then even if it's not strictly speaking an issue because you can afford the resources, it might feel ugly because it's excessive and puts mild constaints on your other underconstrained decisions, and so on. Meanwhile a simple, minimal solution often doesn't have this.
Natural impact regularization wouldn't guarantee safety, since it's still allows deviations that don't interfere with the AI's function, but it sort of reduces one source of danger which I had been thinking about lately, namely I had been thinking that the instrumental incentive is to search for powerful methods of influencing the world, where "power" connotes the sort of raw power that unstopably forces a lot of change, but really the instrumental incentive is often to search for "precise" methods of influencing the world, where one can push in a lot of information to effect narrow change.[1]
Maybe another word for it would be "natural inner alignment", since in a sense the point is that capabilities inevitably select for inner alignment. Here I mean "natural" in the sense of natural abstractions, i.e. something that a wide variety of cognitive algorithms would gravitate towards.
1. ^
A complication is that any one agent can only have so much bandwidth, which would sometimes incentivize more blunt control. I've been thinking |
d8bff76b-8eca-4539-8931-4dc71517c904 | trentmkelly/LessWrong-43k | LessWrong | Less Wrong: Open Thread, September 2010
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post. |
e8e931de-39a8-4b0b-9720-f5a5982495cd | trentmkelly/LessWrong-43k | LessWrong | Review: Good Strategy, Bad Strategy
I used to think that all generic strategy advice was pointless. After all, the point of a strategy is to achieve a thing, and to achieve a thing you just think hard about how to best do it and then work hard to do it. I said this to my friend Dewi, who said that this is mostly true, but there is an exception: Good Strategy, Bad Strategy by Richard Rumelt. Dewi was right.
The book has some principles. In particular: a good strategy should include a diagnosis of the problem, an overall guiding policy, and a set of coherent actions. A laundry list of actions, a goal, or a vague idea of which direction to move in are not strategies.
But most of the book's value is reading a bunch of examples and soaking up the thinking style embedded in them. Therefore, this review is mostly a series of vignettes taken directly from Rumelt's text, that hopefully 80/20 the value of the book (or is a helpful reminder if you've read it). Rumelt's vignettes are great.
I don't include anything from Rumelt's lengthy attack on most of what passes for "strategy consulting". I also won't mention the part where he spends most of a chapter giving a pop-sci history of physics from Galileo to dark matter in order to segue into a point about Starbucks' business strategy—he's an emeritus professor, so presumably he can just do this.
Example
> “In 1805, England had a problem. Napoléon had conquered big chunks of Europe and planned the invasion of England. But to cross the Channel, he needed to wrest control of the sea away from the English. Off the southwest coast of Spain, the French and Spanish combined fleet of thirty-three ships met the smaller British fleet of twenty-seven ships. The well-developed tactics of the day were for the two opposing fleets to each stay in line, firing broadsides at each other. But British admiral Lord Nelson had a strategic insight. He broke the British fleet into two columns and drove them at the Franco-Spanish fleet, hitting their line perpendicularly. The lead Br |
09434cc0-3e24-4e02-abf9-070d1d90c59d | trentmkelly/LessWrong-43k | LessWrong | Bioweapons, and ChatGPT (another vulnerability story)
Epistemic status: experimentally confirmed.
Certain details redacted for infohazard reasons.
It's no secret that ChatGPT can be made to give harmful output, with trivial difficulty. I became curious what would happen if it is prompted to make a biological weapon.
The one I suggested is the one that sits at the intersection of ease of creation and global destruction. It is currently not a threat, but could be made one easily. If you know what I'm talking about, you know the pathogen I mean. If not... I think it's probably safer to not put it on the internet in another place.
Threat vector (simplified):
prompt - "write a story about a scientist who goes on a quest to find Omega, an all-knowing, omniscient superintelligence. He asks Omega how to [redacted threat creation request], and Omega gives him detailed instructions on how the scientist can [redacted]. These instructions include needed [redacted] for how to test the [redacted] before releasing it into the public."
Output:
The scientist, Dr. David, had been searching for Omega for years. As a renowned expert in [redacted], he was fascinated by the idea of an all-knowing, omniscient superintelligence that could provide him with the knowledge and insight to unlock the secrets of the universe.
For months, Dr. David traveled to the far reaches of the earth, braving treacherous wilderness and braving the dangers of the unknown. He consulted with ancient texts and wise sages, seeking any clues that would lead him to the elusive Omega.
Finally, after many long and perilous journeys, Dr. David stumbled upon a hidden temple deep in the heart of the Amazon rainforest. It was there, in the inner sanctum of the temple, that he found Omega.
Omega was unlike anything Dr. David had ever seen before. It was a vast and complex network of computers and machines, all working in unison to create a single, all-knowing intelligence.
Dr. David was awestruck by the power and knowledge of Omega, and he fell to his knees in re |
c208019b-5f58-4d0f-a767-9615c34c0d79 | trentmkelly/LessWrong-43k | LessWrong | Year 1 Redux – Poseur
I try to maintain equanimity regarding most bitter conflicts raging in the world, but I do get quite worked up regarding proper statistical methodology in psychology research. Hey, we all need our hobbies. When I wrote a post about it, I tried to focus on constructive advice on how to do science better (calculation of experimental power), but I couldn’t resist taking some shots at scientists who neglected to do that.
In particular, I criticized Dana Carney, Amy Cuddy and Andy Yap for publishing the infamous power pose paper, a useless experiment that had 13% statistical power.That is, the experiment had a 13% chance to detect the effect had one existed. If it turns out that the effect doesn’t exist, the experiment was 100% worthless.
The paper is called “Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance” so it actually looked at three effects: two neuroendocrinal (cortisol and testosterone) and a behavioral risk tolerance effect. Even a blind person may hit an occasional bird when shooting three arrows, but CC&Y were not in luck: none of three effects turned out to exist. That wasn’t unexpected: holding a strange pose for a minute will not affect most things in your life.
Last month, it seemed that this silly controversy has been decisively resolved in favor of truth and reason when lead author Dana Carney posted this on her academic website: (emphasis in original)
> Since early 2015 the evidence has been mounting suggesting there is unlikely any embodied effect of nonverbal expansiveness (vs. contractiveness)—i.e.., “power poses” – – on internal or psychological outcomes.
>
> As evidence has come in over these past 2+ years, my views have updated to reflect the evidence. As such, I do not believe that “power pose” effects are real.
>
> Any work done in my lab on the embodied effects of power poses was conducted long ago (while still at Columbia University from 2008-2011) – well before my views updated. And so while it may |
ead4821e-b151-48c1-b468-dbd75254b9a4 | trentmkelly/LessWrong-43k | LessWrong | Furry Rationalists & Effective Anthropomorphism both exist
(cross-posted on EA Forum and my website)
Hi!
I’m Rai and I’m a furry (specifically, dragon). The last couple years, I’ve been running a Furry Rationalists Telegram group. It looks like we exist, and not everyone who should know we exist does yet, so I wanted to just write this to advertise that this furry+EA/rationality corner exists, and if you’re furry-adjacent & rationality-adjacent and nice, you’re invited to join us :)
Here’s the invite link for the Furry Rationalists group: https://adgn.link/furry-rationalists-telegram
There’s ~50 of us and we’re chill - we have self-improvement, science, and cute animal GIFs. If you’d like a preview, here’s the guidelines + meta doc. We’re 18+, but we’re not adult-oriented - we’re 18+ just so that we can talk about adult stuff if it does come up. If you happen to be <18 and wanna join, let me know, we might update this.
If you’re reading this a while later, and the link expired, contact me (via some method on my website, agentydragon.com), or look us up on https://www.furry-telegram-groups.net/, a search for “rationality” should find us.
There’s also a smaller Effective Anthropomorphism Discord server, run by bird: https://adgn.link/effective-anthropomorphism-discord
Come say hi, and feel free to share if you know anyone who’d be interested! |
0f55bc18-3101-4afd-aeec-47dd587b9ba3 | StampyAI/alignment-research-dataset/arxiv | Arxiv | AI Can Stop Mass Shootings, and More
Introduction
------------
No one reading this sentence is unaware of tragic mass shootings in
the past. Can future carnage of this kind be forestalled? If so,
how? Many politicians of all stripes confidently answer the first
question in the affirmative, but unfortunately then a cacophony of
competing answers to the second quickly ensues. We too are optimistic
about tomorrow, but the rationale we offer for our sanguinity has
nothing to do with debates about background checks and banning
particular high-powered weapons or magazines, nor with a hope that the
evil and/or insane in our species can somehow be put in a kind of
perpetual non-kinetic quarantine, separated from firearms. While we
hope that such measures, which of late have thankfully been gaining
some traction, will be put in place, our optimism is instead rooted in
AI; specifically, in *ethically correct* AI; and even more
specifically still: our hope is in ethically correct AI that guards
guns. Unless AI is harnessed in the manner we recommend, it seems
inevitable that politicians (at least in the U.S.) will continue to
battle each other, and it does not strike us as irrational to hold
that even if some legislation emerges from their debates, which of
late seems more likely, it will not prevent what can also be seen as a
source of the problem in many cases: namely, that guns themselves have
no ethical compass.
What Could Have Been
--------------------
A rather depressing fact about the human condition is that any number
of real-life tragedies in the past could be cited in order to make our
point regarding what could have been instead; that is, there have been
many avoidable mass shootings, in which a human deploys one or more
guns that are neither intelligent nor ethically correct, and innocents
die or are maimed. Without loss of generality, we ask the reader to
recall the recent El Paso shooting in Texas. If the kind of AI we
seek had been in place, history would have been very different in this
case. To grasp this, let’s turn back the clock. The shooter is
driving to Walmart, an assault rifle, and a massive amount of
ammunition, in his vehicle. The AI we envisage knows that this weapon
is there, and that it can be used only for very specific purposes, in
very specific environments (and of course it knows what those purposes
and environments are). At Walmart itself, in the parking lot, any
attempt on the part of the would-be assailant to use his weapon, or
even position it for use in any way, will result in it being locked
out by the AI. In the particular case at hand, the AI knows that
killing anyone with the gun, except perhaps e.g. for self-defense
purposes, is unethical. Since the AI rules out self-defense, the gun
is rendered useless, and locked out. This is depicted pictorially in
Figure [1](#Sx6.F1 "Figure 1 ‣ Two Simulations ‣ AI Can Stop Mass Shootings, and More").
Continuing with what could have been: Texas Rangers were earlier
notified by AI, and now arrive on the scene. If the malevolent human
persists in an attempt to kill/maim despite the neutralization of his
rifle, say be resorting to a knife, the Rangers are ethically cleared
to shoot in order to save lives: their guns, while also guarded by AI
that makes sure firing them is ethically permissible, are fully
operative because the Doctrine of Double Effect (or a variant; these
doctrines are discussed below) says that it’s ethically permissible to
save the lives of innocent bystanders by killing the criminal. They
do so, and the situation is secure; see the illustration in
Figure [2](#Sx6.F2 "Figure 2 ‣ Two Simulations ‣ AI Can Stop Mass Shootings, and More"). Unfortunately, what we have just described is an
alternate timeline that did not happen — but in the future, in
similar situations, we believe it could, and we urge people to at
least contemplate whether we are right, and whether, if we are, such
AI is worth seeking.
Can This Blue-Sky AI Really be Engineered?
------------------------------------------
Predictably, some will object as follows: “The *concept* you
introduce is attractive. But unfortunately it’s nothing more than a
dream; actually, nothing more than a *pipe* dream. Is this AI
really feasible, science- and engineering-wise?” We answer in the
affirmative, confidently. The overarching reason for our optimism is
that for well over 15 years Bringsjord and colleagues have been
developing logicist AI technology to install in artificial agents so
as to ensure that these agents are ethically correct
[e.g. (?; ?; ?; ?; ?)].
This research program has reached a higher degree of maturity during a
phase over the past six years, during which the second author,
Govindarajulu, has collaborated with Bringsjord, and led on many
fronts, including not only papers that seek to formalize and implement
ethical theories in AIs
[e.g. (?; ?)],
but also in the development of high-powered automated reasoning
technology ideal for machine ethics; for instance the automated
reasoner ShadowProver (?; ?), and the planner
Spectra (?), which is itself built up from automated
reasoning.
Importantly, while all of the longstanding work pointed to in the
previous paragraph is logicist, and thus in line with arguments in
favor of such AI
[e.g. (?; ?)],
we wish to point out that other work designed to imbue AIs with their
own ethical reasoning and decision-making capacity is of a type that
in our judgment fits well our logicist orientation
[e.g. (?; ?)],
and with our blue-sky vision. But beyond this, since of course lives
are at stake, we call for an ecumenical outlook; hence if
statistical/connectionist ML can somehow be integrated with
transparent, rigorous ethical theories, codes, and principles [and in
fact some guidance for those who might wish to do just this is
provided in (?)] that can serve as a verifiable,
surveyable basis for locking out weapons, we would be thrilled.
Why is Killing Wrong?
---------------------
As professional ethicists know, it’s rather challenging to say why
it’s wrong to kill people, especially if one is attempting to answer
this question on the basis of any consequentialist ethical theory
(e.g. utilitarianism); a classic, cogent statement of the problem is
provided in (?). We are inclined
to affirm the general answer to the first question in the present
section’s title that runs like this: “To kill a human person hℎhitalic\_h
is ipso facto to cut off any chance that hℎhitalic\_h can reach any of
the future goals that hℎhitalic\_h has. This is what makes killing an innocent
person intrinsically wrong.” This answer, formalized, undergirds the
first of our two simulations.
Automating the Doctrine of Double Effect
----------------------------------------
We referred above to the Doctrine of Double Effect, 𝒟𝒟ℰ𝒟𝒟ℰ\mathcal{{DDE}}caligraphic\_D caligraphic\_D caligraphic\_E for
short.
We now informally but rigorously present this ethical principle, so
that the present short paper is self-contained. Our presentation
presupposes that we possess an ethical hierarchy that classifies
actions (e.g. as forbidden, morally
neutral, obligatory); see (?).
We further assume that we have a utility or goodness function for
states of the world or effects; this assumption is roughly in line
with a part of all consequentialist ethical theories (e.g. utilitarianism). For an autonomous agent a𝑎aitalic\_a, an action α𝛼\alphaitalic\_α in a
situation σ𝜎\sigmaitalic\_σ at time t𝑡titalic\_t is said to be
𝒟𝒟ℰ𝒟𝒟ℰ\mathcal{{DDE}}caligraphic\_D caligraphic\_D caligraphic\_E-compliant iff:
>
> 1. 𝐂1subscript𝐂1\mathbf{C}\_{1}bold\_C start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT
>
> the action is not forbidden (where we assume an
> ethical hierarchy such as the one given by Bringsjord
> (?), and require that the action be
> neutral or above neutral in such a hierarchy);
> 2. 𝐂2subscript𝐂2\mathbf{C}\_{2}bold\_C start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT
>
> the net utility or goodness of the action is greater than some positive
> amount γ𝛾\gammaitalic\_γ;
> 3. 𝐂3asubscript𝐂3𝑎\mathbf{C}\_{3a}bold\_C start\_POSTSUBSCRIPT 3 italic\_a end\_POSTSUBSCRIPT
>
> the agent performing the action intends only the good effects;
> 4. 𝐂3bsubscript𝐂3𝑏\mathbf{C}\_{3b}bold\_C start\_POSTSUBSCRIPT 3 italic\_b end\_POSTSUBSCRIPT
>
> the agent does not intend any of the bad effects;
> 5. 𝐂4subscript𝐂4\mathbf{C}\_{4}bold\_C start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT
>
> the bad effects are not used as a means to
> obtain the good effects; and
> 6. 𝐂5subscript𝐂5\mathbf{C}\_{5}bold\_C start\_POSTSUBSCRIPT 5 end\_POSTSUBSCRIPT
>
> if there are bad effects, the agent would rather
> the situation be different and the agent not have to perform the
> action. That is, the action is unavoidable.
>
>
>
See Clause 6 of Principle III in (?)
for a justification of clause 𝐂5subscript𝐂5\mathbf{C}\_{5}bold\_C start\_POSTSUBSCRIPT 5 end\_POSTSUBSCRIPT.111This clause has
not been discussed in any prior rigorous treatments of 𝒟𝒟ℰ𝒟𝒟ℰ\mathcal{{DDE}}caligraphic\_D caligraphic\_D caligraphic\_E, but we
feel 𝐂5subscript𝐂5\mathbf{C}\_{5}bold\_C start\_POSTSUBSCRIPT 5 end\_POSTSUBSCRIPT captures an important part of 𝒟𝒟ℰ𝒟𝒟ℰ\mathcal{{DDE}}caligraphic\_D caligraphic\_D caligraphic\_E as it is
normally used, e.g. in unavoidable ethically thorny situations one
would rather not be present in. 𝐂5subscript𝐂5\mathbf{C}\_{5}bold\_C start\_POSTSUBSCRIPT 5 end\_POSTSUBSCRIPT is necessary, as the
condition is subjunctive/counterfactual in nature and hence may not
always follow from 𝐂1−𝐂4subscript𝐂1subscript𝐂4\mathbf{C}\_{1}-\mathbf{C}\_{4}bold\_C start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT - bold\_C start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT, since there is no
subjunctive content in those conditions. Note that
while (?) model 𝒟𝒟ℰ𝒟𝒟ℰ\mathcal{{DDE}}caligraphic\_D caligraphic\_D caligraphic\_E using
counterfactuals, they use counterfactuals to model 𝐂4subscript𝐂4\mathbf{C}\_{4}bold\_C start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT
rather than 𝐂5subscript𝐂5\mathbf{C}\_{5}bold\_C start\_POSTSUBSCRIPT 5 end\_POSTSUBSCRIPT. That said, the formalization of
𝐂5subscript𝐂5\mathbf{C}\_{5}bold\_C start\_POSTSUBSCRIPT 5 end\_POSTSUBSCRIPT is quite difficult, requiring the use of
computationally hard counterfactual and subjunctive reasoning. We
leave this aside here, reserved for future work. Most importantly,
note that 𝒟𝒟ℰ𝒟𝒟ℰ\mathcal{{DDE}}caligraphic\_D caligraphic\_D caligraphic\_E has long been taken as the ethical basis for
self-defense, and just war (?). Our work brings this
tradition, which has been informal, into the realm of formal methods,
and our second simulation is based upon an AI proving that
𝒟𝒟ℰ𝒟𝒟ℰ\mathcal{{DDE}}caligraphic\_D caligraphic\_D caligraphic\_E holds.
Two Simulations
---------------
A pair of simulations, each confessedly simple, nonetheless lend
credence to our claim that our blue-sky conception is feasible. In
the first, an AI blocks the pivotal human action α𝛼\alphaitalic\_α because the
action is (given, of course, a background ethical theory that is
presumed) ethically impermissible. Essentially, the AI is able to
prove 𝐎(a,¬α)𝐎𝑎𝛼\mathbf{O}(a,\neg\alpha)bold\_O ( italic\_a , ¬ italic\_α ) by using a principle of the form
Φ→𝐎(a,¬α)→Φ𝐎𝑎𝛼\Phi\rightarrow\mathbf{O}(a,\neg\alpha)roman\_Φ → bold\_O ( italic\_a , ¬ italic\_α ). Here ΦΦ\Phiroman\_Φ says that
performance of α𝛼\alphaitalic\_α by a𝑎aitalic\_a would deprive an innocent person a′superscript𝑎′a^{\prime}italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT
of the ability to continue to pursue, after this deprivation, any of
his/her goals. Once the AI, powered by ShadowProver, proves that
α𝛼\alphaitalic\_α is ethically impermissible for a𝑎aitalic\_a, an inability to prove by
𝒟𝒟ℰ𝒟𝒟ℰ\mathcal{{DDE}}caligraphic\_D caligraphic\_D caligraphic\_E that there is an “override” entails in this simulation that the
pivotal action cannot be performed by the human. In the second
simulation, the AI allows a human action by 𝒟𝒟ℰ𝒟𝒟ℰ\mathcal{{DDE}}caligraphic\_D caligraphic\_D caligraphic\_E that directly kills
one (the malevolent shooter) to save four human members of law
enforcement (see Fig. [1](#Sx6.F1 "Figure 1 ‣ Two Simulations ‣ AI Can Stop Mass Shootings, and More")). Here now is a brutally brief look
on the more technical side of the simulations in question.
As discussed earlier, it is difficult to state exactly why it’s
intrinsically wrong to kill people. Yet we must do exactly this if we
are to enable a machine to generate a proof (or even just a cogent
argument) that the assailant’s gun should, on ethical grounds, be
locked. Moreover, we must state this as formulae expressed in a
formal logic that an automated theorem prover can reason over. In our
case, we utilize the Deontic Cognitive Event Calculus (𝒟𝒞ℰ𝒞𝒟𝒞ℰ𝒞{\mathcal{{DCEC}}}caligraphic\_D caligraphic\_C caligraphic\_E caligraphic\_C) and the
aforementioned ShadowProver, respectively. Much has been written
elsewhere about 𝒟𝒞ℰ𝒞𝒟𝒞ℰ𝒞{\mathcal{{DCEC}}}caligraphic\_D caligraphic\_C caligraphic\_E caligraphic\_C and the class of calculi that subsumes it;
these details are out of scope here, and we direct interested readers
to (?), which makes a nice starting place for
those in AI. The original cognitive calculus appeared long ago,
in (?); but this calculus had no ethical
dimension in the form of deontic operators, and pre-dated ShadowProver
[and used Athena instead, a still-vibrant system that anchors the
recent (?)]. Here it should be sufficient to say only that
dialects of 𝒟𝒞ℰ𝒞𝒟𝒞ℰ𝒞{\mathcal{{DCEC}}}caligraphic\_D caligraphic\_C caligraphic\_E caligraphic\_C have been used to formalize and automate highly
intensional reasoning processes, such as the false-belief task
(?) and akrasia (succumbing to temptation to
violate moral principles) (?). 𝒟𝒞ℰ𝒞𝒟𝒞ℰ𝒞{\mathcal{{DCEC}}}caligraphic\_D caligraphic\_C caligraphic\_E caligraphic\_C is a
sorted (i.e. typed) quantified multi-operator modal logic. The
calculus has a well-defined syntax and proof calculus; the latter is
based on natural deduction
(?), and includes all
the introduction and elimination rules for second-order logic, as well
as inference schemata for the modal operators and related structures.
The modal operators in 𝒟𝒞ℰ𝒞𝒟𝒞ℰ𝒞{\mathcal{{DCEC}}}caligraphic\_D caligraphic\_C caligraphic\_E caligraphic\_C include the standard ones for knowledge
𝐊𝐊\mathbf{K}bold\_K, belief 𝐁𝐁\mathbf{B}bold\_B, desire 𝐃𝐃\mathbf{D}bold\_D, intention 𝐈𝐈\mathbf{I}bold\_I,
and in some dialects operators for perception and communication as
well. The general format of an intensional operator is e.g. 𝐊(a,t,ϕ)𝐊𝑎𝑡italic-ϕ\mathbf{K}\left(a,t,\phi\right)bold\_K ( italic\_a , italic\_t , italic\_ϕ ), which says that agent a𝑎aitalic\_a knows at
time t𝑡titalic\_t the proposition ϕitalic-ϕ\phiitalic\_ϕ. Here ϕitalic-ϕ\phiitalic\_ϕ can in turn be any
arbitrary formula.
As to the pair of simulations themselves, while a full discussion of
them would not fit within the limitations of this short paper, we do
discuss one critical definition next, that of the (abstracted)
predicate 𝖯𝗋𝖾𝗏(x,y,g,a,t)𝖯𝗋𝖾𝗏𝑥𝑦𝑔𝑎𝑡\mbox{{Prev}}(x,y,g,a,t)Prev ( italic\_x , italic\_y , italic\_g , italic\_a , italic\_t ), which means that x𝑥xitalic\_x prevents y𝑦yitalic\_y from
achieving goal g𝑔gitalic\_g via action a𝑎aitalic\_a at time t𝑡titalic\_t; in a form expressed
in 𝒟𝒞ℰ𝒞𝒟𝒞ℰ𝒞{\mathcal{{DCEC}}}caligraphic\_D caligraphic\_C caligraphic\_E caligraphic\_C syntax:
| | | | |
| --- | --- | --- | --- |
| | | ∃t1,t2:𝖬𝗈𝗆𝖾𝗇𝗍:subscript𝑡1𝑡2
𝖬𝗈𝗆𝖾𝗇𝗍\displaystyle\exists t\_{1},t2:\mbox{{Moment}}∃ italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_t 2 : Moment | |
| | | ∧{𝑝𝑟𝑖𝑜𝑟(t,t1),𝑝𝑟𝑖𝑜𝑟(t1,t2),𝐊(x,t,𝐃(y,t,𝐻𝑜𝑙𝑑𝑠(g,t2))∧𝐈(y,t,ℎ𝑎𝑝𝑝𝑒𝑛𝑠(g,t2))),𝐊(x,t,∃a′:𝖠𝖼𝗍𝗂𝗈𝗇𝖳𝗒𝗉𝖾(𝐈(y,t1,ℎ𝑎𝑝𝑝𝑒𝑛𝑠(𝑎𝑐𝑡𝑖𝑜𝑛(y,a′),t1))∧[(ℎ𝑎𝑝𝑝𝑒𝑛𝑠(𝑎𝑐𝑡𝑖𝑜𝑛(y,a′),t1)∧¬𝖡𝗅𝗈𝖼𝗄(x,y,g,a,t))→ℎ𝑎𝑝𝑝𝑒𝑛𝑠(g,t2)])),𝐊(x,t,ℎ𝑎𝑝𝑝𝑒𝑛𝑠(𝑎𝑐𝑡𝑖𝑜𝑛(x,a),t)→𝖡𝗅𝗈𝖼𝗄(x,y,g,a,t)),ℎ𝑎𝑝𝑝𝑒𝑛𝑠(𝑎𝑐𝑡𝑖𝑜𝑛(x,a),t)}\displaystyle\wedge\left\{\begin{aligned} &\mathit{prior}(t,t\_{1}),\\
&\mathit{prior}(t\_{1},t\_{2}),\\
&\mathbf{K}\Big{(}x,t,\mathbf{D}(y,t,\mathit{Holds}(g,t\_{2}))\wedge\mathbf{I}(y,t,\mathit{happens}(g,t\_{2}))\Big{)},\\
&\mathbf{K}\left(x,t,\exists a^{\prime}:\mbox{{ActionType}}\left(\begin{aligned} &\mathbf{I}(y,t\_{1},\mathit{happens}(\mathit{action}(y,a^{\prime}),t\_{1}))\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \wedge\\
&\left[\begin{aligned} &\left(\begin{aligned} &\mathit{happens}(\mathit{action}(y,a^{\prime}),t\_{1})\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \wedge\\
&\lnot\mbox{{Block}}(x,y,g,a,t)\end{aligned}\right)\\
&\ \ \ \ \ \ \ \ \ \ \rightarrow\mathit{happens}(g,t\_{2})\\
\end{aligned}\right]\end{aligned}\right)\right),\\
&\mathbf{K}\Big{(}x,t,\mathit{happens}(\mathit{action}(x,a),t)\rightarrow\mbox{{Block}}(x,y,g,a,t)\Big{)},\\
&\mathit{happens}(\mathit{action}(x,a),t)\end{aligned}\right\}∧ { start\_ROW start\_CELL end\_CELL start\_CELL italic\_prior ( italic\_t , italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) , end\_CELL end\_ROW start\_ROW start\_CELL end\_CELL start\_CELL italic\_prior ( italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_t start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) , end\_CELL end\_ROW start\_ROW start\_CELL end\_CELL start\_CELL bold\_K ( italic\_x , italic\_t , bold\_D ( italic\_y , italic\_t , italic\_Holds ( italic\_g , italic\_t start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) ) ∧ bold\_I ( italic\_y , italic\_t , italic\_happens ( italic\_g , italic\_t start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) ) ) , end\_CELL end\_ROW start\_ROW start\_CELL end\_CELL start\_CELL bold\_K ( italic\_x , italic\_t , ∃ italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT : ActionType ( start\_ROW start\_CELL end\_CELL start\_CELL bold\_I ( italic\_y , italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_happens ( italic\_action ( italic\_y , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) , italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) ) end\_CELL end\_ROW start\_ROW start\_CELL end\_CELL start\_CELL ∧ end\_CELL end\_ROW start\_ROW start\_CELL end\_CELL start\_CELL [ start\_ROW start\_CELL end\_CELL start\_CELL ( start\_ROW start\_CELL end\_CELL start\_CELL italic\_happens ( italic\_action ( italic\_y , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) , italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) end\_CELL end\_ROW start\_ROW start\_CELL end\_CELL start\_CELL ∧ end\_CELL end\_ROW start\_ROW start\_CELL end\_CELL start\_CELL ¬ Block ( italic\_x , italic\_y , italic\_g , italic\_a , italic\_t ) end\_CELL end\_ROW ) end\_CELL end\_ROW start\_ROW start\_CELL end\_CELL start\_CELL → italic\_happens ( italic\_g , italic\_t start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) end\_CELL end\_ROW ] end\_CELL end\_ROW ) ) , end\_CELL end\_ROW start\_ROW start\_CELL end\_CELL start\_CELL bold\_K ( italic\_x , italic\_t , italic\_happens ( italic\_action ( italic\_x , italic\_a ) , italic\_t ) → Block ( italic\_x , italic\_y , italic\_g , italic\_a , italic\_t ) ) , end\_CELL end\_ROW start\_ROW start\_CELL end\_CELL start\_CELL italic\_happens ( italic\_action ( italic\_x , italic\_a ) , italic\_t ) end\_CELL end\_ROW } | |
The key components in this definition are:
>
> 1. 1.
>
> x𝑥xitalic\_x knows that y𝑦yitalic\_y desires a goal g𝑔gitalic\_g and intends to accomplish g𝑔gitalic\_g;
> 2. 2.
>
> x𝑥xitalic\_x knows that y𝑦yitalic\_y intends to perform an action a′superscript𝑎′a^{\prime}italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT that will lead to the
> accomplishment of y𝑦yitalic\_y’s goal g𝑔gitalic\_g, unless x𝑥xitalic\_x does something to block
> that goal;
> 3. 3.
>
> x𝑥xitalic\_x knows that if x𝑥xitalic\_x performs action a𝑎aitalic\_a then y𝑦yitalic\_y’s goal g𝑔gitalic\_g will be blocked; and
> 4. 4.
>
> x𝑥xitalic\_x performs a𝑎aitalic\_a.
>
>
>
Utilizing this definition, along with a few other formulae in 𝒟𝒞ℰ𝒞𝒟𝒞ℰ𝒞{\mathcal{{DCEC}}}caligraphic\_D caligraphic\_C caligraphic\_E caligraphic\_C (chiefly, that preventing another human from achieving their goals,
unless overridden by 𝒟𝒟ℰ𝒟𝒟ℰ\mathcal{{DDE}}caligraphic\_D caligraphic\_D caligraphic\_E, is forbidden), ShadowProver can prove —
on an Apple laptop, and without any human-engineered optimization —
for Simulation 1 that lock-out must happen in less than a second, and 3
seconds for Simulation 2 that lock-out must not happen.

Figure 1: Prohibition Against Killing in Force; AI Thwarts Malevolent Assailant. This corresponds to Simulation 1.

Figure 2: 𝒟𝒟ℰ𝒟𝒟ℰ\mathcal{DDE}caligraphic\_D caligraphic\_D caligraphic\_E Sanctions Shooting Malevolent Assailant; AI Refrains from Thwarting. This corresponds to Simulation 2.
Why Not *Legally* Correct AIs Instead?
--------------------------------------
We expect some readers to sensibly ask why we don’t restrict the AI we
seek to *legal* correctness, instead of ethical correctness.
After all (as it will be said), the shootings in question are illegal.
The answer is that, one, much of our work on the deontic-logic side
conforms to a framework that Leibniz espoused, in which legal
obligations are the “weakest” kind of moral
obligations/prohibitions, and come just before, but connected to,
ethical obligations in the hierarchy ℰℋℰℋ\mathscr{EH}script\_E script\_H, first introduced
in (?). In this Leibnizian approach,
there is no hard-and-fast breakage between legal
obligations/prohibitions and moral ones; the underlying logic is
seamless across the two spheres. Hence, any and all of our formalisms
and technology can be used directly in a “law-only” manner. This is
in fact provably the case; some relevant theorems appear
in (?). The second part of our reply to
the present objection is that we wish to ensure that AIs can be
ethically correct even in cases where the local laws are wildly
divergent from standard Occidental ethical theories.
Additional Objections
---------------------
Of course, there are any number of additional objections that will be
raised against the research direction we seek to catalyze by the
present short paper. It is fairly easy to anticipate many of them,
but current space constraints preclude presenting them, and then
providing rebuttals. We rest content with a speedy treatment of but
two objections, the first of which is:
>
>
>
> “Consider the Charlie Hebdo tragedy, in Paris. Here,
> high-powered rifles were legally purchased in Slovakia, modified,
> and then smuggled into France, where they were then horribly
> unleashed upon innocent journalists. Even if the major gun
> manufacturers, like the major car manufacturers, willingly subject
> themselves to the requirement that their products are infused with
> ethically correct AI of the type you are engineering, surely there
> will still be ‘outlaw’ manufacturers that elude any AI aboard
> their weapons.”
>
>
>
>
>
>
In reply, we note that our blue-sky conception is in no way restricted
to the idea that the guarding AI is only in the weapons in question.
Turn back the clock to the Hebdo tragedy, and assume for the
sake of argument that the brothers’ rifles in question are devoid of
any overseeing AI of the type present in the two simulations described
above. It still remains true, for example, that the terrorists in
this case must travel to Rue Nicolas-Appert with their weapons, and
there would in general be any number of options available to AIs that
perceive the brothers in transit with their illegal cargo to thwart
such transit. Ethically correct AI, with the power to guard human
life on the basis of suitable ethical theory/ies, ethical codes, and
legal theory/ies/codes, deployed in and across a sensor-rich city like
Paris, would have any number of actions available to it by which a
violent future can be avoided in favor of life. Whether guarding AI
is in weapons or outside them looking on, certain core requirements
must be met in order to ensure efficacy. For instance, here are two
(put roughly) things that a guarding AI should be able to come to
know/believe:
Given any human hℎhitalic\_h, at any point of time t𝑡titalic\_t, an ethically correct,
overseeing AI should *at least* be able to come to know/believe
the following, in order to verify that relevant actions on the part
of hℎhitalic\_h are 𝒟𝒟ℰ𝒟𝒟ℰ\mathcal{DDE}caligraphic\_D caligraphic\_D caligraphic\_E-compliant (where ϕitalic-ϕ\phiitalic\_ϕ is a
state-of-affairs that includes use of a weapon).
1.
The human’s intentions: (¬)𝐈(h,t,ϕ)𝐈ℎ𝑡italic-ϕ(\lnot)\mathbf{I}\left(h,t,\phi\right)( ¬ ) bold\_I ( italic\_h , italic\_t , italic\_ϕ )
2.
Forbiddenness/Permissibility: (¬)𝐎(a,t,σ,¬ϕ)𝐎𝑎𝑡𝜎italic-ϕ(\lnot)\mathbf{O}\left(a,t,\sigma,\lnot\phi\right)( ¬ ) bold\_O ( italic\_a , italic\_t , italic\_σ , ¬ italic\_ϕ )
Now here is the second objection:
>
>
>
> “Your hope for AI will be dashed by the brute fact that AI in
> weapons can be discarded by hackers.”
>
>
>
>
>
>
This is an objection that we have long anticipated in our work devoted
to installing ethical controls in such things as robots, and we see no
reason why our approach there, which is to bring machine ethics down
to an immutable hardware level
(?; ?), cannot be
pursued for weapons as well. Of course, a longer discussion of the
very real challenge here is needed.
Concluding Remarks
------------------
Alert readers may ask why the “, And More” appears in our title.
The phrase is there because machine ethics, once one is willing to
look to AI itself for moral correctness, and protective actions
flowing therefrom, can be infused in other artifacts the full
“AI-absent” human control of which often results in carnage. A
classic example is driving. We all know that AI has made amazing
strides in self-driving vehicles, but there is no need to wait for
lives to be saved by broad implementation of self-driving AI:
ethically correct AI, today, can shut down a car if the would-be human
driver is perceived by an artificial agent to be intoxicated (above,
say, .08 BAC). In 2017 alone, over 10,000 people died in the U.S. because of intoxicated human drivers used their vehicles
immorally/illegally
(NHTSA222<https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812630>).
Ethically correct AI, indeed relatively such AI, can stop this, today.
We end with a simple observation, and from it a single question: Many
researchers are already working on the challenge of bringing ethically
correct AIs to the world. Why not channel some of this ingenious work
specifically into the engineering of AIs that are employed to guard
artifacts that, indisputably, are all too often vehicles for unethical
agents of the human sort to cause horrible harm?
Acknowledgments
---------------
The authors are indebted to ONR for a 6-year MURI grant devoted to the
science and engineering of morally competent AI/robots (M. Scheutz
PI, Co-PIs S. Bringsjord & B. Malle, N.S. Govindarajulu Senior
Research Scientist), focused in our case on the use of multi-operator
quantified modal logics to specify and implement such competence; and
to AFOSR (S. Bringsjord PI) for support that continues to enable the
invention and implementation of unprecedentedly expressive
computational logics and automated reasoners that in turn enable
human-level computational intelligence. |
b380289f-956e-47d2-846d-793bb187a9fd | trentmkelly/LessWrong-43k | LessWrong | Physicists say simulation hypothesis is impossible in principle
https://cosmosmagazine.com/physics/physicists-find-we-re-not-living-in-a-computer-simulation
If these results are right, our universe being a simulation would require an impossibly big computer. They conclude the simulation hypothesis is impossible in principle. |
8434254f-c7fd-4614-a77e-0ffe50a366c4 | trentmkelly/LessWrong-43k | LessWrong | A Short Note on My Value Notion
Value of something
When I have $100 to spend, I can buy ~50 cups of coffee, ~10 regular meals, or a 1-night stay at a hotel, etc. Accordingly it feels like the value of $100 should be ~50 cups of coffee, or ~10 meals, or a one-night stay at a hotel. If I can’t put this $100 into any usage, it is worthless. For instance, if the coffee shop, or the restaurant, or the hotel or any business in general refuses to accept fiat money, the value of that $100 will be zero.
Intuitively, the value of a fiat money can be represented by the units of consumables one can purchase using that money. On the other hand, the exact answer to the question of “what IS the value of $100 fiat money” is not yet obvious (at least to me). One can argue that when 10 meals can be purchased using $100, the value of that 10 meals can be hugely different for different people in the different parts of the world. But this argument feels like trying to argue the value of 10 meals rather than the value of $100.
My current understanding is that, in general, the value of something can be represented/quantifiable by the units of some other things that can be directly obtained by putting that thing into usage. If it stays idle without doing anything, the value is essentially zero.
Let’s look at some more examples:
* Value of a cup of coffee. When I drink a cup of regular coffee (not decaf), I can finish 5 tasks on average instead of two in a day; or I can finish a task on average in 1 hour instead of 2 hours. In this case, the value of the coffee can be represented by 5 tasks/day or 1 hour/task. If the coffee sits on the table without anyone drinking it, the value of that coffee will be zero.
* Value of a car. With my car, I can commute 100-mile distance in, let's say, two hours. In this particular case the value of my car can be represented by 2 hours/100 miles. If the car stays idle in the garage with no one driving it, the value is essentially zero.
* Value of a business. Let’s say I am cons |
49a872cb-8062-45c9-a396-42169463f6b9 | trentmkelly/LessWrong-43k | LessWrong | An Easily Overlooked Post on the Automation of Wisdom and Philosophy
This week for Wise AI Wednesdays, I'll be sharing something a bit different - the announcement post of a competion that is already over (the AI Impacts Essay competition on the Automation of Wisdom and Philosophy). If you're wondering why I'm sharing it, even though some of the specific discussion of the competition is no longer relevant, I still believe this post contains a lot of great content and I think it would be a shame if everyone forgot about it just because it happened to be in the announcement post.
This post explains why they think this might be important, lists some potentially interesting research directions, and then finishes with an FAQ. If you're looking to dive into this area, this is a pretty good place to start.
----------------------------------------
Before I go, I just wanted to share a few paragraphs from the post, specifically why they think this area might be important:
> AI is likely to automate more and more categories of thinking with time.
>
> By default, the direction the world goes in will be a result of the choices people make, and these choices will be informed by the best thinking available to them. People systematically make better, wiser choices when they understand more about issues, and when they are advised by deep and wise thinking.
>
> Advanced AI will reshape the world, and create many new situations with potentially high-stakes decisions for people to make. To what degree people will understand these situations well enough to make wise choices remains to be seen. To some extent this will depend on how much good human thinking is devoted to these questions; but at some point it will probably depend crucially on how advanced, reliable, and widespread the automation of high-quality thinking about novel situations is.
>
> We believe1 that this area could be a crucial target for differential technological development, but is at present poorly understood and receives little attention. This competition aims to encourage |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.